On November 21, 2025, the DeepILIA team proudly celebrates a major milestone: Jean-Sébastien Lerat successfully defended his PhD thesis at UMONS. His work tackles a challenge that is becoming central to modern AI systems: how to train and deploy deep learning models efficiently across heterogeneous computing infrastructures, from single machines to multi-node clusters, cloud resources, and edge/industrial environments.

Jean-Sébastien’s thesis introduces Auto-DIST, a framework aimed at simplifying and automating distributed AI workflows for computer vision and Industry 4.0. The goal is clear: make high-performance deep learning scalable, cost-aware, and easier to deploy in real industrial settings where resources can be limited, diverse, or dynamically allocated.

A key foundation of this thesis is Jean-Sébastien’s systematic study of how deep learning frameworks behave locally before scaling them globally. In his comparative work on single-node training, he analyzed major frameworks (such as PyTorch, TensorFlow, MXNet, Paddle, and SINGA) under different CNN complexities and dataset sizes, tracking not only execution time but also CPU, RAM, and GPU/CUDA utilization. This study highlighted that local performance strategies strongly affect end-to-end efficiency, and that some frameworks manage GPU workloads far more effectively than others depending on the use case. (Paper: Single node deep learning frameworks: Comparative study and CPU/GPU performance analysis https://orbi.umons.ac.be/bitstream/20.500.12907/48131/1/Lerat.pdf)
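As a rough illustration of the kind of per-run measurement such a benchmark relies on, the sketch below times a dummy CPU-bound workload and samples process CPU time and peak resident memory using only Python's standard library. The workload and metric names are placeholders, not the thesis benchmark; GPU/CUDA utilization would additionally require vendor tooling (e.g. querying nvidia-smi), which is omitted here:

```python
import time
import resource  # Unix-only stdlib module for process resource usage


def dummy_training_step(size=200_000):
    # Stand-in for one training iteration: some CPU-bound arithmetic.
    return sum(i * i for i in range(size))


def profile(steps=5):
    """Run a few steps and report wall time, CPU time, and peak RSS."""
    wall_start = time.perf_counter()
    for _ in range(steps):
        dummy_training_step()
    wall = time.perf_counter() - wall_start
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "wall_s": wall,
        "cpu_s": usage.ru_utime + usage.ru_stime,   # user + system CPU time
        "peak_rss": usage.ru_maxrss,                # KiB on Linux, bytes on macOS
    }


metrics = profile()
print(metrics)
```

Comparing such traces across frameworks and model sizes is what lets one say which framework uses the local hardware most effectively for a given use case.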
Building on this, his research moved naturally to multi-node distributed deep learning. He proposed an empirical approach where local parallelism (multi-threading/multi-processing) is optimized first, and then used as a leverage point to speed up distributed training. Experiments showed that this local-first strategy significantly impacts global speedup, and can even outperform widely used baselines such as Horovod in some settings. (Paper: Distributed Deep Learning: From Single-Node to Multi-Node Architecture https://orbi.umons.ac.be/bitstream/20.500.12907/42655/1/_electronics-11-01525.pdf)
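A toy sketch of the local-first, data-parallel idea: shard a batch across local worker processes, compute per-shard gradients, then average them, the same reduction a synchronous all-reduce performs across machines. Everything here (the scalar model, the quadratic loss, the helper names) is illustrative and not the thesis implementation or Horovod's API:

```python
from multiprocessing import Pool

# Toy model: scalar parameter W, per-sample loss (W*x - y)^2,
# so the gradient w.r.t. W is 2*x*(W*x - y).
W = 3.0


def shard_gradient(shard):
    # Average gradient over one local shard of (x, y) samples.
    grads = [2 * x * (W * x - y) for x, y in shard]
    return sum(grads) / len(grads)


def parallel_gradient(data, workers=2):
    """Local-first parallelism: each process handles one shard, then the
    per-shard gradients are averaged -- the same reduction a multi-node
    all-reduce would apply across machines."""
    shards = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        shard_grads = pool.map(shard_gradient, shards)
    return sum(shard_grads) / len(shard_grads)


data = [(x, 2.0 * x) for x in range(1, 9)]  # samples with true slope 2
g = parallel_gradient(data)
print(g)  # matches the gradient computed over the whole batch
```

The point of tuning this local stage first is that the per-node throughput it achieves bounds what any multi-node setup built on top of it can deliver.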
Jean-Sébastien also focused on the real constraints of Industry 4.0 deployments, especially the need to scale training without requiring expensive GPU clusters. In his CloudTech work, he proposed an architecture that uses on-demand cloud virtual machines and lightweight Docker containers to distribute deep learning training over CPUs, reducing deployment and infrastructure costs while keeping training scalable. The approach relies on minimal container images (only the required software stack) to accelerate distribution and replication across VMs and edge nodes. (Paper: Architecture to Distribute Deep Learning Models on Containers and Virtual Machines for Industry 4.0 https://orbi.umons.ac.be/bitstream/20.500.12907/47981/1/_CloudTech23_paper_8309.pdf)
The team warmly congratulates Jean-Sébastien on this achievement and looks forward to seeing Auto-DIST and its ideas inspire future edge-to-cloud AI deployments in computer vision and industrial environments. 🎓✨