PhD defence: Kaifeng Zou
Team: IMAGeS
Date & time: Thursday, September 28th, at 14:00, at ICube, Illkirch (Room A207)
Title: "Advancements in Generative Models: Enhancing Interpretability and Control of Complex Data through Disentanglement and Conditional Generation"
Abstract: Generative models are a class of machine learning models that aim to learn the underlying distribution of a dataset and to generate new data points resembling the original data. They have attracted significant attention in recent years for their ability to produce realistic and diverse samples. Models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Energy-Based Models (EBMs), and diffusion models have shown great promise in many fields, including image generation, speech synthesis, and natural language processing, and they remain an active research area in which new models and techniques continue to improve performance and broaden applications. One important application of generative models is disentangled representation learning, in which the underlying factors or attributes of the data are learned and represented independently. In our research, we use disentangled representations to tackle the problem of sex determination and to provide insight into the classification results: by generating hip bones of both sexes for the same individual, we can compare them and identify sex-related differences. We also aim to capture a high-level factor and its attributes by learning the associated representation, which allows us to control label-related characteristics effectively. To this end, we introduce two novel VAE frameworks that learn the label-associated representation while simultaneously improving the VAE's generation quality. Finally, our research contributes to conditional generation: we apply a diffusion model to sequential data and show that it can generate 3D facial expressions, which are time series. The reverse (denoising) process offers remarkable flexibility, enabling various types of conditioning and generation with a single training procedure.
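For readers unfamiliar with label-conditioned generation, the sketch below illustrates the general idea of a conditional VAE in PyTorch: the label is concatenated to both the encoder input and the latent code, so label-related characteristics can be steered at generation time while the latent variable captures the remaining variation. This is a minimal sketch under illustrative assumptions (class name, layer sizes, one-hot labels, flattened inputs); it is not the architecture or the two VAE frameworks proposed in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE: the label y is fed to both encoder and decoder."""
    def __init__(self, x_dim=784, y_dim=2, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x, y):
        h = self.enc(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample z while keeping gradients w.r.t. mu, logvar
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.dec(torch.cat([z, y], dim=-1))
        return x_hat, mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Negative ELBO: reconstruction term + KL divergence to the standard normal prior
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy usage: reconstruct a batch, or swap y at decoding time to generate the
# "same" sample under the other label (the counterfactual comparison idea).
x = torch.rand(8, 784)                                    # toy inputs in [0, 1]
y = F.one_hot(torch.randint(0, 2, (8,)), 2).float()      # toy binary labels
model = ConditionalVAE()
x_hat, mu, logvar = model(x, y)
loss = elbo_loss(x, x_hat, mu, logvar)
```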
The jury will be composed of: