Diffusion Models, Conditional Diffusion Models

Generative Models, Deep Representation Learning, E9-333, ADRL, IISc, 2022

Diffusion models take data from a distribution and gradually add Gaussian noise until the data is mapped to an isotropic Gaussian. For sufficiently small noise steps, the reverse process is also Markov with Gaussian transitions. This lets us train a model for the backward (denoising) process: starting from isotropic Gaussian noise, it runs a sequence of denoising steps (closely related to annealed Langevin dynamics) to generate samples from the training distribution.
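
To make the forward/backward description concrete, here is a minimal DDPM-style sketch in PyTorch. The linear beta schedule, `T = 1000`, and the trained noise-prediction network `eps_model` are illustrative assumptions, not the exact setup used in the code linked below.

```python
# Minimal sketch of the forward (noising) and reverse (denoising) steps.
# `eps_model` is a hypothetical trained network that predicts the noise
# added at step t; the schedule values here are illustrative.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # \bar{alpha}_t = prod_{s<=t} alpha_s

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0): add Gaussian noise in closed form."""
    eps = torch.randn_like(x0)
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1 - ab).sqrt() * eps, eps

@torch.no_grad()
def reverse_sample(eps_model, shape):
    """Start from isotropic Gaussian noise and run T denoising steps."""
    x = torch.randn(shape)                  # x_T ~ N(0, I)
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        eps_hat = eps_model(x, torch.tensor([t]))
        coef = betas[t] / (1 - alpha_bars[t]).sqrt()
        x = (x - coef * eps_hat) / alphas[t].sqrt() + betas[t].sqrt() * z
    return x
```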

You can find the code here.