- Discriminative Model: Learns the conditional probability $p(y \mid x)$. It focuses on predicting a label $y$ given the input data $x$.
- Generative Model: Learns the probability distribution of the data itself, $p(x)$.
- Conditional Generative Model: Learns the probability $p(x \mid y)$, which is the likelihood of observing specific data $x$ given a certain label $y$.
Density Functions
- Likelihood: A density function $p(x)$ assigns a non-negative number to each possible $x$; a higher number indicates that the specific value of $x$ is more likely.
- Normalization: Density functions must be normalized so that the total probability across all possible values of $x$ equals 1, expressed as $\int p(x)\,dx = 1$.
- Competition: Because the total probability must equal 1, different values of $x$ "compete" for density within the distribution.
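As a sanity check, the normalization constraint can be verified numerically for a known density; the sketch below (numpy, standard normal) is illustrative only:

```python
import numpy as np

# Numerical check that a Gaussian density is normalized: the total
# probability over (a wide slice of) the real line should be ~1.
xs = np.linspace(-10.0, 10.0, 10001)
density = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)  # standard normal p(x)
total = np.sum(density) * (xs[1] - xs[0])          # Riemann sum ~ integral
print(round(float(total), 6))                      # close to 1.0
```

Raising the density at one $x$ while keeping this sum at 1 necessarily lowers it elsewhere, which is the "competition" described above.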
With generative models, all possible images compete for probability mass. The model can "reject" unreasonable inputs by assigning them small probability mass.
Bayes' Theorem:
$$p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}$$
This means that with two of the three models, we can recover the third one.
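A toy discrete example of this relationship; the probability tables below are invented purely to show the arithmetic of recovering $p(x \mid y)$ from $p(y \mid x)$ and $p(x)$:

```python
import numpy as np

# Given a generative model p(x) and a discriminative model p(y|x),
# recover the conditional generative model p(x|y) via Bayes' theorem.
p_x = np.array([0.5, 0.3, 0.2])            # p(x) over 3 inputs
p_y_given_x = np.array([[0.9, 0.1],        # rows: x, cols: y
                        [0.4, 0.6],
                        [0.2, 0.8]])
p_y = p_x @ p_y_given_x                    # marginal p(y) = sum_x p(y|x) p(x)
p_x_given_y = (p_y_given_x * p_x[:, None]) / p_y  # Bayes' theorem
print(p_x_given_y[:, 0])                   # distribution over x given y = 0
```

Each column of `p_x_given_y` sums to 1, confirming it is a valid conditional distribution.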
Autoregressive Models
Maximum Likelihood Estimation (MLE): The core objective is to find the best parameters $\theta$ for our function $f_\theta$ so that it accurately models the true distribution of our data.
To train the model using a dataset $\{x^{(i)}\}_{i=1}^{N}$, we solve for the optimal weights through these steps:
We start by trying to maximize the joint probability of all training data points. This is represented as the product of individual probabilities: $\theta^* = \arg\max_\theta \prod_{i=1}^{N} p_\theta(x^{(i)})$.
Since multiplying many small probabilities can lead to numerical instability (underflow), we apply a logarithm. Because the log is a monotonic function, maximizing the log-likelihood is equivalent to maximizing the likelihood, but it allows us to swap the product for a sum: $\theta^* = \arg\max_\theta \sum_{i=1}^{N} \log p_\theta(x^{(i)})$.
By substituting our model function $f_\theta$ for $p_\theta$, we arrive at our final objective: $\theta^* = \arg\max_\theta \sum_{i=1}^{N} \log f_\theta(x^{(i)})$.
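A minimal sketch of MLE in practice, assuming a Gaussian model with known unit variance (for which the optimum is the sample mean), fit by gradient ascent on the log-likelihood:

```python
import numpy as np

# MLE sketch: fit the mean of a Gaussian (fixed unit variance) to data
# by maximizing sum_i log N(x_i | theta, 1). The closed-form answer is
# the sample mean, so gradient ascent should recover it.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.5, scale=1.0, size=1000)

theta = 0.0                       # parameter: the mean
for _ in range(200):
    grad = np.sum(data - theta)   # d/dtheta of the log-likelihood
    theta += 1e-4 * grad          # gradient ascent step
print(round(theta, 3))            # converges to data.mean()
```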
Autoregressive models assume that the data is not a single static point, but a sequence of components: $x = (x_1, x_2, \ldots, x_T)$.
An autoregressive model predicts the “next” part of the data based on what has already been generated.
It iterates through the elements, determining the probability of the next element given all previous elements.
To calculate the probability of the entire sequence $x$, these models decompose the joint probability using the chain rule. Instead of looking at the whole sequence at once, the model breaks it down into a series of conditional probabilities:
$$p(x) = p(x_1)\,p(x_2 \mid x_1)\,p(x_3 \mid x_1, x_2)\cdots = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})$$
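A toy illustration of the chain-rule factorization, using an invented bigram transition table rather than a learned model:

```python
import numpy as np

# Score a sequence with a toy bigram model, p(x) = prod_t p(x_t | x_{t-1}),
# summing log-probabilities for numerical stability.
vocab = ["<s>", "a", "b"]
trans = np.array([[0.0, 0.7, 0.3],   # p(next | "<s>")
                  [0.0, 0.4, 0.6],   # p(next | "a")
                  [0.0, 0.5, 0.5]])  # p(next | "b")

def sequence_log_prob(tokens):
    """Sum log p(x_t | x_{t-1}) over the sequence (chain rule)."""
    idx = [vocab.index(t) for t in tokens]
    return sum(np.log(trans[i, j]) for i, j in zip(idx[:-1], idx[1:]))

lp = sequence_log_prob(["<s>", "a", "b", "b"])
print(round(float(np.exp(lp)), 4))  # 0.7 * 0.6 * 0.5 = 0.21
```

A full autoregressive model replaces the fixed table with a network conditioned on all previous elements, but the decomposition is the same.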
Variational Autoencoders (VAE)
Variational Autoencoders (VAE) define an intractable density $p_\theta(x)$ that we cannot explicitly compute or optimize. Instead, we can derive and optimize its lower bound.
The autoencoder is an unsupervised method for learning to extract features from inputs $x$, without labels.
It trains an encoder and a decoder: the encoder produces an intermediate representation $z$ of the input, and the decoder reconstructs the input data from it. The intermediate representation has lower dimensionality than the input.
The loss function is the L2 distance between the input and the reconstructed data.
After training, we can use the encoder for downstream tasks.
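A minimal sketch of the autoencoder computation, using untrained random linear maps purely to show the shapes and the L2 loss:

```python
import numpy as np

# Minimal linear autoencoder sketch (no training loop): the encoder maps
# x to a lower-dimensional code z, the decoder maps z back, and the loss
# is the squared L2 distance between x and its reconstruction.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 4))   # encoder: 4-dim input -> 2-dim code
W_dec = rng.normal(size=(4, 2))   # decoder: 2-dim code -> 4-dim output

x = rng.normal(size=4)
z = W_enc @ x                     # lower-dimensional representation
x_hat = W_dec @ z                 # reconstruction
loss = np.sum((x - x_hat) ** 2)   # L2 reconstruction loss
print(z.shape, loss >= 0.0)
```

Training would update `W_enc` and `W_dec` to minimize this loss over a dataset; real autoencoders also use nonlinearities and deeper networks.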
For generative tasks, we can force all $z$ to come from a known distribution so that we can sample $z$ and generate outputs.
We assume the latent factor $z$ conforms to a Gaussian distribution. We train the model with maximum likelihood.
We apply Bayes' rule. Since we cannot compute the posterior $p_\theta(z \mid x)$, we train another network that learns an approximation $q_\phi(z \mid x)$.
For VAE, we train two networks.
The Encoder Network ($q_\phi(z \mid x)$): takes input data $x$ and maps it to a distribution over latent codes $z$. Instead of a single point, it outputs the parameters of a (typically Gaussian) distribution: a mean $\mu_{z \mid x}$ and a variance $\Sigma_{z \mid x}$.
The Decoder Network ($p_\theta(x \mid z)$): takes a latent code $z$ and reconstructs the data $x$. It outputs a distribution over the data, defined by its own mean $\mu_{x \mid z}$ and variance $\Sigma_{x \mid z}$.
The fundamental challenge in VAEs is that the true posterior $p_\theta(z \mid x)$ is intractable. To solve this, VAEs use Variational Inference:
- Approximation: We train the encoder distribution $q_\phi(z \mid x)$ to be approximately equal to the true posterior $p_\theta(z \mid x)$.
- Estimation: If this approximation holds, we can lower-bound the data likelihood using the following relationship: $\log p_\theta(x) \ge \mathbb{E}_{z \sim q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - D_{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right)$
- Joint Training: We jointly train both the encoder and the decoder to maximize this lower bound on the evidence.
ELBO
Building on the previous concepts, this derivation explains how we can optimize the log-likelihood of our data, $\log p_\theta(x)$, even when the true posterior is unknown.
The derivation uses the properties of logarithms and expectations to decompose the log-likelihood into three distinct terms:
$$\log p_\theta(x) = \mathbb{E}_{z \sim q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - D_{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right) + D_{KL}\!\left(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\right)$$
- The Reconstruction Term ($\mathbb{E}_{z \sim q_\phi(z \mid x)}[\log p_\theta(x \mid z)]$):
- This measures how well the decoder can reconstruct the original input from the latent code sampled from the encoder.
- In training, we want to maximize this to ensure the model retains data fidelity.
- The Prior Regularization Term ($D_{KL}(q_\phi(z \mid x) \,\|\, p(z))$):
- This is the Kullback-Leibler (KL) divergence between the encoder’s distribution and a simple prior (usually a standard normal distribution).
- It acts as a regularizer, forcing the latent space to be continuous and well-structured. We subtract this term, meaning we want the encoder to stay close to our prior.
- The Approximation Error ($D_{KL}(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x))$):
- This measures the divergence between our approximate posterior (the encoder) and the true, intractable posterior.
- Because KL divergence is always $\ge 0$, the first two terms combined form a lower bound on the total log-likelihood.
Since we cannot calculate the third term (it involves the intractable true posterior), we drop it and maximize the first two terms, which together form the Evidence Lower Bound (ELBO). Because the dropped KL term is non-negative, the ELBO is a valid lower bound on the log-likelihood; by maximizing it, we simultaneously improve the quality of our data generation and the structure of our latent representation.
To train, we first run the input data $x$ through the encoder to get a distribution over $z$, which should match the unit Gaussian prior. We sample $z$ from this distribution and run it through the decoder to get a predicted data mean $\mu_{x \mid z}$, which should match $x$ under an L2 loss.
The reconstruction loss wants $\mu_{z \mid x}$ and $\Sigma_{z \mid x}$ to be distinctive for each $x$, so the decoder can deterministically reconstruct $x$.
The prior loss wants $\mu_{z \mid x} = 0$ and $\Sigma_{z \mid x} = I$, so the encoder output is always a unit Gaussian.
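The two losses can be sketched in numpy as follows; the encoder outputs and the decoder here are placeholders, and the KL term uses the standard closed form for a diagonal Gaussian against a unit Gaussian prior:

```python
import numpy as np

# One VAE training-loss evaluation. The "encoder" outputs (mu, log_var)
# of a diagonal Gaussian q(z|x); we sample z with the reparameterization
# trick, "decode" it with a random linear map, and add the closed-form
# KL divergence to a unit Gaussian.
rng = np.random.default_rng(0)
x = rng.normal(size=4)

mu, log_var = np.full(2, 0.1), np.full(2, -0.2)   # pretend encoder output
eps = rng.normal(size=2)
z = mu + np.exp(0.5 * log_var) * eps              # reparameterization trick
x_hat = rng.normal(size=(4, 2)) @ z               # pretend decoder mean

recon = np.sum((x - x_hat) ** 2)                  # reconstruction loss (L2)
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # KL(q || N(0, I))
loss = recon + kl                                 # negative ELBO (up to constants)
print(kl >= 0.0, loss >= 0.0)
```

The reparameterization trick ($z = \mu + \sigma \cdot \epsilon$) is what lets gradients flow through the sampling step to the encoder.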
Generative Adversarial Networks
We no longer model $p(x)$ explicitly. We only draw samples from it.
Generative Adversarial Networks: Have data drawn from a distribution $p_{data}(x)$. Want to sample from $p_{data}$.
We introduce a latent variable $z$ with a simple prior $p(z)$ (e.g. unit Gaussian). Sample $z \sim p(z)$ and pass it to a Generator Network $G$ to get $x = G(z)$. Then $x$ is a sample from the generator distribution $p_G$.
Train the Generator Network $G$ to convert $z$ into fake data $x$ sampled from $p_G$, fooling the discriminator $D$.
Train the Discriminator Network $D$ to classify data as real or fake.
We jointly train $G$ and $D$ with a minimax game, hoping to converge to $p_G = p_{data}$:
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\bigl(1 - D(G(z))\bigr)\right]$$
The generator wants $D(G(z)) = 1$. In practice, instead of training the generator to minimize $\log(1 - D(G(z)))$, we train it to maximize $\log D(G(z))$, so the generator gets strong gradients at the start of training.
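A sketch of the two objectives on placeholder discriminator outputs (not a trained model), contrasting the saturating and non-saturating generator losses:

```python
import numpy as np

# GAN loss sketch on scalar discriminator outputs. D(real) and D(fake)
# are probabilities in (0, 1); the values here are placeholders.
d_real, d_fake = 0.9, 0.2        # pretend discriminator outputs

disc_loss = -(np.log(d_real) + np.log(1.0 - d_fake))  # D maximizes both terms
gen_loss_saturating = np.log(1.0 - d_fake)            # original minimax form
gen_loss = -np.log(d_fake)                            # non-saturating trick
print(round(float(disc_loss), 3), round(float(gen_loss), 3))
```

When the discriminator easily rejects early fakes ($D(G(z)) \approx 0$), the saturating form has a tiny gradient while $-\log D(G(z))$ stays steep, which is why the trick helps at the start of training.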
Examples of GAN include DC-GAN and StyleGAN.
The latent space is smooth: given latent vectors $z_0$ and $z_1$, we can interpolate between them, and the resulting images interpolate smoothly between the two samples.
However, GAN training is unstable.
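Interpolation itself is just a convex combination of latent vectors; in a real model each blend would then be decoded into an image:

```python
import numpy as np

# Latent-space interpolation sketch: linearly blend two latent vectors.
# In a trained GAN, G(blend) would produce an image that morphs smoothly
# between G(z0) and G(z1).
rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=8), rng.normal(size=8)

alphas = np.linspace(0.0, 1.0, 5)
blends = [(1 - a) * z0 + a * z1 for a in alphas]  # endpoints are z0 and z1
print(np.allclose(blends[0], z0), np.allclose(blends[-1], z1))
```

Spherical interpolation (slerp) is sometimes preferred for Gaussian priors, since linear blends of high-dimensional Gaussian samples pass through low-probability regions.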
Diffusion & Flow Matching
We model the transformation between a simple noise distribution and the complex data distribution.
Flow Matching: Have data $x$ drawn from a distribution $p_{data}(x)$ and noise $\epsilon$ drawn from a simple prior (e.g. unit Gaussian $\mathcal{N}(0, I)$). We define a path that interpolates between them over time $t \in [0, 1]$.
On each training iteration, we sample $x \sim p_{data}$, $\epsilon \sim \mathcal{N}(0, I)$, and a time step $t \in [0, 1]$. We construct a noisy sample $x_t$ and a target velocity $v$:
$$x_t = (1 - t)\,x + t\,\epsilon, \qquad v = \epsilon - x$$
Train a Velocity Network $v_\theta(x_t, t)$ to predict the vector that points from the data toward the noise. This is optimized using a regression loss:
$$\mathcal{L} = \mathbb{E}\left[\left\lVert v_\theta(x_t, t) - (\epsilon - x) \right\rVert^2\right]$$
During Inference (Sampling), we start with pure noise $x_1 \sim \mathcal{N}(0, I)$ and move backward toward the data distribution. We choose a number of steps $N$ and set $\Delta t = 1/N$.
For $t$ in $1,\ 1 - \Delta t,\ \ldots,\ \Delta t$:
- Evaluate the predicted velocity: $v = v_\theta(x_t, t)$
- Take a small step: $x_{t - \Delta t} = x_t - \Delta t \cdot v$
This approach, known as Optimal Transport Flow Matching, results in straight trajectories between noise and data.
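The target construction and Euler sampler can be sketched as follows; the velocity "network" here is a stand-in closure that returns the exact field for one (data, noise) pair, so the sampler provably recovers the data point:

```python
import numpy as np

# Flow-matching sketch: build one training target, then run the Euler
# sampler with a stand-in velocity function instead of a trained model.
rng = np.random.default_rng(0)

x = rng.normal(loc=3.0, size=2)          # a "data" point
eps = rng.normal(size=2)                 # noise sample
t = rng.uniform()                        # time step in [0, 1]

x_t = (1 - t) * x + t * eps              # point on the straight path
v_target = eps - x                       # velocity from data toward noise

def v_theta(x_cur, t_cur):
    # Stand-in network: returns the exact straight-line field for this
    # single (x, eps) pair, just to exercise the sampling loop.
    return eps - x

# Euler sampling: start at pure noise (t = 1), integrate back to t = 0.
n_steps = 50
x_cur, dt = eps.copy(), 1.0 / n_steps
for i in range(n_steps):
    x_cur = x_cur - dt * v_theta(x_cur, 1.0 - i * dt)
print(np.allclose(x_cur, x))             # exact field -> recovers the data point
```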
Classifier-Free Guidance (CFG): To control the generation process, we introduce a condition $c$ (e.g., a text prompt). During training, we occasionally drop $c$ (replacing it with a null condition $\varnothing$) so the model learns both the conditional and unconditional distributions.
During sampling, we compute two separate velocities for a noisy sample $x_t$:
- Unconditional velocity: $v_u = v_\theta(x_t, t, \varnothing)$ (points toward $p(x)$)
- Conditional velocity: $v_c = v_\theta(x_t, t, c)$ (points toward $p(x \mid c)$)
We combine these using a guidance weight $w$ to shift the direction more strongly toward the condition:
$$v = v_u + w\,(v_c - v_u)$$
Increasing $w$ improves how well the image matches the prompt but can reduce sample diversity.
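The guidance combination is a one-liner; the two velocity vectors below are placeholders for real network outputs:

```python
import numpy as np

# Classifier-free guidance: combine unconditional and conditional
# velocity predictions with a guidance weight w. Setting w = 1 recovers
# the plain conditional velocity; w > 1 extrapolates past it.
v_uncond = np.array([0.1, -0.3])
v_cond = np.array([0.5, 0.2])
w = 3.0

v_guided = v_uncond + w * (v_cond - v_uncond)
print(v_guided)  # [1.3, 1.2]
```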
Noise Schedules: We may use a non-uniform noise schedule. The common choice is logit-normal sampling. For high-res data, we often shift to higher noise to account for pixel correlations.
Diffusion Distillation: We can use distillation algorithms to reduce the number of sampling steps (sometimes all the way to 1).
Examples of these models include Stable Diffusion (which uses a similar diffusion objective) and Flux.
Unlike GANs, these models are much more stable to train and scale better to high-resolution data. However, sampling is typically slower because it requires multiple evaluations of the network.
Latent Diffusion Models (LDM)
Latent Diffusion Models essentially combine a VAE, a GAN, and a diffusion model.
We train an encoder + decoder to convert images to latents, then train a diffusion model to remove noise from the latents. At sampling time, we run the decoder to get an image from the denoised latents.
The encoder + decoder is a VAE. For the decoder, we add a GAN (adversarial) loss to prevent the outputs from being blurry.
Generalized Diffusion
This framework provides a unified view of generative modeling, where models like DDPM, DDIM, and Flow Matching are seen as specific configurations of a general training objective.
Core Concept: We define a forward process that transitions from clean data to noise over a continuous or discrete time , and train a network to reverse or predict aspects of this transition.
1. Training Procedure
For every training step, we sample the components of the transition:
- Samples: $x \sim p_{data}$ (clean data), $\epsilon \sim \mathcal{N}(0, I)$ (standard noise), and $t$ (time step).
- Noisy Sample Construction: We create an intermediate state $x_t$ using time-dependent scalar functions $\alpha_t$ and $\sigma_t$: $x_t = \alpha_t\,x + \sigma_t\,\epsilon$.
- Ground Truth Target: We define what the model should predict ($y$) using another set of functions $a_t$ and $b_t$: $y = a_t\,x + b_t\,\epsilon$.
2. Optimization
The neural network $f_\theta$ takes the noisy sample $x_t$ and the time step $t$ to predict the target. We minimize the Mean Squared Error:
$$\mathcal{L} = \mathbb{E}\left[\left\lVert f_\theta(x_t, t) - y \right\rVert^2\right]$$
3. Framework Unification
By varying the coefficients $\alpha_t, \sigma_t, a_t, b_t$, this single loss function covers different popular models:
- Standard Diffusion (DDPM): Predicts the noise added to the data.
- Target: $y = \epsilon$ ($a_t = 0$, $b_t = 1$)
- Flow Matching (Optimal Transport): Predicts the velocity along a straight line.
- Target: $y = \epsilon - x$ ($a_t = -1$, $b_t = 1$)
- Data Prediction: The model directly estimates the clean sample from the noisy input.
- Target: $y = x$ ($a_t = 1$, $b_t = 0$)
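The unified target construction can be checked directly; the $(a_t, b_t)$ coefficient pairs below follow the three cases above, and $x$, $\epsilon$ are random stand-ins:

```python
import numpy as np

# Unified-target sketch: the regression target y = a_t*x + b_t*eps
# switches between DDPM (predict eps), flow matching (predict eps - x),
# and data prediction (predict x) purely via the coefficients.
rng = np.random.default_rng(0)
x, eps = rng.normal(size=3), rng.normal(size=3)

targets = {
    "ddpm":          (0.0, 1.0),   # y = eps
    "flow_matching": (-1.0, 1.0),  # y = eps - x
    "data_pred":     (1.0, 0.0),   # y = x
}
results = {name: a_t * x + b_t * eps for name, (a_t, b_t) in targets.items()}
for name, y in results.items():
    print(name, y)
```

The network and loss are identical in all three cases; only the coefficient schedule changes.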
4. Classifier-Free Guidance (CFG)
To steer these models during sampling (e.g., using a text prompt $c$), we compute a weighted combination of the conditional and unconditional predictions:
$$\hat{y} = f_\theta(x_t, t, \varnothing) + w\,\bigl(f_\theta(x_t, t, c) - f_\theta(x_t, t, \varnothing)\bigr)$$
- Unconditional: $c = \varnothing$ (null or empty prompt).
- Guidance Scale $w$: Higher values force the model to follow the condition more strictly, often at the cost of visual variety.
Summary: While GANs rely on a competitive game (minimax), Generalized Diffusion relies on direct regression. This makes training significantly more stable and allows for high-quality, diverse image and signal synthesis across various domains like photogrammetry and 3D reconstruction.
Perspectives on Diffusion Models
Diffusion models are multifaceted and can be understood through several mathematical and conceptual frameworks. Rather than just a single algorithm, they represent a class of models that bridge the gap between simple noise and complex data distributions.
1. Deep Latent Variable Perspective
In this view, diffusion is treated similarly to a Variational Autoencoder (VAE) but with a fixed encoder and a learned decoder.
- Forward Process: We define a fixed "noising" process where Gaussian noise is iteratively added to the data $x_0$, moving through latent states $x_1, x_2, \ldots, x_T$.
- Backward Process: A neural network is trained to approximate the reverse step $p_\theta(x_{t-1} \mid x_t)$, effectively learning to "undo" the noise.
- Optimization: The model is trained by optimizing the variational lower bound (VLB), ensuring the generated samples represent the underlying data distribution.
2. Score-Based Perspective
Instead of modeling the probability density directly, we can model its gradient with respect to the input.
- Score Function: Defined as $s(x) = \nabla_x \log p(x)$.
- Vector Field: The score function acts as a vector field that points toward areas of high probability density in the data space.
- Learning: The diffusion model learns a neural network $s_\theta(x, t)$ to approximate this score function at every noise level $t$. During sampling, we "walk" along this vector field to find high-density regions (real data).
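A sketch of walking along the score field for a distribution whose score is known in closed form (a unit Gaussian, score $-x$); this is plain gradient ascent on log-density, omitting the noise injection used in full Langevin sampling:

```python
import numpy as np

# Score-function sketch: for a standard normal, grad_x log p(x) = -x.
# Repeatedly stepping along this vector field moves a point toward the
# high-density region at the origin.
x = np.array([4.0, -3.0])          # start far from the mode
for _ in range(100):
    score = -x                     # exact score of N(0, I)
    x = x + 0.1 * score            # step toward higher density
print(np.linalg.norm(x) < 1e-3)    # converged near the origin
```

Real score-based samplers add scaled Gaussian noise at each step (Langevin dynamics) so that samples cover the whole distribution rather than collapsing to the mode.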
3. Stochastic Differential Equations (SDEs)
For a more continuous mathematical treatment, the noising process can be described as an SDE.
- The Equation: We describe infinitesimal changes in the data $x$ over time $t$ as a drift term plus a noise term:
$$dx = f(x, t)\,dt + g(t)\,dw$$
where $w$ is a standard Wiener process.
- Reverse SDE: Diffusion models learn a neural network to solve the reverse version of this equation, allowing for continuous-time generation and more flexible sampling strategies.
Summary of Perspectives
Beyond the frameworks above, diffusion models can also be viewed as:
- Autoencoders: Specifically, denoising autoencoders applied at different noise levels.
- Recurrent Neural Networks: Since they apply the same network iteratively over time steps.
- Autoregressive Models: When viewed as predicting the “next” state in a sequence from noise to data.
- Expectation Estimators: Estimating the conditional mean of the data given the noisy observation.