GAN Loss Functions (All in One)

In a nutshell: G and D play a minimax game (min over G, max over D). G tries to fool D; D tries to tell real from fake.

Interpretation: D(·) outputs the probability (between 0 and 1) that its input is real.

1. Original GAN (Minimax)

$$ \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] $$

Step 1: Update Discriminator

$$ \max_D \mathbb{E}_{x}[\log D(x)] + \mathbb{E}_{z}[\log(1 - D(G(z)))] $$

$$ \mathcal{L}_D = - \mathbb{E}_{x}[\log D(x)] - \mathbb{E}_{z}[\log(1 - D(G(z)))] $$

This is binary cross-entropy with label 1 for real samples and label 0 for fakes.

Step 2: Update Generator

(a) Minimax:

$$ \mathcal{L}_G = \mathbb{E}_{z}[\log(1 - D(G(z)))] $$

(b) Non-saturating:

$$ \mathcal{L}_G = - \mathbb{E}_{z}[\log D(G(z))] $$
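Why (b) is preferred in practice: early in training D rejects fakes easily, so D(G(z)) ≈ 0 and the gradient of log(1 − D(G(z))) nearly vanishes, while −log D(G(z)) still gives a strong signal. A minimal sketch (0.01 is an arbitrary stand-in value for D(G(z)) on a poor fake):

```python
import torch

# Stand-in for D(G(z)) on an unconvincing fake, early in training
d_fake = torch.tensor([0.01], requires_grad=True)

loss_minimax = torch.log(1 - d_fake)  # (a): generator minimizes log(1 - D(G(z)))
loss_nonsat = -torch.log(d_fake)      # (b): generator minimizes -log D(G(z))

g_minimax, = torch.autograd.grad(loss_minimax.sum(), d_fake)
g_nonsat, = torch.autograd.grad(loss_nonsat.sum(), d_fake)

print(g_minimax.item(), g_nonsat.item())  # ~ -1.01 vs -100: (b) gives ~100x the gradient
```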


PyTorch Example


```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()  # expects D to end with a sigmoid

for epoch in range(num_epochs):
    for batch_idx, (real, _) in enumerate(loader):
        real = real.view(-1, 784).to(device)
        batch_size = real.shape[0]

        noise = torch.randn(batch_size, z_dim).to(device)
        fake = gen(noise)

        # --- Discriminator step: maximize log D(x) + log(1 - D(G(z))) ---
        disc_real = disc(real).view(-1)
        lossD_real = criterion(disc_real, torch.ones_like(disc_real))

        disc_fake = disc(fake).view(-1)
        lossD_fake = criterion(disc_fake, torch.zeros_like(disc_fake))

        lossD = (lossD_real + lossD_fake) / 2

        disc.zero_grad()
        lossD.backward(retain_graph=True)  # keep the graph: fake is reused below
        opt_disc.step()

        # --- Generator step: non-saturating loss, minimize -log D(G(z)) ---
        output = disc(fake).view(-1)
        lossG = criterion(output, torch.ones_like(output))

        gen.zero_grad()
        lossG.backward()
        opt_gen.step()
```

5. Least Squares GAN (LSGAN)

$$ \mathcal{L}_D = \frac{1}{2} \mathbb{E}[(D(x) - 1)^2] + \frac{1}{2} \mathbb{E}[(D(G(z)))^2] $$

$$ \mathcal{L}_G = \frac{1}{2} \mathbb{E}[(D(G(z)) - 1)^2] $$
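A minimal sketch of these losses with `nn.MSELoss` (here LSGAN's D outputs raw scores rather than sigmoid probabilities; the score tensors below are random stand-ins for D's outputs):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

d_real = torch.rand(8)  # stand-in for D(x) on a batch of real samples
d_fake = torch.rand(8)  # stand-in for D(G(z)) on a batch of fakes

# L_D = 1/2 E[(D(x) - 1)^2] + 1/2 E[(D(G(z)))^2]
lossD = (0.5 * mse(d_real, torch.ones_like(d_real))
         + 0.5 * mse(d_fake, torch.zeros_like(d_fake)))

# L_G = 1/2 E[(D(G(z)) - 1)^2]
lossG = 0.5 * mse(d_fake, torch.ones_like(d_fake))
```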


6. Wasserstein GAN (WGAN)

$$ \mathcal{L}_D = - \mathbb{E}[D(x)] + \mathbb{E}[D(G(z))] $$

$$ \mathcal{L}_G = - \mathbb{E}[D(G(z))] $$
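Here D is a critic: it outputs an unbounded real score (no sigmoid), and the original WGAN enforces the required Lipschitz constraint by clipping weights after every critic update. A minimal sketch (the linear critic is a toy stand-in; the clip value 0.01 is the paper's default):

```python
import torch
import torch.nn as nn

critic = nn.Linear(784, 1)  # toy critic: real-valued score, no sigmoid

real = torch.randn(8, 784)
fake = torch.randn(8, 784)  # would normally come from the generator

# L_D = -E[D(x)] + E[D(G(z))]
lossD = -critic(real).mean() + critic(fake).mean()

# Weight clipping to (crudely) enforce the Lipschitz constraint
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-0.01, 0.01)

# L_G = -E[D(G(z))]
lossG = -critic(fake).mean()
```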


7. WGAN-GP

$$ \mathcal{L}_D = - \mathbb{E}[D(x)] + \mathbb{E}[D(G(z))] + \lambda \, \mathbb{E}_{\hat{x}} [(||\nabla_{\hat{x}} D(\hat{x})||_2 - 1)^2], \quad \hat{x} = \epsilon x + (1 - \epsilon) G(z), \ \epsilon \sim U[0, 1] $$

$$ \mathcal{L}_G = - \mathbb{E}[D(G(z))] $$
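The penalty replaces weight clipping as the Lipschitz enforcement. A sketch of the gradient-penalty term for flat (2-D) batches with a toy linear critic; image tensors need the epsilon broadcast reshaped accordingly:

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake):
    eps = torch.rand(real.size(0), 1)      # one epsilon per sample
    x_hat = eps * real + (1 - eps) * fake  # random interpolates
    x_hat.requires_grad_(True)
    scores = critic(x_hat)
    # create_graph=True so the penalty itself can be backpropagated through
    grads, = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

critic = nn.Linear(784, 1)  # toy critic, real-valued output
real, fake = torch.randn(8, 784), torch.randn(8, 784)

lam = 10  # lambda = 10 in the WGAN-GP paper
lossD = (-critic(real).mean() + critic(fake).mean()
         + lam * gradient_penalty(critic, real, fake))
```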


Pix2Pix GAN

A conditional GAN for image-to-image translation: D sees the input image x paired with either the real target y or the generated output G(x).

Loss

$$ \mathcal{L}_{GAN}(G, D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))] $$

$$ \mathcal{L}_{L1} = \mathbb{E}_{x,y}[\|y - G(x)\|_1] $$

$$ G^* = \arg \min_G \max_D \mathcal{L}_{GAN}(G, D) + \lambda \mathcal{L}_{L1} $$
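A minimal sketch of the combined generator loss (λ = 100 is the value used in the pix2pix paper; the tensors here are random stand-ins, with `d_out` playing the role of the patch discriminator's logits on (x, G(x))):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # GAN term, on raw discriminator logits
l1 = nn.L1Loss()              # reconstruction term

fake_y = torch.randn(1, 3, 256, 256)  # stand-in for G(x)
real_y = torch.randn(1, 3, 256, 256)  # stand-in for ground-truth y
d_out = torch.randn(1, 1, 30, 30)     # stand-in for patch discriminator logits

lam = 100  # lambda used in the pix2pix paper
lossG = bce(d_out, torch.ones_like(d_out)) + lam * l1(fake_y, real_y)
```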

Summary

Intuition: generate a realistic image that matches the input. The GAN term pushes outputs toward realistic images, while the L1 term keeps them close to the ground truth.