DEEP GENERATIVE MODELS CS 496 FALL 2022

Location

TBD

Class Day/Time

MON WED FRI 1pm Central time

Office Hours

By appointment

Instructor

Bryan Pardo

Course Description

Deep learning is a branch of machine learning based on algorithms that try to model high-level abstract representations of data by using multiple processing layers with complex structures. One of the most exciting areas of research in deep learning is that of generative models. Today’s generative models create text documents, write songs, make paintings and videos, and generate speech. This course is dedicated to understanding the inner workings of the technologies that underlie these advances. Students will learn about key methodologies, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Transformer-based language models. This is an advanced course that presumes a good working understanding of traditional supervised neural network technology and techniques (e.g. convolutional networks, LSTMs, loss functions, regularization, gradient descent).

Registration is by instructor permission only. This course is designed for doctoral students. Appropriately prepared BS and MS students may also be admitted, once doctoral student demand has been met.

Course Calendar

| Week | Day and Date | Topic | Presenter | Commentators | Deliverable |
|---|---|---|---|---|---|
| 1 | Wed Sept 21 | Course overview | Pardo | | |
| 1 | Fri Sept 23 | Autoregressive language models & Embeddings | Pardo | | Paper bids due |
| 2 | Mon Sept 26 | Attention & positional encoding | Pardo | | |
| 2 | Wed Sept 28 | Transformers | Pardo | | |
| 2 | Fri Sept 30 | Transformers: The Illustrated GPT-2 | Pardo | | |
| 3 | Mon Oct 3 | Music Transformer | Julia B. | Keshav & Clarissa | |
| 3 | Wed Oct 5 | Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention | Tonmoay D. | Shubhanshi, Caden & TC | |
| 3 | Fri Oct 7 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Mowafak A. | Preetham & Keshav | |
| 4 | Mon Oct 10 | GPT-3: Language Models are Few-Shot Learners | Yujia & Ruth | Isaiah & Julia | |
| 4 | Wed Oct 12 | Quantifying Memorization Across Neural Language Models | Isaiah J. | Aleksandr & Julia | |
| 4 | Fri Oct 14 | Image GPT | Soroush S. | Aneryben, Cameron & Amil | |
| 5 | Mon Oct 17 | Final projects | Pardo | | 10 reviews |
| 5 | Wed Oct 19 | KL Divergence, Evidence Lower Bound (ELBO) | Pardo | | |
| 5 | Fri Oct 21 | A starter blog on AutoEncoders and VAEs | James W. | Preetham, Srik & Mowafak | |
| 6 | Mon Oct 24 | VQ-VAE: Neural Discrete Representation Learning | Aneryben P. | TC & Clarissa | |
| 6 | Wed Oct 26 | Conditional VAE: Learning Structured Output Representation using Deep Conditional Generative Models | Student | | |
| 6 | Fri Oct 28 | Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework | Srik G. | James & Muhammed | Project proposal |
| 7 | Mon Oct 31 | DALL-E: Zero-Shot Text-to-Image Generation | Liqian M. | Jipeng & Mowafak | |
| 7 | Wed Nov 2 | Jukebox: A Neural Net that Generates Music | Preetham P. | Aleksandr & James | |
| 7 | Fri Nov 4 | Generative Adversarial Networks | Pardo | | |
| 8 | Mon Nov 7 | DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Clarissa C. | Aneryben, Conner & Omar | |
| 8 | Wed Nov 9 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | TC L. | Yujia & Ruth | |
| 8 | Fri Nov 11 | StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks | Conner & Omar | Shubhanshi, Caden & Soroush | |
| 9 | Mon Nov 14 | Cross-Modal Contrastive Learning for Text-to-Image Generation | Shraddha??? | Amil & Cameron | Project progress report |
| 9 | Wed Nov 16 | An Introduction to Diffusion Models | Shubhanshi & Caden | Srik, Conner & Omar | |
| 9 | Fri Nov 18 | Learning Transferable Visual Models From Natural Language Supervision | Cameron & Amil | Yujia & Ruth | |
| 10 | Mon Nov 21 | Guidance: a cheat code for diffusion models | Muhammed K. | Liqian & Isaiah | |
| 10 | Wed Nov 23 | THANKSGIVING: No class | | | |
| 10 | Fri Nov 25 | THANKSGIVING: No class | | | |
| 11 | Mon Nov 28 | GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models | Jipeng S. | Liqian & Tonmoay | 10 reviews |
| 11 | Wed Nov 30 | Google’s Imagen | Aleksandr S. | Jipeng & Soroush | |
| 11 | Fri Dec 2 | DiffWave: A Versatile Diffusion Model for Audio Synthesis | Keshav B. | Tonmoay & Muhammed | |
| 12 | Thu Dec 8 | Final project presentations, 9AM-11AM | Student | | Presentation + website |

Course assignments

Reading: 40 points

You will submit 20 reviews of readings from the course website. Each will be a single-page reaction to something you read from the links provided below.

Class Paper Presentation: 25 points

Once during the course of the term, you will be the lead person discussing the reading in class. This means you haven’t just read the paper: you’ve also read the related work, really understand the paper, can give a 30-minute presentation of it (including slides), and can then lead a discussion about it.

Class Participation: 10 points

For two presentations OTHER than your own, you’ll be expected to be 100% on top of the material and to serve as the counterpoint to the presenter’s point. I’ll expect you to make good points and display clear knowledge of the material.

Project in generative modeling: 25 points

You will make, modify, and/or analyze some work, project, or subdomain in generative modeling. This may mean modifying MusicVAE or making a story generator on top of GPT-3. It may mean downloading an existing system and experimenting with it, or it may mean building a new one. Duplicating a paper’s results is always a great project. It could be a deep-dive literature review on a subtopic (a good first step towards writing a paper)… or something else, subject to approval of the proposal. The point breakdown for the project is as follows. There will be a maximum of 10 projects in the class. Students are encouraged to pair up.

This will be a group project: you will be in groups of 2 or 3. There will be no solo projects.

Lecture Slides and Notebooks

Lectures

Course Reading

A BIG OVERVIEW OF GENERATIVE MODELING

  1. Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models

AUTOREGRESSIVE MODELS

  1. Pixel Recurrent Neural Networks: A highly influential autoregressive model for image generation

  2. WaveNet: A Generative Model for Raw Audio: A highly influential autoregressive model for audio generation
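
Both of these models generate data one element at a time, factoring the joint distribution as p(x) = p(x_1) p(x_2 | x_1) p(x_3 | x_1, x_2)… As a minimal sketch of the resulting sampling loop (the `model` here is a hypothetical next-element predictor, not either paper’s actual network):

```python
import torch

@torch.no_grad()
def autoregressive_sample(model, steps, start_token=0):
    # Sample a sequence one element at a time from p(x_i | x_<i).
    # `model` maps a (1, t) prefix to (1, t, vocab_size) logits,
    # in the spirit of PixelRNN / WaveNet-style models.
    seq = torch.tensor([[start_token]])
    for _ in range(steps):
        logits = model(seq)[:, -1, :]            # distribution over the next element
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)
        seq = torch.cat([seq, nxt], dim=1)       # feed the sample back in
    return seq
```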

TRANSFORMERS

Elements that lead up to Transformers

  1. The Illustrated Word2Vec: Transformers for text take word embeddings as input. So what’s a word embedding? This is a walkthrough of one of the most famous embedding methods.

  2. Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention): This is a good starting-point blog on attention models, which are what Transformers are built on.

  3. Sequence to Sequence Learning with Neural Networks: This is the paper that the link above was trying to explain.

  4. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation: This introduces encoder-decoder networks for translation. Attention models were first built on this framework.

  5. Neural Machine Translation by Jointly Learning to Align and Translate: This paper introduces additive attention to an encoder-decoder.

  6. Effective Approaches to Attention-based Neural Machine Translation: This paper introduces multiplicative attention, which is what Transformers use (see the sketch after this list).

  7. Deep Residual Learning for Image Recognition: This introduces the idea of “residual layers”, which are layers that are skippable. This idea is used in Transformers.
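
The thread running through this list is multiplicative attention (item 6) plus residual connections (item 7). As a rough sketch of where it all ends up (this is the scaled dot-product form used in Transformers, not any one paper’s exact formulation):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d) tensors.
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # similarity of queries to keys
    weights = F.softmax(scores, dim=-1)          # attention distribution
    return weights @ v                           # weighted sum of the values

def residual(f, x):
    # A "residual" (skippable) layer in the ResNet sense: output = x + f(x).
    return x + f(x)
```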

The Transformer Architecture

  1. The Illustrated Transformer: A good initial walkthrough that helps a lot with understanding transformers. **I’d start with this one to learn about transformers.**

  2. The Annotated Transformer: An annotated walk-through of the “Attention is All You Need” paper, complete with a detailed Python implementation of a transformer.

  3. Attention is All You Need: The paper that introduced transformers, which are a popular and more complicated kind of attention network.
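
To give a concrete picture of what these readings describe: a Transformer encoder block is multi-head attention plus a feed-forward network, each wrapped in a residual connection and layer normalization. A minimal PyTorch sketch of one block (post-norm, as in “Attention is All You Need”; the hyperparameter defaults are the paper’s):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                        # x: (batch, seq, d_model)
        attn_out, _ = self.attn(x, x, x)         # self-attention over the sequence
        x = self.norm1(x + attn_out)             # residual + layer norm
        return self.norm2(x + self.ff(x))        # feed-forward sublayer
```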

Advanced Transformers

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: A widely-used language model based on Transformer encoder blocks.

  2. The Illustrated GPT-2: A good overview of GPT-2 and its relation to Transformer decoder blocks.

  3. GPT-3: Language Models are Few-Shot Learners

  4. Image GPT: Using a Transformer to make images. This isn’t DALL-E, even though it’s by OpenAI.

  5. Learning Transferable Visual Models From Natural Language Supervision: This describes how DALL-E selects which of the many images it generates should be shown to the user.

  6. Perceiver: General Perception with Iterative Attention: a model that builds upon Transformers and scales to many more inputs. Not exactly about generation.

  7. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention: This paper shows how to take the O(n^2) cost of attention and make it O(n) (see the sketch after this list).

  8. Relative Positional Encoding for Transformers with Linear Complexity: Positional encoding is messed up by the linear attention approach from “Transformers are RNNs”. This paper addresses that problem.

  9. Zero-Shot Text-to-Image Generation: This is the original version of DALL-E, which generates images conditioned on text captions. It is based on Transformer architecture.

  10. Music Transformer: Applying Transformers to music composition.
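
For the linear-attention paper (item 7), the sketch promised above: replace softmax(QK^T)V with phi(Q)(phi(K)^T V) for a positive feature map phi, so the expensive n-by-n attention matrix is never formed. A minimal sketch using the paper’s phi(x) = elu(x) + 1:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, d). Cost is O(n d^2) instead of O(n^2 d).
    q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map phi
    kv = k.transpose(-2, -1) @ v                 # (batch, d, d) summary of keys/values
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps  # per-query normalizer
    return (q @ kv) / z
```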

Critiques of Transformers & Large Language Models

  1. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?: This is the paper that Timnit Gebru and Margaret Mitchell got fired for publishing.

  2. Alignment of Language Agents: This is DeepMind’s critique of their own approach.

  3. Extracting Training Data from Large Language Models: Did GPT-2 memorize a Harry Potter book? Read this and find out.

  4. Quantifying Memorization Across Neural Language Models: Systematic experiments on how model size, prompt length, and frequency of an example in the training set impact our ability to extract memorized content.

GENERATIVE ADVERSARIAL NETWORKS (GANS)

Creating adversarial examples

  1. Explaining and Harnessing Adversarial Examples: This paper got the ball rolling by pointing out how to make images that look good but are consistently misclassified by trained deepnets.
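
The attack in that paper, the Fast Gradient Sign Method (FGSM), takes a single step in the direction of the sign of the loss gradient. A minimal sketch, assuming a generic differentiable classifier `model` and loss `loss_fn`:

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    # Nudge every input dimension by +/- epsilon in whichever
    # direction increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()            # keep pixels in a valid range
```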

Creating GANs

  1. Generative Adversarial Nets: The paper that introduced GANs

  2. 2016 Tutorial on Generative Adversarial Networks by one of the creators of the GAN. This one’s long, but good.

  3. DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: This is an end-to-end model. Many papers build on this.
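
The training loop shared by all three readings pits a discriminator D (real vs. fake) against a generator G (fool D). A skeletal sketch of one training step using the non-saturating generator loss from the original paper; `G`, `D` (outputting a logit per example), and the two optimizers are assumed to exist:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real, z_dim=100):
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    fake = G(torch.randn(batch, z_dim))

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(D(real), ones) +
              F.binary_cross_entropy_with_logits(D(fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: push D(fake) toward 1 (the "non-saturating" loss).
    g_loss = F.binary_cross_entropy_with_logits(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```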

Advanced GANS

  1. Progressive Growing of GANs for Improved Quality, Stability, and Variation: This is used in StyleGAN and was state-of-the-art in 2018.

  2. StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks: As of 2019, this was the current state-of-the-art for GAN-based image generation.

  3. StyleGAN2-ADA: Training Generative Adversarial Networks with Limited Data: As of 2020, this was the current state-of-the-art for GAN-based image generation.

  4. Cross-Modal Contrastive Learning for Text-to-Image Generation: As of 2021, this was the best GAN for text-conditioned image generation. Note its use of contrastive loss. You’ll see that again in CLIP.

  5. Adversarial Audio Synthesis: introduces WaveGAN, a paper about applying GANs to unsupervised synthesis of raw-waveform audio.

  6. MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis: Doing speech synthesis with GANs.

  7. HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis: Even better speech synthesis with GANs.

Variational Auto Encoders (VAEs)

Background needed for Variational Autoencoders

  1. The Deep Learning Book’s Chapters on Probability and Linear Algebra. Read these before the Easy Intro to KL Divergence.

  2. An Easy Introduction to Kullback-Leibler (KL) Divergence. Read this before reading about ELBO.

  3. Jensen’s Inequality (an example with code). Read this before reading about ELBO.

  4. Evidence Lower Bound, Clearly Explained: A video walking through Evidence Lower Bound.

  5. A walkthrough of Evidence Lower Bound (ELBO): Lecture notes from David Blei, one of the inventors of ELBO. ELBO is what you optimize when you do variational inference in a VAE (see the sketch after this list).

  6. Categorical Reparameterization with Gumbel-Softmax: This is a way of allowing categorical latent variables in your model so you can run a differentiable gradient descent algorithm through them. This is used in Vector-Quantized VAEs.

  7. Probabilistic Graphical Models: Lecture notes from the class taught at Stanford.
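
To make the KL and ELBO readings concrete (items 2 and 5 above): in a standard VAE the encoder outputs a Gaussian q(z|x) = N(mu, sigma^2), the prior is p(z) = N(0, I), and the KL term of the ELBO has a closed form. A minimal sketch:

```python
import torch

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    # summed over latent dimensions. This is the regularizer in the ELBO:
    #   ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
```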

Autoencoders

  1. A starter blog on AutoEncoders and VAEs: Probably a good place to start.

  2. The Deep Learning Book’s Chapter on Autoencoders

  3. From neural PCA to deep unsupervised learning: This paper introduces Ladder networks, which will come back when we get to VAEs.

BASIC Variational Auto Encoders (VAEs)

  1. Tutorial on Variational Autoencoders: This is a walk-through of the math of VAEs.

  2. Variational Inference, a Review for Statisticians: This explains the math behind variational inference and why one would use variational inference instead of Gibbs sampling.
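
Putting the math from those two readings into code: the reparameterization trick writes z = mu + sigma * eps with eps ~ N(0, I) so sampling stays differentiable, and the training loss is the negative ELBO. A minimal sketch, with hypothetical `encoder` and `decoder` networks:

```python
import torch
import torch.nn.functional as F

def vae_loss(encoder, decoder, x):
    # encoder(x) -> (mu, logvar); decoder(z) -> reconstruction of x.
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
    recon_loss = F.mse_loss(decoder(z), x, reduction="sum")  # -E_q[log p(x|z)], up to a constant
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl                                   # negative ELBO
```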

ADVANCED VAEs

  1. VQ-VAE: Neural Discrete Representation Learning

  2. Conditional VAE: Learning Structured Output Representation using Deep Conditional Generative Models: Making a controllable VAE through conditioning

  3. Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework: This is about making disentangled representations: making the VAE’s latent variables meaningful to us.

  4. Isolating Sources of Disentanglement in VAEs: More on disentangled representations in VAEs

  5. Ladder VAEs: Hierarchical VAEs

  6. Adversarial Auto-Encoders: You can guess what this is.

  7. A Wizard’s Guide to Adversarial Autoencoders: This is a multi-part tutorial that will be helpful for understanding AAEs.

  8. From Autoencoder to Beta-VAE: Lilian Weng’s overview of most kinds of autoencoders

VAE Applications

  1. MUSIC VAE: Learning Latent Representations of Music to Generate Interactive Musical Palettes: Making controllable music composition with VAEs

  2. Jukebox: A Neural Net that Generates Music… with a combination of VQ-VAEs and autoregressive Transformers

  3. Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations: Using a WAE to generate new drugs

Diffusion and Score Models

  1. Deep unsupervised learning using nonequilibrium thermodynamics: The 2015 paper where diffusion models were introduced.

  2. Denoising Diffusion Probabilistic Models: This was a break-out paper from 2020 that got people excited about diffusion models (see the sketch after this list).

  3. Generative Modeling by Estimating Gradients of the Data Distribution: This is a blog that explains how score-based models are also basically diffusion models.

  4. What are Diffusion Models?

  5. An Introduction to Diffusion Models: A nice tutorial blog that has PyTorch code.
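
The common thread in these readings (especially item 2) is the DDPM setup: the forward (noising) process has the closed form x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps, and the network is trained to predict the noise eps. A minimal sketch of that training objective, assuming a hypothetical noise-prediction network `model(x_t, t)`:

```python
import torch

def ddpm_training_loss(model, x0, alphas_cumprod):
    # alphas_cumprod: (T,) cumulative products of (1 - beta_t).
    t = torch.randint(0, alphas_cumprod.size(0), (x0.size(0),))
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # closed-form q(x_t | x_0)
    return torch.mean((model(x_t, t) - eps) ** 2)        # the "simple" loss from the DDPM paper
```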

Advanced Diffusion and Score Models

  1. Guidance: a cheat code for diffusion models: if you want to understand DALL-E-2 and Imagen, you need to understand this (see the sketch after this list).

  2. DiffWave: A Versatile Diffusion Model for Audio Synthesis: A neural vocoder done with diffusion from 2021

  3. Universal Speech Enhancement With Score-based Diffusion

  4. Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise: Do we need to add noise at each step or would any transform do?
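
The guidance idea from item 1, in code: train a conditional diffusion model with the conditioning randomly dropped, then at sampling time extrapolate from the unconditional noise estimate toward the conditional one. A minimal sketch of classifier-free guidance, with a hypothetical noise predictor `model(x, t, cond)` that accepts cond=None:

```python
def classifier_free_guidance(model, x_t, t, cond, w=3.0):
    # Guided estimate: eps_uncond + w * (eps_cond - eps_uncond).
    # w = 1 recovers the plain conditional model; w > 1 trades
    # sample diversity for fidelity to the conditioning signal.
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, None)
    return eps_uncond + w * (eps_cond - eps_uncond)
```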

Imagen

  1. High Fidelity Image Generation Using Diffusion Models: A Google Blog that gives the chain of development that led to Imagen.

  2. Google’s Imagen: This is the Pepsi to DALL-E-2’s Coke.

DALL-E-2

  1. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models: This is the diffusion model used in DALL-E-2

  2. Learning Transferable Visual Models From Natural Language Supervision: The CLIP representation. This is used in DALL-E-2.

  3. Hierarchical Text-Conditional Image Generation with CLIP Latents: The DALL-E-2 paper.

TOPICS NOT COVERED IN CLASS (BUT THAT ARE WORTH LEARNING ABOUT)

Normalizing Flows

  1. Variational Inference with Normalizing Flows: A differentiable method to take a simple distribution and make it arbitrarily complex. Useful for modeling distributions in deep nets. Can be added to VAEs. These are being replaced by diffusion models.
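
The mechanics in one line: if z = f(x) is invertible with a tractable Jacobian, then log p(x) = log p(z) + log |det dz/dx|. A minimal sketch of that change-of-variables bookkeeping, with a single elementwise affine flow and a standard normal base distribution (real flows stack many richer invertible layers):

```python
import torch

def affine_flow_log_prob(x, scale, shift):
    # z = (x - shift) * exp(-scale), so log|det dz/dx| = -sum(scale).
    z = (x - shift) * torch.exp(-scale)
    base = torch.distributions.Normal(0.0, 1.0)
    return base.log_prob(z).sum(dim=-1) - scale.sum(dim=-1)
```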

FILM Layers

  1. FiLM: Visual Reasoning with a General Conditioning Layer: Affine transformation of input layers that proves helpful in many contexts. Here’s the TL;DR version. I’d start with the TL;DR.
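
A FiLM layer is just a feature-wise affine transform whose scale and shift are predicted from a conditioning input: out = gamma(c) * h + beta(c). A minimal sketch for 2D feature maps:

```python
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, cond_dim, n_features):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * n_features)

    def forward(self, h, cond):                  # h: (batch, n_features, H, W)
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return gamma[..., None, None] * h + beta[..., None, None]
```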

Structured State Space Models

  1. Efficiently Modeling Long Sequences with Structured State Spaces

  2. The Annotated S4: This is a guided walk-through (with code) of a structured state space model.

  3. It’s Raw! Audio Generation with State-Space Models