MON 3pm - 5pm Central time
TUE 3pm - 5pm Central time
Bryan Pardo: office hours by appointment
One of the most exciting areas of research in deep learning is that of generative models. Today’s generative models create text documents, write songs, make paintings and videos, and generate speech. This course is dedicated to understanding the inner workings of the technologies that underlie these advances. Students will learn about key methodologies, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Transformer-based language models. This is an advanced course that presumes a good working understanding of traditional supervised neural network technology and techniques (e.g. convolutional networks, LSTMs, loss functions, regularization, gradient descent).
The prerequisite is CS 449 Deep Learning.
|Week|Day and Date|Topic|Presenter|Commentators|
|---|---|---|---|---|
|2|Mon Jan 8|Autoregressive language models|Pardo| |
|2|Tue Jan 9|Transformers: The Illustrated Transformer|Pardo| |
| | |Embeddings: The Illustrated Word2Vec|Pardo| |
|3|Mon Jan 15|Positional Encoding|Pardo| |
| | |BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding|Pardo/?| |
|3|Tue Jan 16|The Curious Case of Neural Text Degeneration|Pardo/?| |
| | |Quantifying Memorization Across Neural Language Models|?| |
|4|Mon Jan 22|Pam Samuelson’s ‘AI Meets Copyright’ Lecture|Pardo| |
| | |Discussion of AI, copying, and copyright|Pardo| |
|4|Tue Jan 23|Reinforcement Learning|Pardo| |
|5|Mon Jan 29|NO CLASS| | |
|5|Tue Jan 30|NO CLASS| | |
|6|Mon Feb 5|Deep Reinforcement Learning: Pong from Pixels|Pardo/?| |
| | |Deep reinforcement learning from human preferences|Pardo/?| |
|6|Tue Feb 6|Training language models to follow instructions with human feedback|Pardo/?| |
| | |Reformer: The Efficient Transformer|?| |
|7|Mon Feb 12|Variational Autoencoders (VAEs)|Pardo| |
| | |Variational Autoencoders|Pardo| |
|7|Tue Feb 13|Zero-Shot Text-to-Image Generation|?| |
| | |VQ-VAE: Neural Discrete Representation Learning|?| |
|8|Mon Feb 19|MusicLM: Generating Music From Text|?| |
| | |VampNet: Music Generation via Masked Acoustic Token Modeling|Pardo/?| |
|8|Tue Feb 20|GANs|Pardo| |
| | |Progressive Growing of GANs for Improved Quality, Stability, and Variation|?| |
|9|Mon Feb 26|StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks|?| |
|9|Tue Feb 27|Score models|Pardo| |
| | |Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models|?| |
|10|Mon March 4|Learning Transferable Visual Models From Natural Language Supervision|?| |
| | |Guidance: a cheat code for diffusion models|?| |
|10|Tue March 5|GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models|?| |
| | |Hierarchical Text-Conditional Image Generation with CLIP Latents|?| |
You will submit 20 reviews of readings from the course website. 10 of these must be papers (not lecture slides; actual papers) scheduled for presentation in the course calendar. The other 10 can be chosen from the full set of papers for the course. NOTE: As part of evaluating your reading, we will meet with you to discuss your thoughts.
Once during the term, you (and your partner) will take the lead in discussing the reading in class. This means you haven’t just read the paper: you’ve read the related work, really understand the material, and can give a 30-minute presentation of the paper (including slides) and then lead a discussion about it.
For two presentations OTHER than your own, you’ll be expected to be 100% on top of the material and serve as the counterpoint to the presenter’s point. I’ll expect you to make good points and display clear knowledge of the material.
You will make, modify, and/or analyze some work, project, or subdomain in generative modeling. This may mean modifying MusicVAE or building a story generator on top of GPT-3. It may mean downloading an existing system and experimenting with it, or it may mean building a new thing. Duplicating a paper’s results is always a great project. It could be a deep-dive literature review on a subtopic (a good first step toward writing a paper)… or something else, subject to approval of the proposal. There will be a maximum of 10 projects in the class.
This will be a group project. You will work in groups of 2 or 3; there will be no solo projects.
Pixel Recurrent Neural Networks: A highly influential autoregressive model for image generation
WaveNet: A Generative Model for Raw Audio: A highly influential autoregressive model for audio generation
Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) This is a good starting point blog on attention models, which is what Transformers are built on.
Sequence to Sequence Learning with Neural Networks: This is the paper that the link above was trying to explain.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation: This introduces encoder-decoder networks for translation. Attention models were first built on this framework.
Neural Machine Translation by Jointly Learning to Align and Translate: This paper introduces additive attention to an encoder-decoder.
Effective Approaches to Attention-based Neural Machine Translation: This paper introduces multiplicative attention, which is what Transformers use.
Deep Residual Learning for Image Recognition: This introduces the idea of “residual layers”, which are layers that are skippable. This idea is used in Transformers.
The Illustrated Word2Vec: Transformers for text take word embeddings as input. So what’s a word embedding? This is a walk through word embeddings, at a high level, with no math.
Efficient Estimation of Word Representations in Vector Space: This is the Word2Vec paper.
GloVe: Global Vectors for Word Representation: The paper that describes the GloVe embedding, which is an improvement on Word2Vec and has downloadable embeddings to try. There is math here.
Using the Output Embedding to Improve Language Models: In transformers, they actually learn their embeddings at the same time as everything else and tie the input embedding to the output embedding. This paper explains why.
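A minimal numpy sketch of the tying idea (toy sizes, random vectors standing in for trained weights): the same matrix `E` looks up token embeddings on the way in and produces vocabulary logits on the way out.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 10, 4

# One matrix serves as both the input and output embedding.
E = rng.normal(size=(vocab_size, d_model))

token_ids = np.array([3, 7])
x = E[token_ids]        # input side: look up embeddings, shape (2, d_model)

h = x                   # stand-in for the transformer's hidden states
logits = h @ E.T        # output side: project back onto the vocabulary, shape (2, vocab_size)
```

Because the projection reuses `E`, the model has one set of word vectors to learn instead of two.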
The Illustrated Transformer: A good initial walkthrough that helps a lot with understanding transformers. **I’d start with this one to learn about transformers.**
The Annotated Transformer: An annotated walk-through of the “Attention is All You Need” paper, complete with a detailed Python implementation of a transformer. **If you actually want to understand transformer implementation, you should read this in depth… and play with the code.**
Attention is All You Need: The paper that introduced transformers, which are a popular and more complicated kind of attention network.
Self-Attention with Relative Position Representations: The most frequently used alternative to absolute positional encoding
Rotary Positional Encoding: claims to combine the benefits of both absolute and relative positional encoding
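For reference, the absolute sinusoidal encoding from “Attention is All You Need” that both of these papers propose alternatives to can be sketched in a few lines (dimensions here are illustrative):

```python
import numpy as np

def sinusoidal_positional_encoding(num_positions, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)); PE[pos, 2i+1] = cos(same angle)."""
    positions = np.arange(num_positions)[:, None]      # (P, 1)
    dims = np.arange(0, d_model, 2)[None, :]           # (1, d/2)
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(50, 16)   # one row per position
```

Each position gets a fixed vector that is simply added to its token embedding; the relative and rotary schemes above instead inject position information inside the attention computation.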
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: A widely-used language model based on Transformer encoder blocks.
The Illustrated GPT-2: A good overview of GPT-2 and its relation to Transformer decoder blocks.
The Curious Case of Neural Text Degeneration: When you sample from the output of a language model, it matters a LOT just how you sample. Read this to understand why.
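To make the point concrete, here is a minimal numpy sketch of temperature plus top-p (nucleus) sampling, the scheme this paper proposes (the logits are a toy example, not from any real model):

```python
import numpy as np

def top_p_sample(logits, p=0.9, temperature=1.0, rng=None):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability exceeds p, then sample only from that set."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                     # tokens, most probable first
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, p) + 1]  # the "nucleus"
    nucleus = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=nucleus)

logits = np.array([4.0, 3.0, 0.1, -2.0])  # toy next-token scores
token = top_p_sample(logits, p=0.9, rng=np.random.default_rng(0))
```

With these toy logits and p=0.9, only the two most probable tokens survive the cutoff, so the long unreliable tail can never be sampled.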
GPT-3:Language Models are Few-Shot Learners: This explores the range of things that you can do with a GPT model.
Training language models to follow instructions with human feedback: This combines RL with a GPT model to make InstructGPT, the precursor to ChatGPT
Learning Transferable Visual Models From Natural Language Supervision: This describes how DALL-E selects which of the many images it generates should be shown to the user.
Image GPT: Using a Transformer to make images. This isn’t DALL-E, even though it’s by OpenAI.
Self-attention with relative position representations: This is what got relative positional encoding started.
Reformer: The Efficient Transformer: This uses locality sensitive hashing to make attention much more efficient, moving it from taking O(n^2) and making it O(nlogn). This is a better paper to read than the “Transformers are RNNs” paper (below), in that it is much clearer with its math and ideas.
Zero-Shot Text-to-Image Generation: This is the original version of DALL-E, which generates images conditioned on text captions. It is based on Transformer architecture.
Music Transformer: Applying Transformers to music composition.
WAV2VEC: UNSUPERVISED PRE-TRAINING FOR SPEECH RECOGNITION: This describes a way to build a dictionary of audio tokens that is used in MusicLM
Wav2vec 2.0: Learning the structure of speech from raw audio: The 2nd iteration of wav2vec
SoundStream: An End-to-End Neural Audio Codec: Perhaps the top (as of 2023) audio codec. It is used in multiple audio language models to tokenize the audio for the language model.
W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training: Masked inference language model method to learn audio tokens. This is what is actually used in MusicLM
AudioLM: A Language Modeling Approach to Audio Generation: A language model for generating speech continuation
MusicLM: Generating Music From Text: A model generating music audio from text descriptions such as “a calming violin melody backed by a distorted guitar riff”.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?: This is the paper that Timnit Gebru and Margaret Mitchell got fired from Google’s Ethical AI team for publishing.
Alignment of Language Agents: This is Deep Mind’s critique of their own approach.
Extracting Training Data from Large Language Models: Did GPT-2 memorize a Harry Potter book? Read this and find out.
Quantifying Memorization Across Neural Language Models: Systematic experiments on how model size, prompt length, and frequency of an example in the training set impact our ability to extract memorized content.
Open AI’s analysis of GPT-4 potential harms: Worth a serious read
Reinforcement Learning: An Introduction: This is an entire book, but it is the one I learned RL from.
Policy Gradient Methods: Tutorial and New Frontiers: This is a video lecture that explains reinforcement learning policy gradient methods. This is the underlying tech used for training ChatGPT. Yes, this video is worth a “reading” credit. Yes, I started it 37 minutes into the lecture on purpose. You don’t have to watch the first half of the lecture.
Andrej Karpathy’s blog on Deep Reinforcement Learning: When combined with the video tutorial above, you’ll more or less understand policy gradient methods for deep reinforcement learning.
Proximal Policy Optimization Algorithms: The paper that (mostly) explains the RL approach used in InstructGPT (the precursor to ChatGPT)
This blog on RL from human feedback: read the paper linked at the start of the blog. It teaches how to learn a reward function from human feedback, so you can do RL.
This blog on Aligning language models to follow instructions explains how ChatGPT is fine-tuned to answer prompts by combining proximal policy optimization and RL from human feedback (the two previous items on this list).
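If you want the core policy-gradient idea from these readings in runnable form, here is a minimal REINFORCE sketch on a toy two-armed bandit (everything here is illustrative; this is not the InstructGPT setup):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                    # logits over two actions (the "policy")
true_rewards = np.array([0.2, 0.8])    # arm 1 pays more

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

learning_rate = 0.1
for _ in range(2000):
    probs = softmax(theta)
    action = rng.choice(2, p=probs)
    reward = true_rewards[action]
    # REINFORCE: for a softmax policy, grad log pi(a) = one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta += learning_rate * reward * grad_log_pi
```

Actions that earn more reward get their log-probability pushed up harder, so the policy drifts toward the better arm; the readings above add baselines, clipping (PPO), and learned reward models on top of this skeleton.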
Generative Adversarial Nets: The paper that introduced GANs
2016 Tutorial on Generative Adversarial Networks by one of the creators of the GAN. This one’s long, but good.
DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: This is an end-to-end model. Many papers build on this.
Progressive Growing of GANs for Improved Quality, Stability, and Variation: This is used in StyleGAN and was state-of-the-art in 2018.
StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks: As of 2019, this was the current state-of-the-art for GAN-based image generation.
StyleGAN2-ADA: Training Generative Adversarial Networks with Limited Data: As of 2020, this was the current state-of-the-art for GAN-based image generation.
Cross-Modal Contrastive Learning for Text-to-Image Generation: As of 2021, this was the best GAN for text-conditioned image generation. Note its use of contrastive loss. You’ll see that again in CLIP.
Adversarial Audio Synthesis: introduces WaveGAN, a paper about applying GANs to unsupervised synthesis of raw-waveform audio.
MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis: Doing speech synthesis with GANs.
HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis: Even better speech synthesis with GANs.
The Deep Learning Book’s Chapters on Probability and Linear Algebra. Read these before the Easy Intro to KL Divergence
An Easy Introduction to Kullback-Leibler (KL) Divergence . Read this before reading about ELBO
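As a warm-up, KL divergence between discrete distributions is essentially one line of numpy; the toy example below shows that it is non-negative, zero only when the distributions match, and not symmetric (all numbers are illustrative):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.4, 0.1]
q = [0.3, 0.3, 0.4]

forward = kl_divergence(p, q)    # positive
reverse = kl_divergence(q, p)    # also positive, but a different number
self_kl = kl_divergence(p, p)    # exactly 0: no divergence from yourself
```

The asymmetry matters later: variational inference minimizes KL(q || p), not KL(p || q), and the two choices behave differently.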
Jensen’s Inequality (an example with code). Read this before reading about ELBO
Evidence Lower Bound, Clearly Explained: A video walking through Evidence Lower Bound.
A walkthrough of Evidence Lower Bound (ELBO): Lecture notes from David Blei, one of the inventors of ELBO. ELBO is what you optimize when you do variational inference in a VAE.
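In symbols, the identity these ELBO readings build toward is the following, where q(z) is any approximate posterior; the bound holds because KL divergence is non-negative:

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right]}_{\text{ELBO}}
  + \underbrace{\mathrm{KL}\!\left(q(z)\,\middle\|\,p(z \mid x)\right)}_{\ge\,0}
  \;\ge\;
  \mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right]
```

Since log p(x) is fixed, making the ELBO larger forces the KL term smaller, which is why maximizing the ELBO both fits the data and improves the approximate posterior.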
Categorical Reparameterization with Gumbel-Softmax: This is a way of allowing categorical latent variables in your model so you can run a differentiable gradient descent algorithm through them. This is used in Vector-Quantized VAEs.
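A minimal sketch of drawing a single Gumbel-Softmax sample (the logits and temperature below are illustrative): add Gumbel noise to the logits, then take a tempered softmax, giving a differentiable, approximately one-hot vector.

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Differentiable, approximately one-hot sample from a categorical
    distribution via the Gumbel-Softmax trick."""
    rng = rng or np.random.default_rng()
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel_noise) / temperature
    e = np.exp(y - y.max())
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5])
sample = gumbel_softmax(logits, temperature=0.5, rng=np.random.default_rng(0))
```

Lower temperatures make the sample closer to a hard one-hot choice; higher temperatures make it smoother and easier to train through.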
Probabilistic Graphical Models: Lecture notes from the class taught at Stanford.
A starter blog on AutoEncoders and VAEs: Probably a good place to start.
From neural PCA to deep unsupervised learning : This paper introduces Ladder networks, which will come back when we get to VAEs
Tutorial on Variational Autoencoders: This is a walk-through of the math of VAEs. I think you should maybe start with this one.
Variational Inference, a Review for Statisticians: This explains the math behind variational inference. One of the authors is an inventor of variational inference.
An introduction to variational autoencoders: This is by the inventors of the VAE.
Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function: This is the only paper I’ve found that walks you through all the details to derive the actual loss function.
Conditional VAE: Learning Structured Output Representation using Deep Conditional Generative Models: Making a controllable VAE through conditioning
Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework: This is about making disentangled representations: making the VAE’s latent variables meaningful to us.
Isolating Sources of Disentanglement in VAEs: More on disentangled representations in VAEs
Ladder VAEs: Hierarchical VAEs
Adversarial Auto-Encoders: You can guess what this is.
A Wizard’s Guide to Adversarial Autoencoders: This is a multi-part tutorial that will be helpful for understanding AAEs.
From Autoencoder to Beta-VAE: Lilian Weng’s overview of most kinds of autoencoders
MUSIC VAE: Learning Latent Representations of Music to Generate Interactive Musical Palettes: Making controllable music composition with VAEs
Jukebox: A Neural Net that Generates Music… with a combination of hierarchical VQ-VAEs and autoregressive Transformers
Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations: Using a WAE to generate new drugs
Deep unsupervised learning using nonequilibrium thermodynamics: The 2015 paper where diffusion models were introduced.
Denoising Diffusion Probabilistic Models: This was a break-out paper from 2020 that got people excited about diffusion models.
Generative Modeling by Estimating Gradients of the Data Distribution: This is a blog that explains how score-based models are also basically diffusion models.
An Introduction to Diffusion Models: A nice tutorial blog that has Pytorch code.
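The closed-form forward (noising) process these readings derive can be sketched directly, using a DDPM-style linear beta schedule (the schedule values and toy data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T steps, as in DDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)   # cumulative product: how much signal survives

def q_sample(x0, t, rng):
    """Closed form: q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = np.ones(8)                          # a toy "image"
x_early = q_sample(x0, t=10, rng=rng)    # still close to x0
x_late = q_sample(x0, t=T - 1, rng=rng)  # nearly pure Gaussian noise
```

The trick is that you can jump to any timestep t in one shot, so training can sample random timesteps instead of simulating the whole chain.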
Guidance: a cheat code for diffusion models: if you want to understand DALL-E-2 and Imagen, you need to understand this.
DiffWave: A Versatile Diffusion Model for Audio Synthesis: A neural vocoder done with diffusion from 2021
Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise: Do we need to add noise at each step or would any transform do?
High Fidelity Image Generation Using Diffusion Models: A Google Blog that gives the chain of development that led to Imagen.
Google’s Imagen: This is the Pepsi to DALL-E-2’s Coke.
Diffusion Models Beat GANs on Image Synthesis: This paper describes many technical details used in the GLIDE paper…and therefore in DALL-E-2
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models: This is the paper that lays the groundwork for DALL-E-2.
Learning Transferable Visual Models From Natural Language Supervision: The CLIP representation. This is used in DALL-E-2.
Hierarchical Text-Conditional Image Generation with CLIP Latents: The DALL-E-2 paper.
Extracting Training Data from Diffusion Models: Exactly what it sounds like.
Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models: Just what it sounds like.
The Annotated S4: This is a guided walk through (with code) of a structured state space model.
Competition-level code generation with AlphaCode: This beats 1/2 of all human entrants into a coding competition.
ChatGPT: Already perhaps the most famous chatbot and most famous language model and it has been out about 2 weeks as of this writing.
Riffusion: Repurposes Stable Diffusion to generate spectrograms. A cool open-source project. They should have published this, too.
Toolformer: Language Models Can Teach Themselves to Use Tools: Meta researchers claim a transformer can learn to use APIs.
Pam Samuelson’s AI Meets Copyright. This is a video lecture on generative AI and copyright law from one of the top copyright scholars in the USA.
Consistency Trajectory Models: For single-step diffusion-model sampling, the authors’ Consistency Trajectory Model (CTM) achieves SOTA on CIFAR-10 (FID 1.73) and ImageNet 64×64 (FID 1.92). CTM offers diverse sampling options and balances computational budget against sample fidelity.