Tech M152
WED 5pm - 7pm Central time
FRI 5pm - 7pm Central time
Bryan Pardo: office hours by appointment
Patrick O’Reilly: office hours 2-4 PM Saturdays on Zoom (see Campuswire for the Zoom link)
Deep learning is a branch of machine learning based on algorithms that try to model high-level abstract representations of data by using multiple processing layers with complex structures. One of the most exciting areas of research in deep learning is that of generative models. Today’s generative models create text documents, write songs, make paintings and videos, and generate speech. This course is dedicated to understanding the inner workings of the technologies that underlie these advances. Students will learn about key methodologies, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Transformer-based language models. This is an advanced course that presumes a good working understanding of traditional supervised neural network technology and techniques (e.g. convolutional networks, LSTMs, loss functions, regularization, gradient descent).
The prerequisite is CS 449 Deep Learning.
Week | Day and Date | Topic | Presenter | Commentators |
---|---|---|---|---|
1 | Wed March 29 | Course overview | Pardo | |
1 | Wed March 29 | Autoregressive language models | Pardo | |
1 | Fri March 31 | Attention | Pardo | |
1 | Fri March 31 | Transformers: The Illustrated Transformer | Pardo | |
2 | Wed April 5 | NO CLASS | | |
2 | Fri April 7 | Embeddings: The Illustrated Word2Vec | Pardo | |
2 | Fri April 7 | Positional Encoding | Pardo | |
3 | Wed April 12 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Shruthi, Shikher | Ruichun, Aanchal, Gaurav |
3 | Wed April 12 | The Curious Case of Neural Text Degeneration | Pardo | Zhuoyang, ShaoBo, Marco |
3 | Fri April 14 | Quantifying Memorization Across Neural Language Models | Kartikeya, Simon | Milind, Ruichun, Zhuoyang |
3 | Fri April 14 | Discussion about language models and ethics | | |
4 | Wed April 19 | Reformer: The Efficient Transformer | Ruichun, Zhuoyang | Simon, Gaurav, Shikher |
4 | Wed April 19 | Reinforcement Learning | | |
4 | Fri April 21 | More reinforcement learning | Pardo | |
4 | Fri April 21 | Deep Reinforcement Learning: Pong from Pixels | Pardo | Tony, Rohin, Gautam |
5 | Wed April 26 | Deep reinforcement learning from human preferences | Pardo | Tony, Surya |
5 | Wed April 26 | Training language models to follow instructions with human feedback | Pardo | |
5 | Fri April 28 | NO CLASS | | |
6 | Wed May 3 | Final Projects | Pardo | |
6 | Wed May 3 | Variational Autoencoders | Pardo | |
6 | Fri May 5 | Zero-Shot Text-to-Image Generation | Uzair, Vishal | ShaoBo, Marco, Dev |
6 | Fri May 5 | VQ-VAE: Neural Discrete Representation Learning | Gautam | Venky, Milind, Uzair |
7 | Wed May 10 | MusicLM: Generating Music From Text | Bin | Shruthi, Venky |
7 | Wed May 10 | Jukebox: A Neural Net that Generates Music | David, Rohin | Xiaopeng |
7 | Fri May 12 | GANs | Pardo | |
7 | Fri May 12 | DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Aanchal, Venky | Vishal, David, Andy |
8 | Wed May 17 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | Xiaopeng | David |
8 | Wed May 17 | StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks | Milind, Gaurav | Dev, Mingfu, Vishal |
8 | Fri May 19 | Diffusion models | Pardo | |
8 | Fri May 19 | Score models | Pardo | |
9 | Wed May 24 | Learning Transferable Visual Models From Natural Language Supervision | Tony, Surya | Mingfu, Xiaopeng, Gautam |
9 | Wed May 24 | Guidance: a cheat code for diffusion models | Mingfu | Andy, Kartikeya, Shikher |
9 | Fri May 26 | Final project meetings | Pardo | |
10 | Wed May 31 | GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models | ShaoBo, Marco | Kartikeya, Surya, Simon |
10 | Wed May 31 | Hierarchical Text-Conditional Image Generation with CLIP Latents | Dev | Bin, Andy, Shruthi |
10 | Fri June 2 | Google’s Imagen | Pardo | Marco, Rohin, ShaoBo |
10 | Fri June 2 | Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models | Andy | Bin, Aanchal, Zhuoyang |
11 | Wed June 7 | Final Project Presentations from 5-7pm, at our normal class time | Pardo | |
You will do a guided exploration of generative modeling technology.
You will submit 15 reviews of readings from the course website. 10 of these must be papers (not lecture slides, actual papers) scheduled for presentation in the course calendar. The remaining 5 can be chosen from the full set of papers for the course. NOTE: As part of evaluating your reading, we will be meeting with you to discuss your thoughts.
Once during the term, you (and your partner) will take the lead in discussing a reading in class. This means you haven’t just read the paper: you’ve read related work, you really understand it, and you can give a 30-minute presentation of the paper (including slides) and then lead a discussion about it.
For two presentations OTHER than your own, you’ll be expected to be 100% on top of the material and serve as the counterpoint to the presenter’s point of view. I’ll expect you to make good points and display clear knowledge of the material.
You will make, modify, and/or analyze some work, project, or subdomain in generative modeling. This may mean modifying MusicVAE or building a story generator on top of GPT-3. It may mean downloading an existing system and experimenting with it, or it may mean building something new. Duplicating a paper’s results is always a great project. It could also be a deep-dive literature review on a subtopic (a good first step towards writing a paper)… or something else, subject to approval of the proposal. The point breakdown for the project is as follows. There will be a maximum of 10 projects in the class. Students are encouraged to pair up.
This will be a group project. You will be in groups of 2 or 3. There will be no single projects.
Pixel Recurrent Neural Networks: A highly influential autoregressive model for image generation
WaveNet: A Generative Model for Raw Audio: A highly influential autoregressive model for audio generation
Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) This is a good starting point blog on attention models, which is what Transformers are built on.
Sequence to Sequence Learning with Neural Networks: This is the paper that the link above was trying to explain.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation: This introduces encoder-decoder networks for translation. Attention models were first built on this framework.
Neural Machine Translation by Jointly Learning to Align and Translate: This paper introduces additive attention to an encoder-decoder.
Effective Approaches to Attention-based Neural Machine Translation: This paper introduces multiplicative attention, which is what Transformers use.
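To make the attention readings above concrete, here is a minimal PyTorch sketch of scaled dot-product (multiplicative) attention, the operation Transformers are built on. This is my own toy illustration, not code from any of the papers above; the tensor shapes and optional mask handling are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (batch, seq_len, d_k). Returns the attended values and the attention weights."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5      # (batch, seq_q, seq_k) similarity scores
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)                # each query's distribution over keys
    return weights @ v, weights

# Toy usage with random tensors
q = torch.randn(1, 5, 16)
k = torch.randn(1, 5, 16)
v = torch.randn(1, 5, 16)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)   # torch.Size([1, 5, 16]) torch.Size([1, 5, 5])
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with the key dimension and saturating the softmax.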
Deep Residual Learning for Image Recognition: This introduces the idea of “residual layers”: layers whose input is added back to their output through a skip connection, so the layer can effectively be skipped. This idea is used in Transformers.
The Illustrated Word2Vec: Transformers for text take word embeddings as input. So what’s a word embedding? This is a walk through word embeddings, at a high level, with no math.
Efficient Estimation of Word Representations in Vector Space: This is the Word2Vec paper.
GloVe: Global Vectors for Word Representation: The paper that describes the GloVe embedding, which is an improvement on Word2Vec, and has downloadable embeddings to try. There is math here.
Using the Output Embedding to Improve Language Models: In transformers, they actually learn their embeddings at the same time as everything else and tie the input embedding to the output embedding. This paper explains why.
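If the weight-tying idea isn’t clicking, here is what it looks like in code. This is my own minimal PyTorch sketch (the class name, sizes, and the absence of any transformer blocks are simplifications), not the paper’s implementation.

```python
import torch.nn as nn

class TinyLM(nn.Module):
    """A minimal language-model head illustrating tied input/output embeddings."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(d_model, vocab_size, bias=False)
        self.proj.weight = self.embed.weight   # tie: the output projection reuses the input embedding matrix

    def forward(self, token_ids):
        h = self.embed(token_ids)   # (batch, seq, d_model); a real model would add transformer blocks here
        return self.proj(h)         # (batch, seq, vocab_size) logits
```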
The Illustrated Transformer: A good initial walkthrough that helps a lot with understanding transformers. **I’d start with this one to learn about transformers.**
The Annotated Transformer: An annotated walk-through of the “Attention is All You Need” paper, complete with a detailed Python implementation of a transformer. **If you actually want to understand transformer implementation, you should read this in depth… and play with the code.**
Attention is All You Need: The paper that introduced transformers, which are a popular and more complicated kind of attention network.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: A widely-used language model based on Transformer encoder blocks.
The Illustrated GPT-2: A good overview of GPT-2 and its relation to Transformer decoder blocks.
The Curious Case of Neural Text Degeneration: When you sample from the output of a language model, it matters a LOT just how you sample. Read this to understand why.
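Here is a minimal sketch of the kind of sampling that paper argues for (nucleus, or top-p, sampling), written in PyTorch. This is my own illustration with toy logits, not the authors’ code; the function name and cutoff handling are assumptions.

```python
import torch
import torch.nn.functional as F

def nucleus_sample(logits, p=0.9):
    """Sample one token id from the smallest set of tokens whose cumulative probability exceeds p."""
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    cut = cumulative > p                 # True for tokens past the nucleus boundary
    cut[1:] = cut[:-1].clone()           # shift right so the token that crosses p stays in the nucleus
    cut[0] = False                       # always keep the most probable token
    sorted_probs[cut] = 0.0
    sorted_probs /= sorted_probs.sum()   # renormalize over the nucleus
    idx = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_ids[idx]

# Toy vocabulary of 10 tokens with random scores
logits = torch.randn(10)
print(nucleus_sample(logits, p=0.9))
```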
GPT-3: Language Models are Few-Shot Learners: This explores the range of things that you can do with a GPT model.
Training language models to follow instructions with human feedback: This combines RL with a GPT model to make InstructGPT, the precursor to ChatGPT
Learning Transferable Visual Models From Natural Language Supervision: This describes how DALL-E selects which of the many images it generates should be shown to the user.
Image GPT: Using a Transformer to make images. This isn’t DALL-E, even though it’s by OpenAI.
Self-attention with relative position representations: This is what got relative positional encoding started.
Reformer: The Efficient Transformer: This uses locality-sensitive hashing to make attention much more efficient, reducing its cost from O(n^2) to O(n log n). This is a better paper to read than the “Transformers are RNNs” paper (below), in that it is much clearer with its math and ideas.
Zero-Shot Text-to-Image Generation: This is the original version of DALL-E, which generates images conditioned on text captions. It is based on Transformer architecture.
Music Transformer: Applying Transformers to music composition.
wav2vec: Unsupervised Pre-training for Speech Recognition: This describes a way to build a dictionary of audio tokens that is used in MusicLM
Wav2vec 2.0: Learning the structure of speech from raw audio: The 2nd iteration of wav2vec
W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training: Masked inference language model method to learn audio tokens. This is what is actually used in MusicLM
MusicLM: Generating Music From Text: A model generating music audio from text descriptions such as “a calming violin melody backed by a distorted guitar riff”.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?: This is the paper that Timnit Gebru and Margaret Mitchell got fired from Google’s Ethical AI team for publishing.
Alignment of Language Agents: This is Deep Mind’s critique of their own approach.
Extracting Training Data from Large Language Models: Did GPT-2 memorize a Harry Potter book? Read this and find out.
Quantifying Memorization Across Neural Language Models: Systematic experiments on how model size, prompt length, and frequency of an example in the training set impact our ability to extract memorized content.
OpenAI’s analysis of GPT-4’s potential harms: Worth a serious read
Reinforcement Learning: An Introduction: This is an entire book, but it is the one I learned RL from.
Policy Gradient Methods: Tutorial and New Frontiers: This is a video lecture that explains reinforcement learning policy gradient methods. This is the underlying tech used for training ChatGPT. Yes, this video is worth a “reading” credit. Yes, I started it 37 minutes into the lecture on purpose. You don’t have to watch the first half of the lecture.
Andrej Karpathy’s blog on Deep Reinforcement Learning: When combined with the video tutorial above, you’ll more-or-less understand policy gradient methods for deep reinforcement learning
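For a concrete anchor, here’s a toy REINFORCE-style policy-gradient loss in PyTorch, in the spirit of that blog post. This is my own sketch, not code from the blog; the normalize-the-returns baseline and the variable names are assumptions for illustration.

```python
import torch

def policy_gradient_loss(log_probs, returns):
    """REINFORCE: scale the log-probability of each taken action by the return that followed it."""
    returns = torch.as_tensor(returns, dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)   # normalize returns as a crude baseline
    return -(torch.stack(list(log_probs)) * returns).sum()          # minimize the negative expected return

# Toy usage: pretend we took 4 actions and logged their log-probs and eventual returns
log_probs = [torch.tensor(-0.7, requires_grad=True) for _ in range(4)]
loss = policy_gradient_loss(log_probs, [1.0, -1.0, 0.5, 2.0])
loss.backward()
```

In a real agent, the log-probs come from the policy network’s output distribution and the returns come from playing out episodes.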
Proximal Policy Optimization Algorithms: The paper that (mostly) explains the RL approach used in InstructGPT (the precursor to ChatGPT)
This blog on RL from human feedback: read the paper linked at the start of the blog. It teaches how to learn a reward function from human feedback, so you can do RL.
This blog on Aligning language models to follow instructions: explains how ChatGPT is fine-tuned to answer prompts by combining proximal policy optimization and RL from human feedback (the two previous items on this list).
Generative Adversarial Nets: The paper that introduced GANs
2016 Tutorial on Generative Adversarial Networks by one of the creators of the GAN. This one’s long, but good.
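To go with the two readings above, here is a toy PyTorch training loop showing the adversarial game they describe: a discriminator learns to tell real data from generated data while the generator learns to fool it. Everything here (the tiny MLPs, the 2-D “data”, the learning rates) is a made-up minimal example rather than any paper’s setup; it uses the standard non-saturating generator loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy generator and discriminator (DCGAN uses conv nets; this only illustrates the objective)
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))   # outputs a real-vs-fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(32, 2) + torch.tensor([3.0, 3.0])   # stand-in "real" data

for step in range(100):
    z = torch.randn(real.size(0), 64)
    fake = G(z)
    ones, zeros = torch.ones(real.size(0), 1), torch.zeros(real.size(0), 1)

    # Discriminator update: label real samples 1, generated samples 0
    d_loss = F.binary_cross_entropy_with_logits(D(real), ones) + \
             F.binary_cross_entropy_with_logits(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update (non-saturating loss): try to make D label fakes as real
    g_loss = F.binary_cross_entropy_with_logits(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```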
DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: This is an end-to-end model. Many papers build on this.
Progressive Growing of GANs for Improved Quality, Stability, and Variation: This is used in StyleGAN and was state-of-the-art in 2018.
StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks: As of 2019, this was the current state-of-the-art for GAN-based image generation.
StyleGAN2-ADA: Training Generative Adversarial Networks with Limited Data: As of 2020, this was the current state-of-the-art for GAN-based image generation.
Cross-Modal Contrastive Learning for Text-to-Image Generation: As of 2021, this was the best GAN for text-conditioned image generation. Note its use of contrastive loss. You’ll see that again in CLIP.
Adversarial Audio Synthesis: Introduces WaveGAN, a model that applies GANs to unsupervised synthesis of raw-waveform audio.
MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis: Doing speech synthesis with GANs.
HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis: Even better speech synthesis with GANs.
The Deep Learning Book’s Chapters on Probability and Linear Algebra. Read these before the Easy Intro to KL Divergence
An Easy Introduction to Kullback-Leibler (KL) Divergence. Read this before reading about ELBO
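If you want to sanity-check your reading, here is a tiny numeric example of KL divergence between two discrete distributions. The numbers are my own toy values, and the helper assumes both distributions put nonzero mass everywhere.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, in nats. Assumes p > 0 and q > 0 everywhere."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
print(kl_divergence(p, q), kl_divergence(q, p))   # the two values differ: KL is not symmetric
```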
Jensen’s Inequality (an example with code). Read this before reading about ELBO
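And a quick numeric check of Jensen’s inequality for a concave function (log): the log of the mean is at least the mean of the logs. This is my own toy example, not the code from that link.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)   # any positive random variable will do
print(np.log(x.mean()))                        # log E[X]
print(np.log(x).mean())                        # E[log X]  -- always <= log E[X], since log is concave
```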
Evidence Lower Bound, Clearly Explained: A video walking through Evidence Lower Bound.
A walkthrough of Evidence Lower Bound (ELBO): Lecture notes from David Blei, one of the inventors of ELBO. ELBO is what you optimize when you do variational inference in a VAE.
Categorical Reparameterization with Gumbel-Softmax: This is a way of allowing categorical latent variables in your model so you can run a differentiable gradient descent algorithm through them. This is used in Vector-Quantized VAEs.
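PyTorch ships this trick as torch.nn.functional.gumbel_softmax; here is a minimal usage sketch. The toy logits and temperature value are arbitrary choices of mine.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 8, requires_grad=True)   # 4 samples over 8 categories
# Soft samples are fully differentiable; hard=True returns one-hot vectors in the forward
# pass while gradients still flow through the soft relaxation (straight-through estimator).
soft = F.gumbel_softmax(logits, tau=1.0, hard=False)
hard = F.gumbel_softmax(logits, tau=1.0, hard=True)
print(soft.sum(dim=-1), hard.sum(dim=-1))   # each row sums to 1
```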
Probabilistic Graphical Models: Lecture notes from the class taught at Stanford.
A starter blog on AutoEncoders and VAEs: Probably a good place to start.
From neural PCA to deep unsupervised learning: This paper introduces Ladder networks, which will come back when we get to VAEs
Tutorial on Variational Autoencoders: This is a walk-through of the math of VAEs. I think you should maybe start with this one.
Variational Inference, a Review for Statisticians: This explains the math behind variational inference. One of the authors is an inventor of variational inference.
An introduction to variational autoencoders: This is by the inventors of the VAE.
Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function: This is the only paper I’ve found that walks you through all the details to derive the actual loss function.
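As a companion to that derivation, here is what the resulting loss (the negative ELBO) typically looks like in PyTorch, with the closed-form KL between a diagonal Gaussian posterior and a unit Gaussian prior, plus the reparameterized sampling step. This is a generic sketch under a Gaussian-decoder assumption, not code from the tutorial.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, diag(exp(logvar))) in a way gradients can flow through."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x, reduction="sum")               # -log p(x|z) up to constants (Gaussian decoder)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # closed-form Gaussian KL
    return recon + kl
```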
Conditional VAE: Learning Structured Output Representation using Deep Conditional Generative Models: Making a controllable VAE through conditioning
Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework: This is about making disentangled representations: making the VAE’s latent variables meaningful to us.
Isolating Sources of Disentanglement in VAEs: More on disentangled representations in VAEs
Ladder VAEs: Hierarchical VAEs
Adversarial Auto-Encoders: You can guess what this is.
A Wizard’s Guide to Adversarial Autoencoders: This is a multi-part tutorial that will be helpful for understanding AAEs.
From Autoencoder to Beta-VAE: Lilian Weng’s overview of most kinds of autoencoders
MUSIC VAE: Learning Latent Representations of Music to Generate Interactive Musical Palettes: Making controllable music composition with VAEs
Jukebox: A Neural Net that Generates Music… with a combination of autoencoders and GANs
Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations: Using a WAE to generate new drugs
Deep unsupervised learning using nonequilibrium thermodynamics: The 2015 paper where diffusion models were introduced.
Denoising Diffusion Probabilistic Models: This was a break-out paper from 2020 that got people excited about diffusion models.
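Here is a minimal sketch of the forward (noising) process those papers define, sampled in closed form at an arbitrary timestep. The linear beta schedule matches the one reported in the DDPM paper; the shapes and toy batch are my own illustration, and the learned reverse (denoising) network is omitted entirely.

```python
import torch

# Forward process: q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule from the DDPM paper
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise=None):
    """Jump straight to timestep t of the forward diffusion in closed form."""
    if noise is None:
        noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))   # broadcast over the data dimensions
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

x0 = torch.randn(8, 3, 32, 32)               # a batch of toy "images"
t = torch.randint(0, T, (8,))                # a random timestep per example
x_t = q_sample(x0, t)                        # training input for a noise-prediction network eps_theta(x_t, t)
```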
Generative Modeling by Estimating Gradients of the Data Distribution: This is a blog that explains how score-based models are also basically diffusion models.
An Introduction to Diffusion Models: A nice tutorial blog that has PyTorch code.
Guidance: a cheat code for diffusion models: if you want to understand DALL-E-2 and Imagen, you need to understand this.
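The core trick in that post boils down to one line: mix the conditional and unconditional noise predictions and push past the conditional one. A hedged sketch (the function and argument names are mine, and different papers parameterize the guidance weight slightly differently):

```python
def guided_noise_prediction(eps_uncond, eps_cond, guidance_scale=3.0):
    """Classifier-free guidance: extrapolate in the direction suggested by the conditioning.

    guidance_scale = 1.0 recovers the plain conditional prediction; larger values trade
    sample diversity for samples that match the conditioning more strongly.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```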
DiffWave: A Versatile Diffusion Model for Audio Synthesis: A neural vocoder done with diffusion from 2021
Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise: Do we need to add noise at each step or would any transform do?
High Fidelity Image Generation Using Diffusion Models: A Google Blog that gives the chain of development that led to Imagen.
Google’s Imagen: This is the Pepsi to DALL-E-2’s Coke.
Diffusion Models Beat GANs on Image Synthesis: This paper describes many technical details used in the GLIDE paper…and therefore in DALL-E-2
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models: This is the paper that lays the groundwork for DALL-E-2.
Learning Transferable Visual Models From Natural Language Supervision: The CLIP representation. This is used in DALL-E-2.
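The heart of CLIP is a symmetric contrastive loss over a batch of matched image/text embedding pairs. Here is a minimal PyTorch sketch, with my own simplifications: a fixed temperature instead of CLIP’s learned one, and random embeddings standing in for the image and text encoders.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over the batch similarity matrix; matched pairs sit on the diagonal."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature     # (batch, batch) cosine similarities
    targets = torch.arange(image_emb.size(0))           # the i-th image matches the i-th caption
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: a batch of 8 matched (image, text) embeddings of dimension 512
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```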
Hierarchical Text-Conditional Image Generation with CLIP Latents: The DALL-E-2 paper.
Extracting Training Data from Diffusion Models: Exactly what it sounds like.
Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models: Just what it sounds like.
Stable Attribution is a project that seeks to identify which training data images contributed to an image generated by the Stable Diffusion generative model.
Efficiently Modeling Long Sequences with Structured State Spaces
The Annotated S4: This is a guided walk through (with code) of a structured state space model.
Competition-level code generation with AlphaCode: This beats 1/2 of all human entrants into a coding competition.
ChatGPT: Already perhaps the most famous chatbot and most famous language model, and it has been out for only about 2 weeks as of this writing.
Riffusion: Repurposes Stable Diffusion to generate spectrograms. Cool open-source project. They should have published this, too.
Toolformer: Language Models Can Teach Themselves to Use Tools: Meta researchers claim a transformer can learn to use APIs.