Computational creativity is a multidisciplinary field that lies at the intersection of artificial intelligence, cognitive psychology, philosophy, and the arts. The field is concerned with the theoretical and practical issues in the study of creativity. The goal of computational creativity is to achieve one of the following:
1. To construct a program or computer capable of human-level creativity.
2. To better understand creativity and to formulate an algorithmic perspective on creative behavior.
3. To design programs that can enhance human creativity without necessarily being creative themselves.
In this course, students will read and discuss theoretical writings on the nature and proper definition of creativity. They will also read about and experiment with existing computational creativity systems (e.g. David Cope’s Experiments in Musical Intelligence and Google’s Project Magenta). In parallel, they will perform practical work implementing systems aimed at achieving one of the three goals listed above.
Week | Date | Topic | Due |
---|---|---|---|
1 | Sep 26 | Basics of deep nets. Can computers create art? | |
2 | Oct 3 | What is Creativity? Who owns creative works? | 5 reviews |
3 | Oct 10 | Algorithmic music composition | 4 reviews |
4 | Oct 17 | Algorithmic music composition | 4 reviews |
5 | Oct 24 | No class: prepare your proposal | 4 reviews |
6 | Oct 31 | Algorithmic image generation | initial proposal |
7 | Nov 7 | Algorithmic image generation | project plan |
8 | Nov 14 | Cross-modal generation | 4 reviews |
9 | Nov 21 | Text and story generation | 4 reviews |
10 | Nov 28 | No class: Thanksgiving | |
11 | Dec 5 | Support rather than supplant? | project website |
12 | Dec 10 (9-11am) | Final project presentation | final presentation |
You will submit 25 reviews of readings/videos/music from the course website. Each will be a single-page reaction to something you read/watched/heard from the links provided below. Each review will be worth 2 points. Reviews are due on the schedule shown in the course calendar.
Once during the term, you will be the lead person discussing the reading in class. This means you haven't just read the paper: you've read related work, really understand it, and can give a brief presentation of the paper (including slides) and then lead a discussion about it. (10 points)
Each week (even weeks when you're not presenting) you are expected to show up having read the papers, ready to discuss ideas. Each week you show up and substantially contribute to the discussion, you get 1 point. If you don't show up, or you don't say anything that week, you don't get the point.
You will make, modify, and/or analyze some work or project in computational creativity. This may mean modifying MusicVAE or making a simple story generator of your own. It may mean downloading an existing thing and experimenting with it, or it may mean building a new thing. It may mean making a program that analyzes creativity, or a creativity aid…or something I haven't been able to come up with. The point breakdown for the project is as follows:
Recent student projects can be found here.
Chapter 4 from Machine Learning
Convolutional Networks for Images, Speech, and Time-Series
Slides of the NeurIPS GAN tutorial
On the Future of Computers and Creativity
The “Can Computers Create Art?” Talk on video
Deepfake Salvador Dalí takes selfies with museum visitors
Max Morrison: What is KL Divergence?
Andong Li Zhao: Approaches to Measuring Creativity: A Systematic Literature Review
Max Morrison: Quantifying the Creativity Support of Digital Tools through the Creativity Support Index
Brandon Harris: Monkey Selfie Copyright Dispute
The Standard Definition of Creativity
Artificial Intelligence and Music: Open Questions of Copyright Law and Engineering Praxis
Come to Andrew McPherson’s talk and react to that: 3rd floor lecture room, Mudd building. Noon, Oct 9.
Andreas Bugler: Chapter 4 of Formalized Music: Thought and Mathematics in Composition by Xenakis. **You can review up to 3 chapters of this book. Each chapter is worth 1 point.**
Connor Bain: David Cope and Experiments in Musical Intelligence
A Universal Music Translation Network
The output of Facebook's Universal Music Translation Network
S Dadabots. This is a project where deep nets generate entire albums of content. Note there are several papers and albums on this site. If a student presenter wants to pick one, I'm open to that.
Jacob Kelter: A tutorial on Variational Autoencoders
Stretch reading goal: A more in-depth tutorial on Variational Autoencoders (academic)
Learning Latent Representations of Music to Generate Interactive Musical Palettes. This is the MusicVAE paper.
The ISMIR 2019 tutorial on generating music with GANs
Alexander Fang: DeepBach: a Steerable Model for Bach Chorales Generation
Music Generation by Deep Learning - Challenges and Directions
Deep Learning Techniques for Music Generation – A Survey
DeepBach Example Output & Code
Jack Wiig: Coloring without seeing: A problem in machine creativity
Kevin Chan: Painterly Rendering for Video and Interaction
Cooper Barth: The video-to-video paper
A video tutorial on video generation with GANs
A Neural Algorithm of Artistic Style
Marko Sterbentz: Image Style Transfer Using Convolutional Neural Networks
Thomas Young: GANGogh: Creating Art with GANs
Google’s psychedelic ‘paint brush’ raises the oldest question in art
The deep dream image generator
A Practical guide to build your first Deep Dream Experience
S Generative Adversarial Text-to-image Synthesis
S StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks
A video explanation of StackGAN
Jayden Soni: Conditional LSTM-GAN for Melody Generation from Lyrics
S Algorithmic Songwriting with ALYSIA
Gabriel Caniglia: Image-to-image translation with conditional adversarial networks
Source code for Conditional LSTM-GAN for Melody Generation from Lyrics
Demo of Pix2Pix network from the Image-to-Image Translation paper
S Attention models (AKA Transformer networks): Attention is all you need (academic)
Sarah Ahmad: Narrative Planning: Balancing Plot and Character
Lisa Cox: Event representations for automated story generation with deep neural nets
Alisa Liu: Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
The Shape of Games to Come: Critical Digital Storytelling in the Era of Communicative Capitalism. This is a dissertation, so it's long.
Popular press on OpenAI's writing system
The writer in the machine: Automatic story generation
AI Wrote a Road Trip Novel. Is it a good read?
You can buy '1 the Road' here
OpenAI's blog about the GPT-2 Language Model
The actual OpenAI GPT-2 Language Model
Talk to Transformer lets you write a starting sentence and see a paragraph of GPT-2 text generated in response.
Sunspring: A 9 minute movie whose script was written by the “Jetson” LSTM
Siddhartha Pamidighantam: Fashion++: Minimal Edits for Outfit Improvement
Katherine O'Toole: Learning to build Natural Audio Production Interfaces
S (iGAN) Generative Visual Manipulation on the Natural Image Manifold
Anaconda is the most popular Python distribution for ML work.
Jupyter is the most popular notebook and visualization framework for Python ML development.
Scikit-learn is the most popular general ML framework.
The PyTorch homepage gets you started on the easier of the two popular deep learning frameworks. It has source code, blogs, and tutorials.
The TensorFlow homepage gets you started on the harder of the two popular deep learning frameworks. It has source code, blogs, and tutorials.
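Once you have the tools above installed, a quick way to confirm the toolchain works before tackling a project like MusicVAE is to train a toy model end to end. This is a minimal sketch, not part of any course assignment; the data (y = 2x) and hyperparameters are made up for illustration:

```python
# Sanity check for a PyTorch install: fit y = 2x with a single linear layer.
import torch

torch.manual_seed(0)

# Toy dataset: 64 points on the line y = 2x.
x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
y = 2.0 * x

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Full-batch gradient descent on mean squared error.
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# model.weight should converge toward 2.0 and model.bias toward 0.0.
```

If this runs without errors and the learned weight lands near 2.0, your environment is ready for the larger generative models discussed in class.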