Machine Learning is the study of algorithms that improve automatically through experience. Topics typically covered include Bayesian learning, decision trees, support vector machines, reinforcement learning, Markov models, and neural networks.
Lecture: Monday, Wednesday 3:00PM - 4:20PM, Tech L211
Recitation: Friday 3:00PM - 3:50PM, Tech L221 (Kim), Tech L251 (Pishdadian), Tech L168 (EECS Doctoral)
Prof. Bryan Pardo Office Hours: Ford Building, Room 3-323, Mon 4:30-5:30pm
Bongjun Kim Office Hours: Ford Building, Room 3-317 (West Lounge), Tue 3:30-5:30pm
Fatemeh Pishdadian Office Hours: Ford Building, Room 3-317 (West Lounge), Fri 1:00-3:00pm
Grading: You can earn up to 110 points. Each test and assignment is worth 10 points. Grades are assigned on a 100-point basis: 93-100 is an A, 90-92 is an A-, 87-89 is a B+, 83-86 is a B, 80-82 is a B-, and so on.
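For concreteness, here is a minimal Python sketch of that scale. The function name is ours, and the cutoffs below B- simply extend the stated "and so on" pattern, so treat them as an assumption:

```python
def letter_grade(points):
    """Map points earned (out of a possible 110) to a letter grade on a 100-point basis."""
    # Cutoffs from the syllabus; entries below 80 extend the "...and so on"
    # pattern and are assumed, not stated explicitly.
    scale = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
             (77, "C+"), (73, "C"), (70, "C-"), (67, "D+"), (63, "D"), (60, "D-")]
    for cutoff, grade in scale:
        if points >= cutoff:
            return grade
    return "F"
```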
Extra Credit: The final homework is an extra credit assignment. No other extra credit will be assigned.
Late Policy: Assignments are due on Canvas by 11:59pm on the due date; Canvas is the only way assignments are accepted. Late assignments are docked 2 points per day, starting IMMEDIATELY. For example, an assignment handed in at 12:00am the next day loses 2 points, and an assignment that is 3 days late loses 6 points.
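A short sketch of that deduction arithmetic, assuming any fraction of a day counts as a full day (which the 12:00am example implies; the function name is ours):

```python
import math

def late_penalty(hours_late):
    """Points docked: 2 per day late; any partial day counts as a full day (assumed)."""
    if hours_late <= 0:
        return 0
    return 2 * math.ceil(hours_late / 24.0)

# e.g., one minute past the deadline -> 2 points; 3 full days late -> 6 points
```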
Cheating & Academic Dishonesty: Do your own work. Academic dishonesty will be dealt with as laid out in the student handbook. Penalties include failing the class and may be more severe. If you are unsure whether something counts as cheating, ask before submitting your work.
Attendance is not graded.
Announcements and discussions will take place on the course Piazza page. You can sign up for the page here.
Week | Date | Topic | Assigned | Due | Points |
---|---|---|---|---|---|
1 | Wed Sep 20 | Decision Trees | HW 1: Decision trees | | |
1 | Fri Sep 22 | Python 2.7 | | | |
2 | Mon Sep 25 | Measuring Distance | | | |
2 | Wed Sep 27 | Nearest Neighbor Classifiers | | | |
2 | Fri Sep 29 | TBD | HW 2: KNNs | HW 1 | 10 |
3 | Mon Oct 2 | Linear Regression | | | |
3 | Wed Oct 4 | Linear Discriminants | | | |
3 | Fri Oct 6 | TBD | HW 3: Regression | HW 2 | 10 |
4 | Mon Oct 9 | Support Vector Machines | | | |
4 | Wed Oct 11 | Support Vector Machines | | | |
4 | Fri Oct 13 | Midterm preparation | | HW 3 | 10 |
5 | Mon Oct 16 | MIDTERM | | MIDTERM | 10 |
5 | Wed Oct 18 | Collaborative Filtering | | | |
5 | Fri Oct 20 | NO CLASS | HW 4: Collaborative Filters | | |
6 | Mon Oct 23 | Naive Bayesian Classifiers | | | |
6 | Wed Oct 25 | Experimental Validation | | | |
6 | Fri Oct 27 | TBD | HW 5: Naive Bayes | HW 4 | 10 |
7 | Mon Oct 30 | Expectation Maximization | | | |
7 | Wed Nov 1 | Gaussian Mixture Models | | | |
7 | Fri Nov 3 | TBD | HW 6: GMMs | HW 5 | 10 |
8 | Mon Nov 6 | Reinforcement Learning | | | |
8 | Wed Nov 8 | Reinforcement Learning | | | |
8 | Fri Nov 10 | scikit-learn | HW 7: RL | HW 6 | 10 |
9 | Mon Nov 13 | Neural Networks | | | |
9 | Wed Nov 15 | Neural Networks | | | |
9 | Fri Nov 17 | TensorFlow | HW 8: Neural Networks | HW 7 | 10 |
10 | Mon Nov 20 | Neural Networks | | | |
10 | Wed Nov 22 | Neural Networks | | | |
10 | Fri Nov 24 | NO CLASS | HW 9: Extra Credit | | |
11 | Mon Nov 27 | Boosting | | HW 8 | 10 |
11 | Wed Nov 29 | Active Learning | | | |
11 | Fri Dec 1 | Final Preparation | | | |
12 | Mon Dec 4 | | | HW 9 | 10 |
12 | Fri Dec 8 | Final exam (3:00-5:00pm) | | | 10 |
Anaconda: The most popular Python distribution for machine learning
Scikit-learn: The most popular machine learning package for Python
TensorFlow: The most popular Python deep neural network (DNN) package
Keras: A nice Python API for TensorFlow
My guide to installing Keras and TensorFlow on macOS
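To see how these tools fit together, here is a minimal sketch of a Keras model running on the TensorFlow backend. The toy data, layer sizes, and training settings are placeholders for illustration, not course material:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data: 100 examples with 4 features each, binary labels (placeholder, not course data)
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

# A small feed-forward network; the layer sizes here are arbitrary
model = Sequential([
    Dense(8, activation="relu", input_shape=(4,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16)
```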
Working through an ID3 example
The Wikipedia article on distance
Section 2.2.3 of An Introduction to Statistical Learning
The String-to-string Correction Problem
Sections 3.1, 3.2, 4.2, 4.4 of An Introduction to Statistical Learning
A Tutorial on Support Vector Machines
Chapter 9 of An Introduction to Statistical Learning
Chapter 2 of Recommender Systems: An Introduction
*** Still looking for a good reading on Naive Bayes here ***
EM Demystified: An Expectation-Maximization Tutorial
A Tutorial on Hidden Markov Models
Chapters 3 and 6 of Reinforcement Learning: An Introduction
Chapter 1 of Parallel Distributed Processing
Blog: A hacker’s guide to neural nets
A Brief Introduction to Boosting
Improving Generalization with Active Learning