This course brings together many disciplines of Artificial Intelligence (computer vision, robot control, reinforcement learning, and language understanding) to show how to develop intelligent agents that learn to sense the world and learn to act by imitating others, maximizing sparse rewards, and/or satisfying their curiosity.

Course Goals:

Upon completion of the course, students should be able to:

  • Implement and experiment with existing state-of-the-art methods for learning behavioral policies supervised by reinforcement, demonstrations, and/or intrinsic curiosity.
  • Evaluate the sample complexity, generalization, and generality of these algorithms.
  • Understand research papers in the field of robotic learning.
  • Try out new ideas and extensions to existing methods.

Prerequisite Knowledge:

Students should have a solid understanding of the following areas:

  • Algorithms: e.g., What problem does Dijkstra’s algorithm solve?
  • Probability: e.g., What is Bayes' rule? How do you normalize a distribution? (See the worked examples after this list.)
  • Computer vision: convolutional networks, object detection architectures, LSTMs, attention models
  • Deep Learning: familiarity with TensorFlow and/or PyTorch.
  • Matrix Calculus: e.g., What are the derivatives of matrix-matrix and matrix-vector products? What is the multivariate chain rule? (See the worked examples after this list.)
  • Programming: e.g., What are classes and inheritance? How do you read data from files? How do you plot figures to visualize results?
  • Numerical programming: e.g., How would you perform an elementwise product instead of an inner product? How do you invert a matrix? (See the code sketch after this list.)
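
As a self-check for the probability and matrix calculus items above, here is a minimal worked example; the notation (a scalar loss L, weight matrix W, input vector x) is ours, not the course's.

    % Bayes' rule for events A and B:
    \[
    P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
    \]
    % Derivative of a matrix-vector product y = Wx, and the multivariate
    % chain rule through a scalar loss L(y):
    \[
    y = Wx \quad\Rightarrow\quad \frac{\partial y}{\partial x} = W, \qquad
    \frac{\partial L}{\partial W} = \frac{\partial L}{\partial y}\, x^{\top}, \qquad
    \frac{\partial L}{\partial x} = W^{\top}\, \frac{\partial L}{\partial y}
    \]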
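
And a minimal NumPy sketch for the numerical programming item; using NumPy here is our assumption, since the course does not prescribe a specific library.

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    elementwise = a * b        # array([ 4., 10., 18.]) -- elementwise product
    inner = a @ b              # 32.0 -- inner (dot) product, a scalar

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    A_inv = np.linalg.inv(A)   # explicit matrix inverse

    # In practice, solving A x = c directly is more stable than
    # forming the inverse and multiplying:
    c = np.array([1.0, 2.0])
    x = np.linalg.solve(A, c)  # same result as A_inv @ c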

Prerequisites:

  • Prerequisites: 10601 Introduction to Machine Learning
  • Minimum Grades: C in 10601
  • Corequisites: None
  • Anti-requisites: None
  • Anti-req Prohibits: None

Logistics:

  • Lectures: Monday, Wednesday 12:30 PM - 1:50 PM
  • Recitations: Friday 12:30 PM - 1:50 PM
  • Lecture/Recitation Location: GHC 4215
  • Discussion: Piazza
  • HW submission: Gradescope