This course brings together many disciplines of Artificial Intelligence (including computer vision, robot control, reinforcement learning, and language understanding) to show how to develop intelligent agents that learn to sense the world and to act by imitating others, maximizing sparse rewards, and/or satisfying their curiosity.

Upon completion of this course, you should be able to:

  • Implement and experiment with existing state-of-the-art methods for learning behavioral policies supervised by reinforcement, demonstrations, and/or intrinsic curiosity.
  • Evaluate the sample complexity, generalization, and generality of these algorithms.
  • Understand research papers in the field of robotic learning.
  • Try out your own ideas and extensions of existing methods.

  • Time: Monday/Wednesday/Friday 12:00-1:20 pm
  • Location: Gates-Hillman Center 4401
  • Discussion: Piazza
  • HW submission: Gradescope and Autolab
  • Online lectures: Lectures will be live-streamed and recorded through Panopto.
  • Contact: For external inquiries, personal matters, or emergencies, you can email us at ta-deeprl@lists.andrew.cmu.edu.