Talks and Tutorials

You can learn more about our recent research from our talks and tutorials.

  • The Data Pyramid for Building Generalist Agents. MIT Embodied Intelligence Seminar, December 2022. (website, video, slides)

  • Objects, Skills, and the Quest for Compositional Robot Autonomy. Stanford Robotics Seminar, February 2022. (website, video, slides)

  • Visual Affordance Learning for Robot Manipulation. Toyota Research Institute, August 2021. (slides)

  • Visual Imitation Learning: Generalization, Perceptual Grounding, and Abstraction. RSS’20 Workshop on Advances & Challenges in Imitation Learning for Robotics, July 2020. (workshop, slides)

  • Building General-Purpose Robot Autonomy: A Progressive Roadmap. Samsung Forum, June 2020. (video, slides)

  • Learning Keypoint Representations for Robot Manipulation. IROS’19 Workshop on Learning Representations for Planning and Control, November 2019. (workshop, slides)

  • Learning How-To Knowledge from the Web. IROS’19 Workshop on the Applications of Knowledge Representation and Semantic Technologies in Robotics, November 2019. (workshop, slides)

  • Closing the Perception-Action Loop: Towards General-Purpose Robot Autonomy. Stanford Ph.D. Defense, August 2019. (dissertation, slides)

Open-Source Software & Data

We devote effort to making scientific research more reproducible and to making knowledge accessible to a broader audience. Open-sourcing research software and datasets is one of our key practices. You can find open-source code and data from our research on the Publications page or on our GitHub. We highlight some public resources below:

  • MineDojo: building open-ended embodied agents with Internet-scale knowledge

  • Deoxys: modular, real-time controller library for Franka Emika Panda arms

  • robomimic: open-source framework for robot learning from demonstration

  • robosuite: modular simulation framework and benchmark for robot learning

  • RoboTurk: large-scale crowdsourced teleoperation dataset for robotic imitation learning

  • SURREAL: distributed reinforcement learning framework and robot manipulation benchmark

  • AI2-THOR: open-source interactive environments for embodied AI

  • Visual Genome: visual knowledge base that connects structured image concepts to language

Media Coverage (Selected)

IEEE Spectrum, February 2, 2021
A variety of new tests seek to assess AI's ability to reason and learn concepts.

VentureBeat, October 6, 2020
AI researchers say they’ve created a framework for controlling four-legged robots that promises better energy efficiency and adaptability than more traditional model-based gait control of robotic legs.

Tech Xplore, November 21, 2018
In the future, RoboTurk could become a key resource in the field of robotics, aiding the development of more advanced, better-performing robots.

Stanford News, October 26, 2018
With a smartphone and a browser, people worldwide will be able to interact with a robot to speed the process of teaching robots how to do basic tasks.

NVIDIA, April 3, 2018
Robots learning to do things by watching how humans do it? That’s the future.

Digital Trends, February 19, 2018
Robots are getting better at dealing with the complexity of the real world, but they still need a helping hand when taking their first tentative steps outside of easily defined lab conditions.

MIT Technology Review, February 16, 2018
A new digital training ground that replicates an average home lets AI learn how to do simple chores like slicing apples, making beds, or carrying drinks in a low-stakes environment.

IEEE Spectrum, February 15, 2018
AI2-THOR, an interactive simulation based on home environments, can prepare AI for real-world challenges.

MIT Technology Review, January 26, 2016
A new database will gauge progress in artificial intelligence, as computers try to grasp what’s going on in scenes shown in photographs.