Research

  • Robot Learning Reading Group

  • RPL YouTube Channel


Talks and Tutorials

You can learn more about our recent research from our talks and tutorials.

  • Pathway to Generalist Robots: Scaling Law, Data Flywheel, and Humanlike Embodiment. CoRL 2023 Early Career Keynote, November 2023. (website, video, slides)

  • The Data Pyramid for Building Generalist Agents. MIT Embodied Intelligence Seminar, December 2022. (website, video, slides)

  • Objects, Skills, and the Quest for Compositional Robot Autonomy. Stanford Robotics Seminar, February 2022. (website, video, slides)

  • Visual Affordance Learning for Robot Manipulation. Toyota Research Institute, August 2021. (slides)

  • Visual Imitation Learning: Generalization, Perceptual Grounding, and Abstraction. RSS’20 Workshop on Advances & Challenges in Imitation Learning for Robotics, July 2020. (workshop, slides)

  • Building General-Purpose Robot Autonomy: A Progressive Roadmap. Samsung Forum, June 2020. (video, slides)

  • Learning Keypoint Representations for Robot Manipulation. IROS’19 Workshop on Learning Representations for Planning and Control, November 2019. (workshop, slides)

  • Learning How-To Knowledge from the Web. IROS’19 Workshop on the Applications of Knowledge Representation and Semantic Technologies in Robotics, November 2019. (workshop, slides)

  • Closing the Perception-Action Loop: Towards General-Purpose Robot Autonomy. Stanford Ph.D. Defense, August 2019. (dissertation, slides)


Open-Source Software & Data

We work to make scientific research more reproducible and to make knowledge accessible to a broader audience. Open-sourcing research software and datasets is one of our key practices. You can find open-source code and data from our research on the Publications page or on our GitHub. We highlight some public resources below:

  • MineDojo: building open-ended embodied agents with Internet-scale knowledge

  • Deoxys: modular, real-time controller library for Franka Emika Panda arms

  • robomimic: open-source framework for robot learning from demonstration

  • robosuite: modular simulation framework and benchmark for robot learning (see the usage sketch after this list)

  • RoboTurk: large-scale crowdsourced teleoperation dataset for robotic imitation learning

  • SURREAL: distributed reinforcement learning framework and robot manipulation benchmark

  • AI2-THOR: open-source interactive environments for embodied AI

  • Visual Genome: visual knowledge base that connects structured image concepts to language
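
To illustrate how these simulation tools are typically used, here is a minimal sketch of rolling out random actions in a robosuite environment. It assumes the robosuite 1.x Python API (suite.make, env.action_spec) and a working MuJoCo backend; the task and robot names ("Lift", "Panda") are example choices, not requirements.

```python
# Minimal sketch, assuming robosuite 1.x and a working MuJoCo installation.
import numpy as np
import robosuite as suite

# Create a manipulation environment; "Lift" and "Panda" are example choices.
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=False,           # headless rollout
    has_offscreen_renderer=False,
    use_camera_obs=False,         # low-dimensional observations only
)

obs = env.reset()
low, high = env.action_spec       # per-dimension action bounds

# Roll out a short episode with uniformly sampled random actions.
for _ in range(200):
    action = np.random.uniform(low, high)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()

env.close()
```

Rendering and camera observations are disabled here to keep the sketch headless; enabling them is a matter of flipping the corresponding flags when constructing the environment.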


Selected Media Coverage

NVIDIA, October 20, 2023
AI agent uses LLMs to automatically generate reward algorithms to train robots to accomplish complex tasks.

Yahoo! News, June 15, 2023
The video aims to demonstrate DRACO 3's ability to prepare a meal using virtual reality (VR) teleoperation, a robotics technique in which a human operator controls a robot remotely.

TechCrunch, June 2, 2023
AI researchers have built a Minecraft bot that can explore and expand its capabilities in the game's open world — but unlike other bots, this one basically wrote its own code through trial and error and lots of GPT-4 queries.

IEEE Spectrum, February 2, 2021
A variety of new tests seeks to assess AI's ability to reason and learn concepts.

VentureBeat, October 6, 2020
AI researchers say they’ve created a framework for controlling four-legged robots that promises better energy efficiency and adaptability than more traditional model-based gait control of robotic legs.

Tech Xplore, November 21, 2018
In the future, RoboTurk could become a key resource in the field of robotics, aiding the development of more advanced and better performing robots.

Stanford News, October 26, 2018
With a smartphone and a browser, people worldwide will be able to interact with a robot to speed the process of teaching robots how to do basic tasks.

NVIDIA, April 3, 2018
Robots learning to do things by watching how humans do it? That’s the future.

Digital Trends, February 19, 2018
Robots are getting better at dealing with the complexity of the real world, but they still need a helping hand when taking their first tentative steps outside of easily defined lab conditions.

MIT Technology Review, February 16, 2018
A new digital training ground that replicates an average home lets AI learn how to do simple chores like slicing apples, making beds, or carrying drinks in a low-stakes environment.

IEEE Spectrum, February 15, 2018
AI2-THOR, an interactive simulation based on home environments, can prepare AI for real-world challenges.

MIT Technology Review, January 26, 2016
A new database will gauge progress in artificial intelligence, as computers try to grasp what’s going on in scenes shown in photographs.