Welcome to the Robot Perception and Learning (RPL) Lab at the University of Texas at Austin! Our research focuses on two intimately connected threads: Robotics and Embodied AI. We investigate the synergistic relationship between perception and action in embodied agents and build algorithms and systems that give rise to general-purpose robot autonomy.

In Robotics, we develop methods and principles that enable autonomous robots to reason about the real world through their senses, perform a wide range of tasks flexibly, and learn new tasks adaptively. To deploy generalist robots in the wild, we must deal with the variability and uncertainty of unstructured environments. We address this challenge by closing the perception-action loop with robot perception and learning techniques. In Embodied AI, we build computational frameworks of embodied agents. In these frameworks, perception arises from an embodied agent's active, situated, and skillful interactions in the open world, and its ability to make sense of the world through the lens of perception, in turn, facilitates intelligent behaviors.

Our work draws on theories and methods from robotics, machine learning, and computer vision, along with inspiration from human cognition, neuroscience, and philosophy, to solve open problems at the forefront of Robotics and AI. We are always looking for talented members to join our group.


Recent News

  • We released robosuite v1.5, which supports diverse robot embodiments, including humanoids, and improved controller designs.

  • Three papers were accepted at CoRL 2024, including an oral presentation for OKAMI.

  • We released RoboCasa, a large-scale simulation framework of everyday tasks for generalist robots.

  • We have four papers accepted at ICRA 2024 and four at RSS 2024.

  • Yuke gave an early-career keynote talk at CoRL'23 on November 8th. See slides and video here.