
Welcome to the Robot Perception and Learning (RPL) Lab at the University of Texas at Austin! Our research focuses on two intimately connected threads: Robotics and Embodied AI. We investigate the synergistic relationship between perception and action in embodied agents and build intelligent algorithms that give rise to general-purpose robot autonomy.
In Robotics, we develop methods and mechanisms that enable autonomous robots to reason about the real world through their senses, to flexibly perform a wide range of tasks, and to adaptively learn new tasks. To deploy general-purpose robot autonomy in the wild, we must cope with the variability and uncertainty of unstructured environments. We address this challenge by closing the perception-action loop with robot perception and learning techniques. In Embodied AI, we build computational frameworks of embodied agents. In these frameworks, perception arises from an embodied agent's active, situated, and skillful interactions in the open world; its ability to make sense of the world through the lens of perception, in turn, facilitates intelligent behaviors.
Our work draws on theories and methods from robotics, machine learning, and computer vision, along with inspiration from human cognition, neuroscience, and philosophy, to solve open problems at the forefront of Robotics and AI. We are always looking for talented members to join our group.
Recent News
Yuke gave an early-career keynote talk at CoRL'23 on November 8th. See slides here.
We are co-organizing a CoRL'23 workshop on Towards Reliable and Deployable Learning-Based Robotic Systems.
Our paper Sirius was nominated as a Best Paper Award Finalist at RSS 2023.
Talk recordings of our CVPR'23 workshop on 3D Vision and Robotics are now available on our YouTube channel.
Our lab has four papers accepted at ICRA 2023, RSS 2023, and ICML 2023.
Our MineDojo work received the Outstanding Paper Award at NeurIPS 2022.