The Carnegie Mellon University team’s teleoperation system is based on reinforcement learning and features scalable retargeting and training facilitated by vast human motion datasets.
Owing to their physical resemblance to people, humanoid robots offer a unique potential for real-time teleoperation. The team aimed to use a single RGB camera to translate human movements into humanoid behaviors in real time. Beyond direct control, this technology could also enable the collection of large-scale, high-quality human operation data for robots, to which imitation learning can then be applied on human-teleoperated tasks.
Recent developments in reinforcement learning for humanoid control offer a viable alternative. In the graphics community, reinforcement learning has been used to produce intricate human movements, carry out varied tasks, and follow real-time human motions captured by a webcam in simulation.
Utilizing a comprehensive full-body motion imitator akin to the perpetual humanoid controller, the team proposes to train entirely in simulation and transition to real-world deployment zero-shot. This empowers the system to achieve real-time teleoperation of humanoid robots by a human operator through a simple webcam interface.
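The control loop described above — a webcam pose estimate retargeted to robot joint targets, which the learned policy then tracks — can be sketched as follows. This is a minimal illustration, not the team's implementation: the joint count, limits, linear retargeting map, and PD-style stand-in for the RL policy are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical constants: a 19-DoF humanoid with symmetric joint limits.
NUM_JOINTS = 19
JOINT_LOW = -np.pi * np.ones(NUM_JOINTS)
JOINT_HIGH = np.pi * np.ones(NUM_JOINTS)

def retarget(human_pose: np.ndarray, scale: float = 0.8) -> np.ndarray:
    """Map estimated human joint angles to robot joint targets.

    Real retargeting solves an optimization accounting for link-length
    differences; here we use a simple scaled copy clipped to joint limits.
    """
    return np.clip(scale * human_pose, JOINT_LOW, JOINT_HIGH)

def policy(obs: np.ndarray, target: np.ndarray, kp: float = 0.5) -> np.ndarray:
    """Stand-in for the trained RL imitation policy: PD-like tracking."""
    return kp * (target - obs)

# One control step: a mock pose-estimator output (radians) drives the robot.
robot_state = np.zeros(NUM_JOINTS)
human_pose = np.full(NUM_JOINTS, 0.5)
target = retarget(human_pose)
action = policy(robot_state, target)
robot_state = robot_state + action
```

In a deployed system this loop would run at control frequency, with the pose estimator and policy replaced by their learned counterparts trained in simulation.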