
How can robots acquire skills through interactions with the physical world? An interview with Jiaheng Hu


One of the key challenges in building robots for household or industrial settings is the need to master the control of high-degree-of-freedom systems such as mobile manipulators. Reinforcement learning has been a promising avenue for acquiring robot control policies; however, scaling to complex systems has proved difficult. In their work SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL, Jiaheng Hu, Peter Stone and Roberto Martín-Martín introduce a method that makes real-world reinforcement learning feasible for complex embodiments. We caught up with Jiaheng to find out more.

What’s the subject of the analysis in your paper and why is it an fascinating space for research?

This paper is about how robots (specifically, household robots such as mobile manipulators) can autonomously acquire skills by interacting with the physical world (i.e. real-world reinforcement learning). Reinforcement learning (RL) is a general framework for learning from trial-and-error interaction with an environment, and it has huge potential for allowing robots to learn tasks without humans hand-engineering the solution. RL for robotics is a very exciting field, as it could open up possibilities for robots to self-improve in a scalable way, towards the creation of general-purpose household robots that can assist people in our everyday lives.

What were some of the issues with previous methods that your paper was trying to address?

Previously, most successful applications of RL to robotics were achieved by training entirely in simulation and then deploying the policy directly in the real world (i.e. zero-shot sim2real). However, this approach has big limitations. On the one hand, it is not very scalable: you need to create task-specific, high-fidelity simulation environments that closely match the real-world environment in which you want to deploy the robot, and this can take days or months for each task. On the other hand, some tasks are actually very hard to simulate, as they involve deformable objects and contact-rich interactions (for example, pouring water, folding clothes, or wiping a whiteboard). For such tasks, the simulation is often quite different from the real world. This is where real-world RL comes into play: if we can allow a robot to learn by directly interacting with the physical world, we don't need a simulator anymore. However, while a number of attempts have been made at realizing real-world RL, it is actually a very hard problem, for two reasons:

1. Sample inefficiency: RL requires a lot of samples (i.e. interactions with the environment) to learn good behavior, and these are often impossible to collect in large quantities in the real world.
2. Safety issues: RL requires exploration, and random exploration in the real world is often very dangerous. The robot can break itself and may never recover from that.

Could you tell us about the method (SLAC) that you've introduced?

So, creating high-fidelity simulations is very hard, and directly learning in the real world is also really hard. What should we do? The key idea of SLAC is that we can use a low-fidelity simulation environment to assist subsequent real-world RL. Specifically, SLAC implements this idea in a two-step process: in the first step, SLAC learns a latent action space in simulation through unsupervised reinforcement learning. Unsupervised RL is a technique that allows the robot to explore a given environment and learn task-agnostic behaviors. In SLAC, we design a special unsupervised RL objective that encourages these behaviors to be safe and structured.
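To make the first step concrete, here is a minimal sketch of what a safety-aware unsupervised skill-discovery reward could look like, in the spirit of mutual-information objectives such as DIAYN. Everything here (the function, its inputs, the penalty weights) is an illustrative assumption for this article, not the objective actually used in SLAC.

```python
import numpy as np

def intrinsic_reward(discriminator_logits: np.ndarray,
                     z: int,
                     joint_velocities: np.ndarray,
                     safety_violation: bool,
                     safety_weight: float = 10.0) -> float:
    """Per-step reward for executing latent skill z in simulation.

    The first term rewards behaviors a discriminator can tell apart,
    which makes the latent space structured (distinct skills); the
    penalty terms discourage fast or unsafe motion, which makes the
    pretrained skills safer to replay on real hardware.
    """
    # Log-softmax over the discriminator's skill predictions: log q(z | state).
    log_probs = discriminator_logits - np.log(np.sum(np.exp(discriminator_logits)))
    discriminability = float(log_probs[z])
    smoothness_penalty = 0.01 * float(joint_velocities @ joint_velocities)
    safety_penalty = safety_weight if safety_violation else 0.0
    return discriminability - smoothness_penalty - safety_penalty

# Example step: the discriminator is fairly confident that skill 2 is running.
logits = np.array([0.1, -0.3, 2.0, 0.05])
print(intrinsic_reward(logits, z=2,
                       joint_velocities=np.array([0.1, -0.2, 0.05]),
                       safety_violation=False))
```

In setups like this, the discriminator is trained jointly with the policy to infer which skill produced a given state, so behaviors that stay both distinguishable and gentle earn the highest reward.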

In the second step, we treat these learned behaviors as the new action space of the robot, and the robot does real-world RL for downstream tasks, such as wiping whiteboards, by making decisions in this new action space. Importantly, this approach allows us to sidestep the two biggest problems of real-world RL: we don't have to worry about safety issues, since the new action space is pretrained to always be safe; and we can learn in a sample-efficient way, because the new action space is trained to be very structured.
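Here, under the same caveat, is a minimal runnable sketch of the second step. Every name below (LatentActionDecoder, ToyEnv, real_world_rl) is a hypothetical stand-in: a random embedding table replaces the simulation-pretrained decoder, a toy environment replaces the physical robot, and a simple bandit-style learner replaces the downstream RL algorithm.

```python
import numpy as np

class LatentActionDecoder:
    """Maps a discrete latent action z to a whole-body joint command.

    In SLAC this decoder is a neural network pretrained in simulation;
    a fixed random embedding table stands in for it here purely to keep
    the sketch self-contained.
    """
    def __init__(self, n_latents: int, n_joints: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.embeddings = rng.normal(size=(n_latents, n_joints))

    def decode(self, z: int, obs: np.ndarray) -> np.ndarray:
        # A real decoder would also condition on the observation; tanh
        # keeps the commanded joint velocities bounded.
        return np.tanh(self.embeddings[z])


class ToyEnv:
    """Toy stand-in for the real robot: reward is higher the closer the
    commanded motion is to a fixed target motion the agent must discover."""

    target = np.tanh(np.random.default_rng(1).normal(size=7))

    def reset(self) -> np.ndarray:
        self.t = 0
        return np.zeros(4)  # dummy observation

    def step(self, command: np.ndarray):
        self.t += 1
        reward = -float(np.linalg.norm(command - self.target))
        return np.zeros(4), reward, self.t >= 20


def real_world_rl(env, decoder, n_latents: int, episodes: int = 20) -> np.ndarray:
    """Downstream learning: the agent picks among pretrained latent
    actions instead of raw joint commands."""
    q = np.zeros(n_latents)       # running value estimate per latent action
    counts = np.zeros(n_latents)  # visit counts, for optimistic exploration
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            z = int(np.argmax(q + 1.0 / (counts + 1.0)))  # favor rarely-tried z
            obs, reward, done = env.step(decoder.decode(z, obs))
            counts[z] += 1.0
            q[z] += (reward - q[z]) / counts[z]  # incremental mean update
    return q


decoder = LatentActionDecoder(n_latents=8, n_joints=7)
values = real_world_rl(ToyEnv(), decoder, n_latents=8)
print("best latent action:", int(np.argmax(values)))
```

The point of the sketch is the interface: the downstream learner only ever chooses among a handful of pretrained latent actions, never raw joint commands, which is why exploration on the real robot stays safe and sample-efficient.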

The robot carrying out the task of wiping a whiteboard.

How did you go about testing and evaluating your method, and what were some of the key results?

We test our method on a real Tiago robot – a high degrees-of-freedom, bi-manual mobile manipulator – on a series of very challenging real-world tasks, including wiping a large whiteboard, cleaning a desk, and sweeping trash into a bag. These tasks are challenging in three respects:

1. They are visuo-motor tasks that require processing high-dimensional image information.
2. They require whole-body motion of the robot (i.e. controlling many degrees of freedom at the same time).
3. They are contact-rich, which makes them hard to simulate accurately.

On all of these tasks, our method learns high-performance policies (>80% success rate) within an hour of real-world interaction. By comparison, previous methods simply cannot solve these tasks, and often risk breaking the robot. So to summarize, it was previously not possible to solve these tasks through real-world RL, and our method has made it possible.

What are your plans for future work?

I think there is still a lot more to do at the intersection of RL and robotics. My eventual goal is to create truly self-improving robots that can learn entirely by themselves, without any human involvement. More recently, I've been interested in how we can leverage foundation models such as vision-language models (VLMs) and vision-language-action models (VLAs) to further automate the self-improvement loop.

About Jiaheng

Jiaheng Hu is a fourth-year PhD student at UT Austin, co-advised by Prof. Peter Stone and Prof. Roberto Martín-Martín. His research interest is in robot learning and reinforcement learning, with the long-term goal of creating self-improving robots that can learn and adapt autonomously in unstructured environments. Jiaheng's work has been published at top-tier robotics and ML venues, including CoRL, NeurIPS, RSS, and ICRA, and has earned multiple best paper nominations and awards. During his PhD, he interned at Google DeepMind and Ai2, and he is a recipient of the Two Sigma PhD Fellowship.

Read the work in full

SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL, Jiaheng Hu, Peter Stone, Roberto Martín-Martín.




AIhub
is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.





Lucy Smith
is Senior Managing Editor for Robohub and AIhub.

