Locomotion

Locomotion covers the controllers, learning pipelines, and reference platforms behind quadruped, biped, and humanoid movement. It spans model-based whole-body control, RL-trained neural controllers, perceptive locomotion over rough terrain, and the GPU-parallel training stacks that make modern sim-to-real locomotion practical.

From an engineering standpoint, locomotion is the category where sim-to-real works — modern RL policies for quadrupeds and humanoids transfer reliably when training is done with massive parallelism and well-tuned domain randomisation. The interesting risks have moved up the stack: robustness to terrain and disturbances, perceptive integration with cameras and depth, and the safety envelope around fast, dynamic motion near humans.
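The combination above — thousands of parallel environments, each with its own randomised physics — is what makes the transfer reliable. A minimal sketch of per-environment domain randomisation at episode reset (the parameter names and ranges here are illustrative assumptions, not the tuned values from any particular codebase):

```python
import random

# Illustrative domain-randomisation ranges (hypothetical, untuned values):
# each parallel environment resamples its physics at every episode reset,
# so a policy trained across thousands of envs never overfits one simulator.
RANDOMIZATION = {
    "friction":        (0.5, 1.25),   # ground friction coefficient
    "added_base_mass": (-1.0, 3.0),   # kg added to the trunk link
    "motor_strength":  (0.9, 1.1),    # multiplier on commanded torques
}

def sample_env_params(rng: random.Random) -> dict:
    """Draw one randomized parameter set for a single simulated environment."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

def reset_envs(num_envs: int, seed: int = 0) -> list:
    """Resample physics for all parallel environments, as done at episode reset."""
    rng = random.Random(seed)
    return [sample_env_params(rng) for _ in range(num_envs)]

params = reset_envs(num_envs=4096)
print(len(params), sorted(params[0]))
```

The design point is that randomisation is sampled per environment, not globally: within a single training batch the policy already sees the whole distribution of plausible real-world physics.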

When choosing between approaches, weigh platform (quadruped vs. biped vs. humanoid), simulator and trainer (Isaac Lab + RSL-RL is the current default), and perceptive vs. blind locomotion based on the terrain you actually need. Reference platforms and open codebases matter disproportionately here: most production locomotion stacks are forks of a small number of well-maintained repositories.

Start here

Learning to Walk in Minutes (ETH/RSL) is the canonical entry point: GPU-parallel RL with a working sim-to-real pipeline for quadrupeds. It pairs naturally with RSL-RL (the PPO trainer) and legged_gym (the environments), both listed below.
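The "minutes" in the title comes from rollout arithmetic: many short-horizon environments stepped in lockstep on the GPU. A structural sketch of that pattern — the policy and simulator here are stand-in stubs (assumptions for illustration), not the RSL-RL or legged_gym APIs:

```python
import random

# Sketch of the GPU-parallel rollout pattern behind minutes-scale training:
# thousands of environments advanced in lockstep, with short rollouts per
# PPO update. NUM_ENVS and HORIZON are illustrative, not the paper's exact values.
NUM_ENVS = 4096
HORIZON = 24

def policy(obs):
    # stub: a real policy is a small MLP evaluated for all envs in one batch
    return [random.uniform(-1.0, 1.0) for _ in obs]

def step_all(obs, actions):
    # stub: a real simulator advances every environment in a single call
    rewards = [1.0 for _ in obs]
    return obs, rewards

obs = [0.0] * NUM_ENVS
total_steps = 0
for _ in range(HORIZON):
    actions = policy(obs)
    obs, rewards = step_all(obs, actions)
    total_steps += NUM_ENVS

print(total_steps)  # 98304 env steps collected in one rollout phase
```

Even a modest 24-step horizon yields roughly 10^5 environment steps per update at this scale, which is why wall-clock training time collapses from days to minutes.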