If you’re looking to build a fast simulation and reinforcement learning (RL) training framework for a quadruped locomotion task, you’ve come to the right place! This guide walks through setting up and running an RL-MPC (Model Predictive Control) locomotion system that uses a dynamic weight prediction mechanism. Let’s dive in!
Overview of the Framework
This project is a machine learning framework for efficient quadruped robot locomotion built around a hierarchical control scheme. At the top tier, a high-level policy network handles decision-making by dynamically predicting the weights of the MPC cost function; the lower tier runs an MPC controller rewritten in Python from the original Cheetah Software.
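To make the hierarchy concrete, here is a minimal sketch of the idea in Python. The class and method names (WeightPolicy, mpc.solve, and so on), the observation dimension, and the number of weights are all hypothetical placeholders, not the repository’s actual code:

import torch.nn as nn

class WeightPolicy(nn.Module):
    # Hypothetical high-level policy: maps the robot state to MPC cost weights.
    def __init__(self, obs_dim=48, num_weights=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ELU(),
            nn.Linear(128, num_weights),
        )

    def forward(self, obs):
        # Softplus keeps the predicted cost weights positive.
        return nn.functional.softplus(self.net(obs))

# Sketch of the hierarchy at run time:
# weights = policy(robot.get_observation())       # high level: dynamic weight prediction
# torques = mpc.solve(robot.get_state(), weights) # low level: MPC tracks with those weights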
Dependencies
- Python 3.8
- PyTorch 1.10.0 with CUDA 11.3
- Isaac Gym Preview 4
- OSQP 0.6.2
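Before installing anything else, it can help to confirm that the active environment matches these versions. A minimal check using only PyTorch and the standard library:

import sys
from importlib.metadata import version
import torch

print("Python:", sys.version.split()[0])      # expect 3.8.x
print("PyTorch:", torch.__version__)          # expect 1.10.0
print("CUDA build:", torch.version.cuda)      # expect 11.3
print("CUDA available:", torch.cuda.is_available())
print("OSQP:", version("osqp"))               # expect 0.6.2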
Installation Steps
- Clone the repository:
git clone git@github.com:silvery107/rl-mpc-locomotion.git
- Initialize the submodules:
git submodule update --init
- Create the conda environment:
conda env create -f environment.yml
- Install rsl_rl (pinned to a specific commit) under the extern folder:
cd extern/rsl_rl
pip install -e .
- Back in the repository root, compile the Python bindings of the MPC solver:
pip install -e .
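After these steps, a short smoke test confirms the key packages import cleanly. Note that Isaac Gym must be imported before torch; the compiled MPC binding is omitted here because its module name depends on the build:

# Run from the repository root after installation.
import isaacgym  # must come before torch in Isaac Gym projects
import torch
import rsl_rl

print("isaacgym OK; torch", torch.__version__, "; rsl_rl OK")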
Quick Start Guide
1. Play the MPC Controller on Aliengo:
python RL_MPC_Locomotion.py --robot=Aliengo
To control the robot, plug in an Xbox-style gamepad, or pass the --disable-gamepad argument to run without one.
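For intuition, gamepad input ultimately reduces to a few velocity commands for the controller. The mapping below is a hypothetical illustration; the axis assignments and scale factors are assumptions, not the project’s actual bindings:

def gamepad_to_command(left_x, left_y, right_x,
                       max_vx=1.0, max_vy=0.5, max_yaw_rate=1.5):
    # Hypothetical mapping: left stick sets linear velocity,
    # right stick (horizontal axis) sets yaw rate.
    vx = -left_y * max_vx   # pushing the stick forward is usually negative y
    vy = -left_x * max_vy
    yaw_rate = -right_x * max_yaw_rate
    return vx, vy, yaw_rate

print(gamepad_to_command(0.0, -1.0, 0.2))  # full speed ahead with a slight turn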
2. Train a New Policy:
cd RL_Environment
python train.py task=Aliengo headless=False
You can toggle viewer updates with the v key. Set headless=True to train without rendering. TensorBoard support is included; run tensorboard --logdir runs to visualize metrics.
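Under the hood, TensorBoard support typically amounts to writing scalars during the training loop. A minimal sketch of that pattern (the run directory and tag names are illustrative, not the project’s actual logger):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/Aliengo_example")  # hypothetical run directory
for step in range(100):
    mean_reward = step * 0.01  # placeholder for the real training metric
    writer.add_scalar("Train/mean_reward", mean_reward, step)
writer.close()
# Then inspect with: tensorboard --logdir runs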
3. Load a Pretrained Checkpoint:
python train.py task=Aliengo checkpoint=runs/Aliengo_nn/Aliengo.pth test=True num_envs=4
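If you want to inspect a checkpoint outside the training script, the standard PyTorch pattern applies. This is a sketch; the keys actually stored in Aliengo.pth depend on the trainer and may differ:

import torch

ckpt = torch.load("runs/Aliengo_nn/Aliengo.pth", map_location="cpu")
# Checkpoints are usually dicts; list the top-level keys to see what is stored.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
# A model would then be restored with something like:
# policy.load_state_dict(ckpt["model"])  # the "model" key is an assumption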
4. Run the Pretrained Weight-Policy for MPC Controller on Aliengo:
Set bridge_MPC_to_RL to False in MPC_Controller/Parameters.py, then run:
python RL_MPC_Locomotion.py --robot=Aliengo --mode=Policy --checkpoint=path/to/ckpt
If no checkpoint path is provided, it will load the latest model automatically.
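Loading “the latest model” usually means picking the most recently modified checkpoint on disk. A minimal sketch of that logic (the runs/ layout is an assumption):

import glob
import os

def latest_checkpoint(pattern="runs/**/*.pth"):
    # Return the most recently modified checkpoint matching the pattern.
    candidates = glob.glob(pattern, recursive=True)
    return max(candidates, key=os.path.getmtime) if candidates else None

print(latest_checkpoint())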
Understanding the Code
Think of the RL MPC Locomotion framework as a well-organized kitchen with a chef (the high-level policy network) and a sous-chef (the low-level MPC controller) working together to create a delicious meal (the final output). The chef decides the overall approach, while the sous-chef handles the specific tasks of chopping and mixing (controlling the robot’s movements) efficiently and dynamically. The sous-chef receives fresh orders from the chef based on the current situation, tweaking its methods to match the chef’s vision.
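In code, the chef/sous-chef relationship is a two-rate loop: the policy revises its orders occasionally, while the MPC executes at a much higher rate with the latest orders. A schematic sketch with made-up names and a made-up decimation factor:

POLICY_DECIMATION = 10  # hypothetical: MPC steps per policy step

def control_loop(policy, mpc, robot, num_steps=1000):
    weights = None
    for step in range(num_steps):
        if step % POLICY_DECIMATION == 0:
            # The "chef" revises its orders based on the current situation.
            weights = policy(robot.get_observation())
        # The "sous-chef" executes with the latest orders.
        torques = mpc.solve(robot.get_state(), weights)
        robot.apply_torques(torques)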
Troubleshooting
If you encounter any issues during installation or execution, consider the following troubleshooting ideas:
- Ensure you have the correct version of dependencies installed as specified.
- Check for any error messages in the terminal, which can guide you on missing packages or syntax errors in scripts.
- Verify the connection of the gamepad and its compatibility with the provided controls.
- If the simulation doesn’t work as expected, review the settings for Isaac Gym and the submodule integrations; a quick way to verify the Isaac Gym installation itself is shown after this list.
- For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
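To verify the Isaac Gym installation on its own, run one of the examples bundled with the preview release; joint_monkey.py ships with Isaac Gym Preview and needs no project code:

cd isaacgym/python/examples
python joint_monkey.py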
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations. Happy coding!