Getting Started with PantheonRL: Your Guide to Multi-Agent Reinforcement Learning

Jul 22, 2023 | Data Science

Welcome to PantheonRL, a package designed for training and testing agents in multi-agent reinforcement learning environments! Whether you’re fine-tuning agent policies or conducting lightweight experimentation, PantheonRL serves as a robust and modular tool that can help streamline your AI projects. Let’s delve into how you can set it up and make the most of its features!

Installation: A Step-by-Step Guide

To get PantheonRL up and running, follow these straightforward installation steps:

  • Optionally create a conda environment:
    conda create -n PantheonRL python=3.7
    conda activate PantheonRL
  • Downgrade setuptools and wheel for gym compatibility:
    pip install setuptools==65.5.0 wheel==0.40.0
  • Clone and install PantheonRL:
    git clone https://github.com/Stanford-ILIAD/PantheonRL.git
    cd PantheonRL
    pip install -e .
  • Optionally install the Overcooked environment:
    git submodule update --init --recursive
    pip install -e overcookedgym/human_aware_rl/overcooked_ai
  • Optionally install PettingZoo environments:
    pip install pettingzoo
    pip install pettingzoo[classic]
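Once the steps above finish, it can be worth confirming that the expected dependencies actually resolve before launching a training run. Below is a minimal sketch; the module names `gym` and `stable_baselines3` are assumptions based on PantheonRL's Stable-Baselines3/Gym stack, so adjust them to match your installation:

```python
import importlib.util
import sys

def check_install(required_modules=("gym", "stable_baselines3")):
    """Report whether the interpreter and expected dependencies look usable.

    Note: the default module names are assumptions about PantheonRL's
    dependency stack, not an official list.
    """
    report = {"python_ok": sys.version_info >= (3, 7)}
    for name in required_modules:
        # find_spec returns None when the module cannot be imported
        report[name] = importlib.util.find_spec(name) is not None
    return report

print(check_install())
```

If any entry reports False, re-run the corresponding installation step before moving on.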

Command Line Invocation

To start a training session with your agents, run:

python3 trainer.py LiarsDice-v0 PPO PPO --seed 10 --preset 1

If you installed Overcooked, use the following command:

python3 trainer.py OvercookedMultiEnv-v0 PPO PPO --env-config layout_name:simple --seed 10 --preset 1
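The `--env-config layout_name:simple` flag passes environment options as `key:value` pairs. To make that convention concrete, here is an illustrative re-implementation of such a parser; it is a sketch, not PantheonRL's actual argument-handling code, and it keeps all values as strings:

```python
def parse_env_config(pairs):
    """Parse `key:value` strings (as passed via --env-config) into a dict.

    Illustrative only -- PantheonRL's real parser may differ.
    """
    config = {}
    for pair in pairs:
        key, sep, value = pair.partition(":")
        if not sep:
            raise ValueError(f"expected key:value, got {pair!r}")
        config[key] = value
    return config

print(parse_env_config(["layout_name:simple"]))  # {'layout_name': 'simple'}
```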

Web User Interface

The user interface of PantheonRL allows you to manage training sessions easily. The first time you run the web interface, you’ll need to initialize the database:

export FLASK_APP=website
export FLASK_ENV=development
flask init-db
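On platforms where `export` is unavailable (for example, Windows), the same configuration can be set from Python before invoking Flask. A minimal sketch of the equivalent setup:

```python
import os

# Equivalent of `export FLASK_APP=website` and `export FLASK_ENV=development`
os.environ["FLASK_APP"] = "website"
os.environ["FLASK_ENV"] = "development"

print(os.environ["FLASK_APP"])
```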

Finally, to start the web user interface:

flask run --host=0.0.0.0 --port=5000

Ensure ports 5000 and 5001 (for Tensorboard) are available.
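Whether those ports are actually free can be checked from Python before launching the interface. A small sketch using only the standard library:

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on the given TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when a connection succeeds (port in use)
        return s.connect_ex((host, port)) != 0

# 5000 serves the web UI, 5001 serves Tensorboard
for port in (5000, 5001):
    print(port, "free" if port_free(port) else "in use")
```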

Understanding How PantheonRL Works

Think of PantheonRL as a collaborative workshop where each agent is akin to a different craftsman working on separate projects. Each craftsman (agent) has their own toolbox (replay buffer and update algorithm) and skill set (update policies). Since they work independently yet collaboratively, modifications made by one craftsman can inspire changes in another. Just like in a workshop, the agents can be rearranged and paired in various ways – whether it’s self-play, round-robin training, or adapting to new partners. This design allows for greater creativity and flexibility in solving tasks.
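The workshop analogy can be sketched in code. The toy `Agent` class below is illustrative only (it is not PantheonRL's actual Agent API): each agent keeps its own private replay buffer and its own update step, and agents can be paired in arbitrary ways while still learning only from their own experience:

```python
class Agent:
    """Toy agent with its own replay buffer and update rule,
    mirroring the 'separate toolbox per craftsman' idea.
    Illustrative only -- not PantheonRL's real Agent class."""

    def __init__(self, name):
        self.name = name
        self.buffer = []          # private replay buffer
        self.policy_updates = 0   # stand-in for an update algorithm

    def observe(self, transition):
        self.buffer.append(transition)

    def update(self):
        # Each agent learns only from its own stored experience.
        self.policy_updates += 1
        return f"{self.name}: updated on {len(self.buffer)} transitions"

# Agents can be paired flexibly: self-play, round-robin, new partners.
ego, partner = Agent("ego"), Agent("partner")
for step in range(3):
    joint_obs = (step, "shared observation")
    ego.observe(joint_obs)
    partner.observe(joint_obs)

print(ego.update())      # ego: updated on 3 transitions
print(partner.update())  # partner: updated on 3 transitions
```

Swapping `partner` for a different agent mid-training is what enables the round-robin and adaptation setups described above.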

Troubleshooting: Common Issues and Solutions

If you encounter problems during installation or while running PantheonRL, here are some quick troubleshooting tips:

  • Installation Issues: Ensure you have installed Python 3.7 and necessary dependencies. Check your conda environment by running conda info --envs.
  • Web Interface Not Starting: Verify that ports 5000 and 5001 are free and not in use by other applications. You may need to stop services using these ports.
  • Environment Not Found: Make sure that the Overcooked or PettingZoo environments were correctly installed as per the earlier instructions.
  • Database Issues: If you face issues with the database, confirm the environment variables are correctly set and re-run the flask init-db command.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

With its unique features and flexibility, PantheonRL is your gateway to diving deep into multi-agent reinforcement learning. From customized training setups to the user-friendly web interface, PantheonRL expands the possibilities of what AI can achieve. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

Stay Informed with the Newest F(x) Insights and Blogs

Tech News and Blog Highlights, Straight to Your Inbox