Embarking on a journey through reinforcement learning can feel like exploring a dense forest full of intriguing paths and hidden treasures. But worry not! This guide aims to simplify your expedition, making the often-complex concepts of reinforcement learning more tangible and accessible. In this blog, we will cover everything from installing the notebooks to running them seamlessly. Let’s dive in!
Table of Contents
- Notebooks Installation
- Docker Tips
- Donkey Analogy for Reinforcement Learning
- Troubleshooting
Notebooks Installation
This repository houses Jupyter Notebooks designed for your reinforcement learning journey. However, before you can embark on this adventure, you’ll need a few tools installed.
1. Install Git
To install Git, follow the official installation instructions for your operating system at git-scm.com.
2. Install Docker
Follow the official installation guide at docs.docker.com to set up Docker.
3. Run Notebooks
There are two ways to follow the setup: a quick TL;DR version and a more detailed walkthrough.
TL;DR Version
- Clone the repository:

```shell
git clone git@github.com:mimoralea/applied-reinforcement-learning.git
```

- Change into the directory:

```shell
cd applied-reinforcement-learning
```

- Pull and run the Docker image:

```shell
docker pull mimoralea/openai-gym:v1 && \
docker run -it --rm -p 8888:8888 -p 6006:6006 \
    -v $PWD/notebooks:/mnt/notebooks mimoralea/openai-gym:v1
```
A Little More Detailed Version:
- Clone the repository to a desired location:

```shell
git clone git@github.com:mimoralea/applied-reinforcement-learning.git ~/Projects/applied-reinforcement-learning
```

- Change into the directory:

```shell
cd ~/Projects/applied-reinforcement-learning
```

- Build or pull the Docker image:
  - To build it locally:

    ```shell
    docker build -t mimoralea/openai-gym:v1 .
    ```

  - To pull it from Docker Hub:

    ```shell
    docker pull mimoralea/openai-gym:v1
    ```

- Run the container:

```shell
docker run -it --rm -p 8888:8888 -p 6006:6006 \
    -v $PWD/notebooks:/mnt/notebooks mimoralea/openai-gym:v1
```
Open the notebooks in your browser at http://localhost:8888. For TensorBoard, visit http://localhost:6006 to visualize the neural networks during your lessons.
Docker Tips
Here are some helpful commands to make your experience smoother:
- To access a bash session of a running container:

```shell
docker ps  # show running containers and their IDs
docker exec --user root -it <container_id> /bin/bash
```

- To start a new container instance directly into bash:

```shell
docker run -it --rm mimoralea/openai-gym:v1 /bin/bash
```
Donkey Analogy for Reinforcement Learning
Imagine you’re trying to train a donkey to follow a path and find the juiciest apples. The donkey represents your agent, and every time it makes a decision, it gets either closer or farther from its goal (the apples). In reinforcement learning, like with your donkey, the agent learns through trial and error.
- If the donkey chooses the right direction and finds apples, it receives a reward (like a sugary treat).
- If it heads the wrong way and encounters a thorn bush, it gets a penalty, i.e., a negative reward (a small prick, maybe!).
Over time, the donkey starts to associate its actions with the rewards and punishments, learning to approach the apples faster. This dynamic is much like how algorithms learn to make the best decisions based on outcomes.
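The donkey analogy maps directly onto tabular Q-learning. Here is a minimal, self-contained sketch of that idea (the five-state path, the rewards, and the hyperparameters are invented for illustration, not taken from the repository’s notebooks):

```python
import random

# States 0..4 form a path: state 0 is a thorn bush (reward -1),
# state 4 holds the apples (reward +1), everything else is neutral.
# Actions: 0 = step left, 1 = step right.
N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # the donkey's learned values

def step(state, action):
    """Move the donkey one step; return (next_state, reward, done)."""
    next_state = state + (1 if action == 1 else -1)
    if next_state == N_STATES - 1:
        return next_state, 1.0, True    # found the apples
    if next_state == 0:
        return next_state, -1.0, True   # walked into the thorn bush
    return next_state, 0.0, False

random.seed(0)
for _ in range(500):                    # trial-and-error episodes
    state, done = 2, False              # start in the middle of the path
    while not done:
        # epsilon-greedy: mostly exploit what's learned, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge toward reward + discounted best future value
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

print(Q[2])  # the "right" entry should end up higher than "left"
```

After enough episodes, the value of stepping right from the middle of the path exceeds the value of stepping left, which is exactly the donkey learning to head toward the apples.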
Troubleshooting
Should you encounter any issues while setting up or running the notebooks, consider the following tips:
- Ensure all dependencies are installed properly. Sometimes, missing packages can stall your progress.
- If you have trouble pulling the Docker image, check your internet connection or try restarting Docker.
- To resolve access issues, make sure the ports (8888 and 6006) are not being used by other applications.
- For command-related errors, double-check your syntax and ensure that you’re in the correct directory.
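To diagnose the port-conflict case above, a short Python snippet can tell you whether 8888 and 6006 are already taken before you start the container (a convenience sketch; the helper name `port_in_use` is my own, not part of the repository):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

for port in (8888, 6006):
    status = "in use" if port_in_use(port) else "free"
    print(f"port {port}: {status}")
```

If a port reports "in use", either stop the conflicting application or remap the container, e.g. `-p 8889:8888` in the `docker run` command.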
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
So, whether you’re a seasoned programmer or a curious beginner, remember that with the right tools and community, the exciting realm of reinforcement learning is at your fingertips!

