How to Train an Autonomous Driving Agent with CARLA and Deep Reinforcement Learning

Aug 16, 2020 | Data Science

Welcome to the fascinating world of autonomous driving! In this article, we are going to guide you through the steps to train your very own driving agent using deep reinforcement learning, utilizing the Proximal Policy Optimization (PPO) algorithm within the CARLA simulator. Buckle up, as we delve into requirements, installation, examples, and much more.
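Before diving in, it helps to know what PPO actually optimizes: a clipped surrogate objective that keeps each policy update close to the previous policy. As a rough NumPy illustration of that idea (not the exact loss implementation used in this project):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from the PPO paper.

    ratio     -- pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage -- estimated advantage of each sampled action
    eps       -- clipping range (0.2 is the paper's suggested default)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the element-wise minimum makes the objective pessimistic:
    # large policy changes cannot inflate it, so updates stay conservative.
    return np.minimum(unclipped, clipped).mean()
```

In practice the negative of this quantity is minimized by gradient descent, alongside a value-function loss and an entropy bonus.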

Understanding the Basics

To explain how our driving agent operates, think of it as a student learning to drive a car. Initially, it starts with basic maneuvers in an empty parking lot before attempting more complex driving scenarios on busy streets. This gradual progression is facilitated by the notion of Curriculum Learning—each stage becomes increasingly challenging, helping the agent to build competence with each lesson.
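The curriculum idea boils down to a loop over stages of increasing difficulty, where each stage starts from the weights learned in the previous one. A minimal sketch of that loop (the stage names and settings below are invented for illustration and do not match the project's actual stage definitions):

```python
# Hypothetical curriculum: each entry is (stage_name, environment_settings).
# These values are illustrative only, not the repository's real stages.
CURRICULUM = [
    ("empty-lot",     {"traffic": 0,  "pedestrians": 0}),
    ("quiet-streets", {"traffic": 10, "pedestrians": 5}),
    ("busy-town",     {"traffic": 50, "pedestrians": 30}),
]

def run_curriculum(train_stage, episodes_per_stage=5):
    """Train on each stage in order, building competence incrementally."""
    results = []
    for name, settings in CURRICULUM:
        # train_stage is expected to reuse the weights produced by the
        # previous stage, so skills transfer from easy to hard scenarios.
        results.append(train_stage(name, settings, episodes_per_stage))
    return results
```

The project realizes the same pattern with its `learning.stage_s*` functions, shown later in the examples section.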

Requirements

Before diving in, let’s make sure you have everything you need to get started:

  • Software:
    • Python 3.7
    • CARLA 0.9.9
    • Libraries listed in requirements.txt (install with pip install -r requirements.txt)
  • Hardware (minimum):
    • CPU: at least a quad-core (octa-core recommended)
    • GPU: a dedicated GPU with as much memory as possible
    • RAM: at least 16 GB (32 GB recommended)

Installation Instructions

Follow the steps below to set up your environment:

  1. Clone this repository:
    git clone https://github.com/Luca96/carla-driving-rl-agent.git
  2. Download CARLA 0.9.9 from the releases page of its GitHub repository. The archive contains precompiled binaries ready for use. For more information, refer to the CARLA quickstart guide.
  3. Install CARLA Python bindings:

    Open your terminal and navigate to the CARLA directory based on your OS:

    • Windows:
      cd your-path-to-carla/CARLA_0.9.9.4/WindowsNoEditor/PythonAPI/carla/dist
    • Linux:
      cd your-path-to-carla/CARLA_0.9.9.4/PythonAPI/carla/dist

    Next, extract carla-0.9.9-py3.7-XXX-amd64.egg (where XXX is appropriate for your OS) and, inside the extracted folder, create setup.py with the following content:

    
    from distutils.core import setup
    
    setup(
        name='carla',
        version='0.9.9',
        py_modules=['carla']
    )
            

    Install via pip:

    pip install -e your-path-to-carla/CARLA_0.9.9.4/PythonAPI/carla/dist/carla-0.9.9-py3.7-XXX-amd64
  4. Start CARLA:

    Before running the repository’s code, be sure to launch CARLA:

    • Windows: your-path-to/CARLA_0.9.9.4/WindowsNoEditor/CarlaUE4.exe
    • Linux: your-path-to/CARLA_0.9.9.4/CarlaUE4.sh (You can use flags like -windowed -ResX=32 -ResY=32 --quality-level=Low for reduced resource consumption)

Examples of Using the Agent

Now, let’s see how to interact with the trained agent:

  • Show the agent’s network architecture:
    
    from core import CARLAgent, FakeCARLAEnvironment
    agent = CARLAgent(FakeCARLAEnvironment(), batch_size=1, log_mode=None)
    agent.summary()
            
  • Play with the CARLA environment (requires running CARLA):
    
    from core import CARLAEnv
    from rl import CARLAPlayWrapper 
    
    env = CARLAEnv(debug=True, window_size=(900, 245), image_shape=(90, 120, 3))
    CARLAPlayWrapper(env).play()
            
  • Reinforcement learning example:
    
    from core import learning
    learning.stage_s1(episodes=5, timesteps=256, gamma=0.999, lambda_=0.995, save_every='end',
                      stage_name='stage', seed=42, polyak=0.999, aug_intensity=0.0,
                      repeat_action=1, load_full=False).run2(epochs=10)
            

Understanding Agent Architecture

The agent operates like an ensemble of mini-experts. At each timestep, it receives various inputs about the environment—think of these inputs as different experts advising the agent on driving. Each input feeds into its own neural network branch that processes it separately. A Gated Recurrent Unit then combines the branches' insights into a single representation, which the action head maps to driving commands through a linear activation function.
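To make the fusion step concrete, here is a toy NumPy sketch (the sizes and random weights are arbitrary placeholders, not the project's actual architecture): several input branches each produce a feature vector, and a single GRU cell folds them, one at a time, into one hidden state that a linear action head could then consume.

```python
import numpy as np

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a standard GRU cell (biases omitted for brevity)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)                # update gate
    r = sigmoid(x @ Wr + h @ Ur)                # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)    # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_features, n_hidden = 8, 16

# Random matrices stand in for learned parameters.
W = [rng.normal(size=(n_features, n_hidden)) for _ in range(3)]
U = [rng.normal(size=(n_hidden, n_hidden)) for _ in range(3)]

# Three "expert" branches, e.g. image, vehicle state, and navigation features.
branch_outputs = [rng.normal(size=n_features) for _ in range(3)]

# Feed each branch's features into the GRU in turn; the final hidden
# state is the combined representation passed on to the action head.
h = np.zeros(n_hidden)
for feats in branch_outputs:
    h = gru_step(h, feats, W[0], U[0], W[1], U[1], W[2], U[2])
```

Because the update gate interpolates between the previous state and a tanh-bounded candidate, the fused representation stays bounded no matter how many branches are merged.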

Results from Training

Upon testing, the agent was evaluated on several metrics, including collision rate, similarity, and speed, across different scenarios and lighting conditions in CARLA, demonstrating how it adapts to each one.

Troubleshooting

If you encounter issues when implementing this project, consider the following:

  • Ensure all dependencies are installed correctly as per the installation instructions.
  • Verify that CARLA is running before executing the scripts.
  • Check if the paths are correctly set in your terminal.
  • Look for compatibility issues with your hardware specifications.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
