How to Set Up Deep Reinforcement Learning for Grid Control with RLGC

Jul 12, 2022 | Data Science

Welcome to the Reinforcement Learning for Grid Control (RLGC) project, where the realms of power systems and deep reinforcement learning converge. In this guide, we’ll walk you through the setup process, ensuring you can leverage the InterPSS simulation platform for control and decision-making problems in power systems.

Environment Setup

Before diving into training your models, you need to prepare your environment. Here’s how to get started:

  • Ensure you have Python 3.5 or above and Java 8 installed. A Unix-based OS is recommended for optimal performance.
  • Using Anaconda is advisable for creating your virtual environment.
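Before going further, it can help to confirm these prerequisites from within Python. The sketch below is a minimal check, assuming only the requirements stated above; note it merely verifies that a `java` executable is on your PATH, not that it is specifically Java 8:

```python
import shutil
import sys

# Verify the interpreter meets the minimum version (Python 3.5+)
assert sys.version_info >= (3, 5), "RLGC requires Python 3.5 or above"

# Check whether a Java runtime is available on PATH (Java 8 is needed
# to run the InterPSS-based Java simulation server)
java_path = shutil.which("java")
if java_path is None:
    print("Warning: no 'java' executable found on PATH")
else:
    print("Java runtime found at:", java_path)
```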

Cloning the Project

First, let’s clone the RLGC repository:

git clone https://github.com/RLGC-Project/RLGC.git

Creating Your Virtual Environment

You have two options for setting up your development environment:

  • If you’d like to use our provided environment, navigate to the RLGC folder and run:

    cd RLGC
    conda env create -f environment.yml

  • Alternatively, you can create a custom environment and install the dependencies yourself (note that conda env create expects an environment file; to create a fresh named environment, use conda create):

    cd RLGC
    conda create --name your-env-name

Troubleshooting Installation Issues

If you encounter issues installing OpenAI gym, you may need some additional system packages. On an Ubuntu machine, for example:

sudo apt-get update
sudo apt-get install cmake
sudo apt-get install zlib1g-dev

After creating your environment (your-env-name), you can activate it as shown below:

source activate your-env-name

(On newer versions of conda, use conda activate your-env-name instead.) And when you need to deactivate it:

source deactivate

Training Your Model

With the virtual environment ready, let’s proceed to training:

  • Make sure that you are using RLGCJavaServer version 0.80 or newer.
  • Activate your virtual environment:

    source activate your-env-name

  • Change into the example directory and run the training script:

    cd RLGC/examples/IEEE39_load_shedding
    python trainIEEE39LoadSheddingAgent_discrete_action.py

  • Training logs are printed to the screen as the run progresses, so you can monitor training directly.
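Under the hood, the training script drives the environment through the standard OpenAI gym reset/step loop. The sketch below illustrates that loop with a trivial stand-in environment; the stub class and its dynamics are invented purely for illustration, whereas the real PowerDynSimEnv is backed by the InterPSS dynamic simulation:

```python
import random

class StubGridEnv:
    """Toy stand-in with the same reset/step interface as a gym environment."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # initial observation (e.g. bus voltages in a real grid env)

    def step(self, action):
        # In the real environment, the action would shed load and the
        # simulator would advance the grid dynamics one step.
        self.t += 1
        obs = [random.random()]
        reward = -abs(obs[0] - 1.0)  # penalize voltage deviation (illustrative)
        done = self.t >= self.horizon
        return obs, reward, done, {}

env = StubGridEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = 0  # a trained agent would choose this based on obs
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```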

Check Training Results and Test Your Model

The project includes two Jupyter notebooks tailored for assessing training results and testing the trained RL model.
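Raw episode rewards are noisy, so a common way to read a training curve is to smooth it with a moving average before judging whether learning has converged. A small stdlib-only sketch of that post-processing (the reward values below are made up for illustration):

```python
def moving_average(values, window=3):
    """Trailing moving average used to smooth a noisy reward curve."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)  # shrink the window at the start
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

episode_rewards = [-10.0, -8.0, -9.0, -5.0, -4.0, -2.0]  # illustrative values
print(moving_average(episode_rewards))
```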

Customize Your Grid Environment

If you’re keen on customizing or developing a new grid environment for RL training, here’s how to do it:

In the trainIEEE39LoadSheddingAgent_discrete_action.py file, you will find the code that specifies the grid cases and configuration files:

# power flow (RAW) and dynamic model (DYR) files for the IEEE 39-bus system
case_files_array = []
case_files_array.append(repo_path + "testData/IEEE39bus_multiloads_xfmr4_smallX_v30.raw")
case_files_array.append(repo_path + "testData/IEEE39bus_3AC.dyr")
# configuration files for dynamic simulation and RL
dyn_config_file = repo_path + "testData/IEEE39_dyn_config.json"
rl_config_file = repo_path + "testData/IEEE39_RL_loadShedding_3motor_2levels.json"
# create the gym-compatible environment backed by the InterPSS Java server
env = PowerDynSimEnv(case_files_array, dyn_config_file, rl_config_file, jar_path, java_port)

This snippet shows how to specify the files needed for dynamic simulation and RL training. To build your own environment, point these variables at your custom case and configuration files; because PowerDynSimEnv follows the OpenAI gym interface, the resulting environment works directly with stable baselines algorithms.

Troubleshooting and Further Assistance

If you experience issues or observe bugs, feel free to open an issue in the repository. For specific queries, reach out to Qiuhua Huang: qiuhua DOT huang AT pnnl DOT gov.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
