Ever wanted your AI to evolve and adapt, much like nature? With Evolution Strategies (ES), you can optimize your models using principles derived from evolutionary biology. This guide will walk you through the steps to implement Evolution Strategy using Evostra, complete with troubleshooting advice!
Installation of Evostra
Evostra is compatible with both Python 2 and Python 3. You have several options for installation:

- Install from source:
  `$ python setup.py install`
- Install the latest version from GitHub using pip:
  `$ pip install git+https://github.com/alirezamika/evostra.git`
- Install from PyPI:
  `$ pip install evostra`

(You may need to use `python3` and `pip3` for Python 3.)
Sample Usages
Experiment with Evostra by using examples such as:
- An AI agent learning to play Flappy Bird: Flappy Bird ES
- An AI agent learning to walk: Bipedal ES
How to Use Evostra
To effectively utilize the EvolutionStrategy module, you need to prepare input weights that are a list of arrays (one for each layer of your neural network). You can use any framework to build your model and pass its weights to ES. In this case, we will use Evostra’s built-in model, FeedForwardNetwork.
Building the FeedForward Network
```python
import numpy as np
from evostra import EvolutionStrategy
from evostra.models import FeedForwardNetwork

# A feed-forward neural network with an input size of 5,
# two hidden layers of size 4, and an output size of 3
model = FeedForwardNetwork(layer_sizes=[5, 4, 4, 3])
```
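Under the hood, the weights Evostra works with are simply a list of NumPy arrays, one matrix per layer. The sketch below (a hypothetical illustration, not Evostra's actual implementation; the `tanh` activation is an assumption) shows what such a list looks like for the `[5, 4, 4, 3]` architecture above and how a forward pass consumes it:

```python
import numpy as np

# Hypothetical sketch: weights as a list of arrays, one per layer,
# matching the [5, 4, 4, 3] architecture above.
layer_sizes = [5, 4, 4, 3]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict(inp, weights):
    out = np.asarray(inp, dtype=float)
    for w in weights[:-1]:
        out = np.tanh(out @ w)   # hidden layers (tanh is an assumption here)
    return out @ weights[-1]     # linear output layer

print(predict([1, 2, 3, 4, 5], weights).shape)  # (3,)
```

Any framework that can export its parameters in this list-of-arrays shape can be plugged into Evostra the same way.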
Now, let’s define our get_reward function, which determines how well our model is performing:
```python
solution = np.array([0.1, -0.4, 0.5])
inp = np.asarray([1, 2, 3, 4, 5])

def get_reward(weights):
    global solution, model, inp
    model.set_weights(weights)
    prediction = model.predict(inp)
    # Our best reward is zero
    reward = -np.sum(np.square(solution - prediction))
    return reward
```
Running the Evolution Strategy
Next, you will create the Evolution Strategy object and execute it for a specified number of iterations:
```python
# If your task is computationally expensive, use num_threads=-1 to utilize all available CPU cores;
# here we use a single thread since this task isn't computationally intensive.
es = EvolutionStrategy(model.get_weights(), get_reward, population_size=20,
                       sigma=0.1, learning_rate=0.03, decay=0.995, num_threads=1)
es.run(1000, print_step=100)
```
While the run executes, the reward is printed every 100 iterations so you can watch performance improve. Once it finishes, you can retrieve the optimized weights with `es.get_weights()` and load them back into your model with `model.set_weights()`. This loop iteratively fine-tunes your model's weights to maximize the reward.
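The update Evostra performs on each iteration can be sketched in plain NumPy. This is a simplified, single-vector version of the natural evolution strategies idea (Evostra itself handles lists of per-layer arrays, multithreading, and more), using the same `sigma`, `learning_rate`, `decay`, and `population_size` values as above:

```python
import numpy as np

rng = np.random.default_rng(0)
solution = np.array([0.1, -0.4, 0.5])
w = np.zeros(3)                       # a single parameter vector, for simplicity
sigma, lr, pop = 0.1, 0.03, 20
decay = 0.995

def reward(w):
    # Best possible reward is zero, as in the get_reward above
    return -np.sum(np.square(solution - w))

for step in range(300):
    # Sample a population of Gaussian perturbations around the current weights
    noise = rng.standard_normal((pop, w.size))
    rewards = np.array([reward(w + sigma * n) for n in noise])
    # Normalize rewards, then step in the reward-weighted direction of the noise
    norm = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    w = w + lr / (pop * sigma) * (noise.T @ norm)
    lr *= decay                       # decay the learning rate each iteration

print(reward(w))  # close to zero: w has converged near `solution`
```

Each generation, perturbations that scored above average pull the weights toward themselves, which is the "survival of the fittest" mechanic described in the analogy below.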
Understanding the Process Through Analogy
Think of Evolution Strategies as a game of survival. Imagine a vibrant jungle full of creatures (your AI models) that must adapt to survive challenges (the tasks you set). At each iteration, some creatures are better suited to their environment (earn better rewards). The strongest creatures reproduce (new weights are generated based on the best performers), allowing their offspring to inherit advantageous traits (optimized weights). Over many generations, the lineage grows steadily stronger and better adapted to its environment, much like how your model improves over time!
Troubleshooting
If you run into issues during installation or execution, here are some tips:
- Ensure that your Python and pip versions are compatible; consider using `python3` and `pip3` if needed.
- Check the output logs for errors related to dependencies, and ensure required libraries are installed.
- If the model does not perform as expected, try adjusting parameters like `population_size` or `learning_rate` for better results.
- For deeper insights and collaborative opportunities in AI development projects, stay connected with fxis.ai.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

