How to Create a Challenge on EvalAI

Sep 10, 2023 | Educational

If you’re looking to create a challenge on EvalAI, you’ve come to the right place! This guide will walk you through the essential steps in a user-friendly manner. Let’s dive in!

Directory Structure Breakdown

First, it’s important to understand the directory structure you’ll be working with. Think of this structure as an architectural blueprint: each folder and file plays a specific role in the overall construction of your challenge:

  • README.md: The top-level documentation for your challenge repository.
  • annotations: Contains the annotation files for the dataset splits.
  • challenge_data: Holds per-challenge data; its challenge_1 subdirectory is where the evaluation script is copied for local testing.
  • challenge_config.yaml: This is your configuration file defining the setup.
  • evaluation_script: Holds the core evaluation methods.
  • templates: Contains HTML templates for various challenge aspects.
  • worker: Scripts to facilitate local testing of the evaluation script.
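To make the configuration file concrete, here is a minimal sketch of what a challenge_config.yaml can look like. The field values below are placeholders, and this is only a subset of the schema; consult the EvalAI challenge creation documentation for the full list of fields (leaderboard, phases, dataset splits, and so on):

```yaml
# Minimal challenge_config.yaml sketch -- values are placeholders
title: "My Example Challenge"
short_description: "A one-line summary of the challenge"
description: templates/description.html
evaluation_details: templates/evaluation_details.html
terms_and_conditions: templates/terms_and_conditions.html
submission_guidelines: templates/submission_guidelines.html
start_date: "2023-09-10 00:00:00"
end_date: "2024-09-10 23:59:59"
published: True
```

Note how the HTML files under templates are referenced by relative path, which is why that directory exists in the structure above.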

Creating a Challenge Using GitHub

To kick off your challenge creation using GitHub, follow these steps:

  1. Use this repository as a template.
  2. Generate your GitHub personal access token and copy it.
  3. Add the access token to your new repository’s secrets under the name AUTH_TOKEN.
  4. Go to EvalAI and fetch the necessary details, such as your auth token and host team ID.
  5. Create a branch named “challenge” in your forked repository.
  6. Update your github/host_config.json with the fetched tokens.
  7. Read the EvalAI challenge creation documentation to structure your challenge appropriately.
  8. Commit the changes and push the branch to see the results.
  9. If errors arise in your config, an issue will be opened automatically in your repository describing the problem.
  10. Once approved, find your challenge under Hosted Challenges.
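The file mentioned in step 6 lives at github/host_config.json in the starter template. As a rough sketch of its shape (the key names below are taken from the starter template, but may differ across template versions; the values are placeholders you fill in from your EvalAI profile):

```json
{
    "token": "<your_evalai_auth_token>",
    "team_pk": "<your_host_team_id>",
    "evalai_host_url": "https://eval.ai"
}
```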

Creating a Challenge Using Configuration

If you prefer to create your challenge using a config approach, here’s how:

  1. Fork the repository.
  2. Read the EvalAI challenge creation documentation thoroughly.
  3. After making necessary changes, run the command ./run.sh.
  4. Upload the challenge_config.zip to EvalAI.
  5. To update, simply use the UI on EvalAI.
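Under the hood, a packaging step like ./run.sh essentially bundles your configuration and supporting files into challenge_config.zip for upload. As an illustration only (this is not the actual script, and the file layout here is a hypothetical minimal example), the same packaging could be done in Python:

```python
import os
import tempfile
import zipfile

def package_challenge(source_dir: str, output_zip: str) -> str:
    """Bundle a challenge directory into a zip archive for upload to EvalAI."""
    with zipfile.ZipFile(output_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to the challenge root
                zf.write(path, os.path.relpath(path, source_dir))
    return output_zip

# Example: package a minimal (placeholder) challenge layout
challenge_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(challenge_dir, "annotations"))
open(os.path.join(challenge_dir, "challenge_config.yaml"), "w").close()
open(os.path.join(challenge_dir, "annotations", "test_annotations.json"), "w").close()

archive = package_challenge(
    challenge_dir, os.path.join(tempfile.mkdtemp(), "challenge_config.zip")
)
```

The resulting zip is what you upload in step 4; the archive is written outside the source directory so it doesn’t get zipped into itself.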

Testing Your Evaluation Script Locally

Before going live, you can test your evaluation script locally. Here’s your action plan:

  1. Copy relevant files from the evaluation_script directory to challenge_data/challenge_1.
  2. Edit worker/run.py to align with your challenge phase and filenames.
  3. Run the command python -m worker.run. If successful, your script is good to go!
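Concretely, the evaluation script is expected to expose an evaluate() function, which worker/run.py invokes with the annotation file, the submission file, and the phase codename. Here is a minimal sketch; the metric, split name, and JSON format below are placeholders I’ve chosen for illustration, not part of the starter template:

```python
import json
import os
import tempfile

def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """Minimal EvalAI-style evaluation: score a submission against annotations."""
    with open(test_annotation_file) as f:
        annotations = json.load(f)
    with open(user_submission_file) as f:
        submission = json.load(f)

    # Placeholder metric: fraction of keys answered identically
    correct = sum(1 for k, v in annotations.items() if submission.get(k) == v)
    accuracy = correct / max(len(annotations), 1)

    # Return one entry per dataset split in the "result" list
    return {"result": [{"test_split": {"Accuracy": accuracy}}]}

# Quick local check, mimicking what worker/run.py does
work_dir = tempfile.mkdtemp()
ann_path = os.path.join(work_dir, "test_annotations.json")
sub_path = os.path.join(work_dir, "submission.json")
json.dump({"q1": "a", "q2": "b"}, open(ann_path, "w"))
json.dump({"q1": "a", "q2": "c"}, open(sub_path, "w"))

output = evaluate(ann_path, sub_path, "dev")
```

If a call like this returns cleanly with the scores you expect, your script is ready for the real worker.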

Troubleshooting Your Challenge Creation Process

If you encounter issues during challenge creation, don’t hesitate to seek help! You can open an issue on the GitHub repository or contact the team via email. For ongoing insights, updates, or collaboration opportunities regarding AI development projects, stay in touch with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

Stay Informed with the Newest F(x) Insights and Blogs

Tech News and Blog Highlights, Straight to Your Inbox