If you’re looking to create a challenge on EvalAI, you’ve come to the right place! This guide will walk you through the essential steps in a user-friendly manner. Let’s dive in!
Directory Structure Breakdown
First, it’s important to understand the directory structure you’ll be working with. Think of this structure as the blueprint for an architecture project; each folder and file plays a specific role in the overall construction of your challenge:
- README.md: Overview and instructions for the repository.
- annotations: Contains annotations for the dataset splits.
- challenge_data: Directory used to test the evaluation locally.
- challenge_data/challenge_1: Holds a copy of the evaluation script for the challenge.
- challenge_config.yaml: This is your configuration file defining the setup.
- evaluation_script: Holds the core evaluation methods.
- templates: Contains HTML templates for various challenge aspects.
- worker: Scripts to facilitate local testing of the evaluation script.
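Putting the pieces together, the layout typically looks like this (a sketch; exact contents may vary slightly between template versions):

```
.
├── README.md
├── annotations/                # dataset split annotations
├── challenge_config.yaml       # challenge configuration
├── challenge_data/
│   └── challenge_1/            # evaluation script copy for local testing
├── evaluation_script/          # core evaluation methods
├── templates/                  # HTML templates for challenge pages
└── worker/                     # scripts to run the evaluation locally
```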
Creating a Challenge Using GitHub
To kick off your challenge creation using GitHub, follow these steps:
- Use this repository as a template.
- Generate your GitHub personal access token and copy it.
- Add the access token to your new repository’s secrets under the name AUTH_TOKEN.
- Go to EvalAI and fetch the necessary details, such as your auth token and host team ID.
- Create a branch named “challenge” in your new repository.
- Update your github/host_config.json with the fetched tokens and team ID.
- Read the EvalAI challenge creation documentation to structure your challenge appropriately.
- Commit the changes and push the branch to see the results.
- If errors arise in your config, an issue will be opened automatically in your repository.
- Once approved, find your challenge under Hosted Challenges.
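For reference, the github/host_config.json mentioned above is a small JSON file holding the values you fetched from EvalAI. A minimal sketch, assuming the field names used by the EvalAI starter template (verify against your own copy of the file):

```json
{
    "token": "<your_evalai_auth_token>",
    "team_pk": "<your_host_team_id>",
    "evalai_host_url": "https://eval.ai"
}
```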
Creating a Challenge Using Configuration
If you prefer to create your challenge using a config approach, here’s how:
- Fork the repository.
- Read the EvalAI challenge creation documentation thoroughly.
- After making the necessary changes, run the command ./run.sh.
- Upload the generated challenge_config.zip to EvalAI.
- To update the challenge later, simply use the UI on EvalAI.
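Before running ./run.sh, it can save a round trip to sanity-check that your challenge_config.yaml still has the top-level keys you expect after editing. A minimal sketch in Python; the key names below are illustrative, so check the EvalAI documentation for the authoritative list:

```python
# Pre-flight check: confirm expected top-level keys exist in the config text.
# The key names here are illustrative examples, not the official schema.
EXPECTED_KEYS = {"title", "description", "evaluation_script", "start_date", "end_date"}

def missing_top_level_keys(config_text, expected=EXPECTED_KEYS):
    """Return expected keys that never appear as a top-level 'key:' line."""
    present = set()
    for line in config_text.splitlines():
        # Top-level YAML keys start at column 0 and contain a colon.
        if line and not line[0].isspace() and ":" in line:
            present.add(line.split(":", 1)[0].strip())
    return sorted(expected - present)

sample = """\
title: My Challenge
description: templates/description.html
evaluation_script: evaluation_script.zip
start_date: 2024-01-01 00:00:00
"""
print(missing_top_level_keys(sample))  # 'end_date' is absent in this sample
```

A check like this catches simple omissions early, before you zip and upload the config.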
Testing Your Evaluation Script Locally
Before going live, you can test your evaluation script locally. Here’s your action plan:
- Copy relevant files from the evaluation_script directory to challenge_data/challenge_1.
- Edit
worker/run.pyto align with your challenge phase and filenames. - Run the command
python -m worker.run. If successful, your script is good to go!
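As a reference point while wiring up worker/run.py, the evaluation script is expected to expose an evaluate function that EvalAI calls with the annotation file, the submission file, and the phase codename, and that returns a dict of results. A minimal sketch under that assumption; the split and metric names here ("dev_split", "Accuracy") are hypothetical placeholders:

```python
def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """Minimal evaluation stub: compare a submission against annotations.

    A real script would parse both files and compute metrics here;
    this stub just returns a fixed score to show the expected shape.
    """
    # "dev_split" and "Accuracy" are hypothetical names for illustration.
    metrics = {"Accuracy": 0.0}
    return {
        "result": [{"dev_split": metrics}],
    }

# Local smoke test, mirroring how a worker script might call the function:
output = evaluate("annotations/test_annotations.json", "submission.json", "dev")
print(output["result"])
```

Running this stub through your local worker setup first makes it easier to isolate file-path and phase-name problems before adding real metric logic.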
Troubleshooting Your Challenge Creation Process
If you encounter issues during the challenge creation, don’t hesitate to seek help! You can open issues on our GitHub Repository or contact the team via email. For ongoing insights, updates, or collaboration opportunities regarding AI development projects, stay in touch with fxis.ai.
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
