How to Get Started with JARVIS: Your Guide to Exploring Artificial General Intelligence

Jan 9, 2021 | Data Science

Are you ready to dive into the world of Artificial General Intelligence (AGI) with JARVIS? This innovative framework is designed to bring cutting-edge AI research to the community in a user-friendly manner. In this guide, we will walk through how to set it up, use its features, and troubleshoot common issues.

What’s New with JARVIS?

  • 2024.01.15: Introduction of EasyTool for simplified tool usage.
  • 2023.11.30: Launch of TaskBench for evaluating the task automation capabilities of LLMs.
  • 2023.07.28: Ongoing evaluation and project rebuilding; a new version is on the horizon.
  • 2023.04.06: Gradio demo and web API for server mode launched.
  • 2023.04.01: Improved version of the build code released.

Getting Started with JARVIS

Ready to harness the power of JARVIS? Follow these steps to jumpstart your experience.

System Requirements

Default (Recommended)

  • Operating System: Ubuntu 16.04 LTS
  • VRAM: 24GB
  • RAM: 12GB (minimal), 16GB (standard), 80GB (full)
  • Disk Space: 284GB plus additional requirements for specific models

Minimum (Lite)

  • Operating System: Ubuntu 16.04 LTS
  • Note: Does not require expert models to be downloaded locally

Step-by-Step Installation

To set up JARVIS, do the following:

  • Replace openai.key and huggingface.token in server/configs/config.default.yaml with your personal OpenAI API key and Hugging Face token.
  • Run the following commands in your terminal:
# set up the environment
cd server
conda create -n jarvis python=3.8
conda activate jarvis
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
# download the expert models
cd models
bash download.sh
cd ..
# start the model server and the chat server (each is a long-running process; run them in separate terminals)
python models_server.py --config configs/config.default.yaml
python awesome_chat.py --config configs/config.default.yaml --mode server

Now you can access JARVIS services via the Web API!
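
As a rough sketch of what a client call can look like, the snippet below posts a request to the chat server. The port (8004), the /hugginggpt endpoint, and the OpenAI-style messages payload are assumptions based on the default server configuration; check your config and the project README before relying on them.

# Minimal sketch of calling the JARVIS web API once both servers are running.
# The URL, endpoint, and payload shape are assumptions; verify them against your setup.
import requests

payload = {
    "messages": [
        {"role": "user", "content": "Please describe the image at examples/a.jpg"}
    ]
}

response = requests.post("http://localhost:8004/hugginggpt", json=payload, timeout=300)
response.raise_for_status()
print(response.json())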

Understanding Its Workflow: An Analogy

Think of JARVIS as the conductor of a grand orchestra. The conductor (an LLM, or Large Language Model) directs various musicians (expert models), each of whom plays a specific instrument (handles a specific kind of task). Here’s how it works, with a simplified code sketch after the list:

  • Task Planning: The conductor analyzes a musical score (user requests) to understand the overall piece and decide how to break it down into manageable sections (tasks).
  • Model Selection: The conductor chooses which musicians (expert models) will play, based on their skills and the demands of the score.
  • Task Execution: Each musician performs their part, playing the notes they’ve been assigned (executing selected tasks).
  • Response Generation: Finally, the conductor blends all the sounds together to produce a harmonious performance (integrating results and generating responses).
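
To make these four stages concrete, here is a deliberately simplified Python sketch of the pipeline. Every helper in it (plan_tasks, select_model, run_model, generate_response) is a hypothetical stand-in for the LLM prompts and expert-model calls JARVIS performs internally, not the actual JARVIS API.

# Illustrative sketch of the four-stage JARVIS/HuggingGPT workflow.
# All helpers below are hypothetical placeholders, not real JARVIS functions.
from typing import Dict, List

def plan_tasks(user_request: str) -> List[Dict]:
    # Stage 1 (task planning): the LLM breaks the request into subtasks.
    return [{"task": "image-to-text", "args": {"image": "example.jpg"}}]

def select_model(task: Dict) -> str:
    # Stage 2 (model selection): pick an expert model suited to the subtask.
    return "hypothetical/image-captioning-model"

def run_model(model: str, task: Dict) -> str:
    # Stage 3 (task execution): the chosen expert model produces a result.
    return f"[{model}] result for task '{task['task']}'"

def generate_response(user_request: str, results: List[str]) -> str:
    # Stage 4 (response generation): the LLM integrates all results into one answer.
    return f"Request: {user_request}\n" + "\n".join(results)

def handle_request(user_request: str) -> str:
    tasks = plan_tasks(user_request)
    assignments = [(task, select_model(task)) for task in tasks]
    results = [run_model(model, task) for task, model in assignments]
    return generate_response(user_request, results)

print(handle_request("Describe example.jpg"))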

Troubleshooting Common Issues

If you encounter any bumps along the way, here are some troubleshooting tips:

  • Check if all necessary dependencies are correctly installed.
  • Make sure your API keys are valid and configured properly.
  • If a model fails to load, verify if your hardware meets the recommended requirements.
  • Restart the server and try running JARVIS again; a quick connectivity check is sketched after this list.
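
If you are not sure whether the servers are actually up, a quick reachability check like the one below can help. The host and port are assumptions based on the default configuration; adjust them to match your configs/config.default.yaml.

# Quick check that the JARVIS chat server is reachable.
# Host and port are assumptions; adjust them to match your config.
import socket

def is_listening(host: str = "localhost", port: int = 8004) -> bool:
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

print("server reachable" if is_listening() else "server not reachable")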

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

Happy exploring!
