Your fully proficient, AI-powered, local chatbot assistant

*Flowchart for everything-ai*
Quickstart
- Clone this repository

  ```bash
  git clone https://github.com/AstraBert/everything-ai.git
  cd everything-ai
  ```
- Set your `.env` file

  Modify the following variables in your `.env` file:

  - `VOLUME`: mounts your local file system into the Docker container.
  - `MODELS_PATH`: specifies where llama.cpp can find the GGUF models you downloaded.
  - `MODEL`: indicates which model to use (the `.gguf` file name, with extension).
  - `MAX_TOKENS`: tells llama.cpp the maximum number of tokens it can generate as output.

  Example of a `.env` file:

  ```bash
  VOLUME=c:/Users/User:User
  MODELS_PATH=c:/Users/User/.cache/llama.cpp
  MODEL=stories260K.gguf
  MAX_TOKENS=512
  ```
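Before moving on, it can help to sanity-check the `.env` values. The following is a hypothetical shell sketch, not part of everything-ai itself: it writes the example file to a temporary directory, loads it, and verifies that the model name looks like a GGUF file.

```bash
# Hypothetical sanity check for the example .env -- not part of everything-ai itself.
tmpdir=$(mktemp -d)
cat > "$tmpdir/.env" <<'EOF'
VOLUME=c:/Users/User:User
MODELS_PATH=c:/Users/User/.cache/llama.cpp
MODEL=stories260K.gguf
MAX_TOKENS=512
EOF

# Export every KEY=VALUE line so later commands can read the variables.
set -a
. "$tmpdir/.env"
set +a

echo "model file expected at: $MODELS_PATH/$MODEL"
case "$MODEL" in
  *.gguf) echo "MODEL has the expected .gguf extension" ;;
  *)      echo "MODEL should be a .gguf file name (with extension)" ;;
esac
```

In your own setup, point `MODELS_PATH` at the directory where you actually keep your GGUF models rather than the illustrative paths above.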
- Pull the necessary images

  ```bash
  docker pull astrabert/everything-ai:latest
  docker pull qdrant/qdrant:latest
  docker pull ghcr.io/ggerganov/llama.cpp:server
  ```
- Run the multi-container app

  ```bash
  docker compose up
  ```
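The repository ships its own docker-compose file, so you do not need to write one. Purely as an illustration of how the three pulled images and the `.env` variables fit together, a compose file might be sketched like this (the service names, port mappings, and llama.cpp server flags here are assumptions, not the project's actual configuration):

```yaml
# Illustrative sketch only; use the docker-compose file shipped with the repository.
services:
  everything-ai:
    image: astrabert/everything-ai:latest
    ports:
      - "8670:8670"   # task-selection interface
      - "7860:7860"   # assistant interface
    volumes:
      - ${VOLUME}     # local file system mounted into the container
  qdrant:
    image: qdrant/qdrant:latest
  llama-server:
    image: ghcr.io/ggerganov/llama.cpp:server
    volumes:
      - ${MODELS_PATH}:/models
    command: ["-m", "/models/${MODEL}", "-n", "${MAX_TOKENS}"]
```

Docker Compose reads the `.env` file in the project directory automatically, which is why the variables you set in step 2 can be referenced with `${...}` substitution.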
- Go to `localhost:8670` and choose your assistant

  You will see a task choice interface like the following:
Choose among various tasks such as:
- retrieval-text-generation: Uses a Qdrant backend to build a retrieval-friendly knowledge base.
- agnostic-text-generation: ChatGPT-like text generation.
- text-summarization: Summarize text and PDFs.
- image-generation: Generate images with Stable Diffusion.
- audio-classification: Classify audio files.
- video-generation: Generate videos from text prompts.

And many more tasks tailored to your needs.
- Start using your assistant

  Once everything is ready, navigate to `localhost:7860` to start using your assistant:
Understanding the Components
Imagine your AI chatbot as a highly organized library. The process begins when you clone the repository, which is like constructing the library building itself. Next, you set up your `.env` file, akin to categorizing the books: defining where each genre (or model) is located and how many books (tokens) can be borrowed at a time.
Pulling the necessary images is like stocking the shelves with books. Finally, running the multi-container app brings the library to life, allowing users (you!) to access the knowledge (AI responses) stored within, simply by navigating to the designated URLs (like finding the right aisle in the library).
Troubleshooting Ideas
If you encounter issues during the setup, consider the following steps:
- Ensure Docker is properly installed and running.
- Double-check the paths specified in your `.env` file for accuracy.
- Confirm that all necessary images have been pulled without errors.
- If your assistant doesn’t respond, make sure the services are still running; restart if necessary.
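A quick way to see whether each service is reachable is to probe the two ports this guide uses. The sketch below is a made-up helper, not part of the project, and it assumes `curl` is installed:

```bash
# Hypothetical helper: report whether a URL answers within 2 seconds.
check_url() {
  if curl -sf -o /dev/null --max-time 2 "$1"; then
    echo "up"
  else
    echo "down"
  fi
}

check_url "http://localhost:8670"   # task-selection interface
check_url "http://localhost:7860"   # assistant interface
```

If either probe prints `down` while `docker compose up` is running, inspect the container logs with `docker compose logs` before restarting.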
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.