AIaaS (AI as a Service) for everyone. Create AI projects and consume them using a simple REST API.
Demo:
Demo Link
Username: codedemocode
Password: codedemocode
Features
- Projects: There are multiple types of agents (projects), each with its own features: rag, ragsql, inference, vision, router, agent.
- Users: Users authenticate with basic auth and are authorized per project. Each user may have access to multiple projects.
- LLMs: Supports any public LLM supported by LlamaIndex. This includes any local LLM supported by Ollama, LiteLLM, etc.
- VRAM: Automatic VRAM management. RestAI will manage the VRAM usage, automatically loading and unloading models as needed and requested.
- API: The API is a first-class citizen of RestAI. All endpoints are documented using Swagger.
- Frontend: There is a frontend available at restai-frontend.
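Since every endpoint is consumed over REST with basic auth, a request can be assembled with nothing but the Python standard library. This is a minimal sketch using the demo credentials above; the base URL, port, and the `/projects` path are assumptions for illustration, so check the Swagger docs for the real routes.

```python
import base64
import urllib.request

# Assumed base URL; adjust host/port to your deployment.
BASE_URL = "http://localhost:9000"

# Basic-auth header built from the demo credentials.
credentials = base64.b64encode(b"codedemocode:codedemocode").decode("ascii")

# "/projects" is an illustrative path, not a confirmed route.
request = urllib.request.Request(
    f"{BASE_URL}/projects",
    headers={
        "Authorization": f"Basic {credentials}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(request) would perform the call;
# here we only show how the authenticated request is assembled.
print(request.get_header("Authorization"))
```

The same header works for any endpoint listed in the Swagger documentation.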
Project Types
RAG
- Embeddings: You may use any embeddings model supported by LlamaIndex. Check embeddings definition.
- Vectorstore: There are two vectorstores supported: Chroma and Redis.
- Retrieval: Features an embeddings search and score evaluator, allowing you to evaluate the quality of your embeddings and simulate the RAG process before the LLM. Reranking is also supported, using ColBERT and LLM based techniques.
- Loaders: You may use any loader supported by LlamaIndex.
- Sandboxed mode: RAG agents (projects) have a sandboxed mode, providing a locked default answer when embeddings for the given question are unavailable. This is particularly useful for chatbots.
- Evaluation: You may evaluate your RAG agent using deepeval with the eval property in the RAG endpoint.
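A RAG question is posted as a JSON body; the sketch below shows what such a payload might look like, including the `eval` property mentioned above. The field names other than `eval` are illustrative assumptions, not the exact RestAI schema, so consult the Swagger docs before relying on them.

```python
import json

# Hypothetical RAG request payload; "question" is an assumed field name.
payload = {
    "question": "What is our refund policy?",
    "eval": True,  # ask deepeval to score the answer (see Evaluation above)
}

# The body is sent as UTF-8 encoded JSON.
body = json.dumps(payload).encode("utf-8")
print(json.loads(body)["question"])
```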
RAGSQL
- Connection: Supply a MySQL or PostgreSQL connection string, and it will automatically crawl the DB schema, using table and column names to translate questions into SQL for responses.
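Connection strings for MySQL and PostgreSQL commonly follow the SQLAlchemy URL format; whether RestAI expects exactly this format is an assumption, but the sketch below shows the typical shape with placeholder credentials.

```python
# SQLAlchemy-style connection strings (all values are placeholders).
mysql_dsn = "mysql+pymysql://user:password@db.example.com:3306/sales"
postgres_dsn = "postgresql+psycopg2://user:password@db.example.com:5432/sales"

# A RAGSQL project receives one of these and crawls the schema;
# the exact API field name is documented in Swagger.
print(postgres_dsn.split("://")[0])
```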
Agent
- ReAct Agents: Specify which tools to use in the project, enabling the agent to figure out how to achieve the objective.
- Tools: Supply all the tool names you want the agent to use in this project (separated by commas).
- New tools can be easily added by creating a new tool in the app/llms/tools folder, which will be automatically picked up by RestAI.
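As a rough sketch of what a new tool might look like: ReAct-style agents (as in LlamaIndex) typically wrap a plain function whose docstring serves as the tool description. The exact base class or registration RestAI expects from files in the tools folder is an assumption here, so treat this as the general shape only.

```python
# Hypothetical tool function that could live in the tools folder.
# The agent reads the docstring when deciding which tool to call.
import datetime


def current_year(text: str) -> str:
    """Return the current calendar year as a string."""
    return str(datetime.date.today().year)


print(current_year(""))
```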
Inference
Vision
- text2img: RestAI supports local Stable Diffusion and Dall-E, featuring prompt boosting to enhance both details and quality.
- img2text: RestAI supports LLaVA and BakLLaVA by default.
- img2img: RestAI supports InstantID.

Router
The router directs a message to the most suitable project. This is especially useful when you have multiple projects and want each inquiry routed to the appropriate one.
LLMs
You may use any LLM supported by Ollama and/or LlamaIndex.
Installation
RestAI uses Poetry for dependency management. If you don't have it, install it by running:
pip install poetry
Development
For development, run:
make install
make dev (starts RestAI in development mode)
Production
make install
make start
Docker
For Docker setup, edit the .env file accordingly:
docker compose --env-file .env up --build
API
All API endpoints are documented at: Endpoints
For Swagger/OpenAPI documentation: Swagger
Frontend
The source code is available at RestAI Frontend. You can install the frontend automatically with:
make install
Tests
Tests are implemented using pytest. You can run them with:
make test
Troubleshooting
If you encounter issues during installation or while using RestAI, here are some common troubleshooting ideas:
- Ensure you have the correct version of Python installed.
- Make sure all dependencies are correctly installed with Poetry. Try running poetry install again.
- If using Docker, confirm that your Docker daemon is running and your .env file configurations are correct.

