Welcome to the world of AICI, a cutting-edge tool designed to help you build and manage Controllers that shape the output of Large Language Models (LLMs) in real time. AICI allows you to create efficient, flexible, and secure controllers that enhance the capabilities of language generation technologies. In this article, we’ll walk you through how to get started with AICI, from setting up your development environment to controlling AI output like a maestro conducting a symphony.
QuickStart: Example Walkthrough
In this quickstart guide, you’ll learn to:
- Set up the rLLM Server and AICI Runtime.
- Build and deploy a Controller.
- Utilize AICI to custom-tailor LLM output with specific rules.
Step 1: Development Environment Setup
To set up AICI, you’ll need to prepare your development environment. Follow these steps:
System Requirements
Make sure you have:
- Rust development environment
- Python 3.11 or later for creating controllers
Installation
Depending on your operating system (Windows, macOS, or Linux), you’ll need to install specific tools:
- Windows: Use WSL2 or set up the included devcontainer.
- macOS: Install the Xcode command line tools.
- Linux: Install git, cmake, ccache, and others using your package manager.
Once you have the necessary tools, install Rust and then add the wasm32-wasi Rust target:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup target add wasm32-wasi
Step 2: Build and Start rLLM Server and AICI Runtime
To build and start the rLLM server, execute the following commands:
cd rllm/rllm-llamacpp
./server.sh phi2
For a status check, query the server's HTTP interface on localhost and confirm that your desired model is loaded.
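The status check above can be scripted. The sketch below assumes the rLLM server exposes an OpenAI-style models endpoint at `http://127.0.0.1:4242/v1/models`; both the port and the response shape are assumptions here, so adjust them to match your running server.

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

def loaded_models(base_url="http://127.0.0.1:4242"):
    """Return the model ids reported by the server, or [] if it is unreachable.

    Assumes an OpenAI-style /v1/models endpoint; adjust base_url/path as needed.
    """
    try:
        with urlopen(f"{base_url}/v1/models", timeout=2) as resp:
            data = json.load(resp)
    except (URLError, OSError):
        return []
    return [m.get("id") for m in data.get("data", [])]
```

If `loaded_models()` returns an empty list, the server is either not running or listening on a different port.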
Step 3: Control AI Output using AICI Controllers
AICI allows you to write Controllers that guide how LLMs generate text. Think of a Controller as a traffic director that meticulously controls the flow of information, ensuring everything is directed smoothly and precisely.
Let’s use an example Python script, list-of-five.py, to control the AI output:
import pyaici.server as aici

async def main():
    # Note the prompt doesn't mention how many items to produce, or how to format them.
    prompt = "What are the most popular types of vehicles?\n"
    await aici.FixedTokens(prompt)
    # Remember where generation starts so we can capture the output later.
    marker = aici.Label()
    for i in range(1, 6):
        # Force the list number, then let the model generate the item.
        await aici.FixedTokens(f"{i}.")
        await aici.gen_text(stop_at="\n")
    await aici.FixedTokens("\n")
    # Store everything generated since the marker.
    aici.set_var("result", marker.text_since())

aici.start(main())
In the script, the structure comes from the controller, not the prompt: the code forces each list number and lets the model fill in exactly five vehicle types. Instead of crafting a convoluted request, we enforce the format through code!
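To build intuition for this interleaving, here is a plain-Python mock of the control flow. It does not use the real pyaici API; the `fixed_tokens`/`gen_text` stand-ins and the canned completions are purely illustrative.

```python
# A plain-Python mock (not the real pyaici API) of how the controller
# interleaves fixed text with model-generated text.
generated = []  # accumulates everything appended to the output

def fixed_tokens(text):
    # Stands in for aici.FixedTokens: force this exact text into the output.
    generated.append(text)

def gen_text(stop_at, fake_completion):
    # Stands in for aici.gen_text: the "model" free-generates until stop_at.
    generated.append(fake_completion + stop_at)

fixed_tokens("What are the most popular types of vehicles?\n")
fake = ["Cars", "Trucks", "Motorcycles", "Buses", "Bicycles"]
for i in range(1, 6):
    fixed_tokens(f"{i}.")        # the controller writes the list number
    gen_text("\n", fake[i - 1])  # the model fills in the item

result = "".join(generated)
print(result)
```

The output is a well-formed numbered list even though the prompt never asked for one: the controller guarantees the skeleton and the model only fills in the blanks.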
Troubleshooting Tips
Should you encounter issues during your setup or execution, consider the following tips:
- Ensure all dependencies are installed correctly.
- Check if the correct model is running via the HTTP interface.
- Observe your firewall settings if you’re having network issues.
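For network issues, a quick first check is whether anything is listening on the server's port at all. The sketch below assumes the default port is 4242; change it if your server is configured differently.

```python
import socket

def port_open(host="127.0.0.1", port=4242, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `port_open()` returns False while the server process is running, check the port it actually bound to and your firewall rules.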
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Comprehensive Guide: Exploring Further
Dive deeper into the capabilities of AICI with resources available in the repository. The flexibility of AICI supports extensive use cases in controlling LLM outputs and enhancing integration efficiency. Explore the reference engines or develop a new controller to tailor the LLM behavior to your desired outputs.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
Conclusion
AICI empowers developers to exert precise control over AI-generated content. By following this guide, you’re equipped to explore the endless possibilities of AI and contribute to creating a refined and responsive language model environment. Happy coding!