Getting Started with AI00 RWKV Server

Welcome to the world of AI00 RWKV Server, a powerful inference API server for the RWKV language model! In this article, we will guide you through the setup and usage of this amazing tool, ensuring you embark on your AI journey effortlessly.

What is AI00 RWKV Server?

The AI00 RWKV Server is an inference API server designed for the RWKV language model. It utilizes the web-rwkv inference engine, and the standout feature is its support for Vulkan, enabling it to run efficiently on various graphics processing units (GPUs), including AMD cards and integrated graphics – no Nvidia required!

Features of AI00 RWKV Server

  • High performance and accuracy based on the RWKV model.
  • Supports Vulkan inference acceleration.
  • Compact and ready-to-use without the need for bulky CUDA or Pytorch installations.
  • Compatible with OpenAI’s ChatGPT API interface.

Installation and Usage

Follow the steps outlined below to get the AI00 RWKV Server up and running smoothly on your machine.

Step 1: Download Pre-built Executables

  1. Download the latest version from the Release section.
  2. After downloading the model, place it in the assets/models path. For example: assets/models/RWKV-x060-World-3B-v2-20240228-ctx4096.st.
  3. Modify the assets/Config.toml file for model configurations such as model path and quantization layers.
  4. Run the server from the command line:

     $ ./ai00_rwkv_server

  5. Open your web browser and visit the WebUI at http://localhost:65530 (or https://localhost:65530 if TLS is enabled).
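A quick way to confirm the server came up is to probe the WebUI address from Python. This is a small sketch using only the standard library; the helper names (`webui_url`, `server_is_up`) are illustrative, not part of the project:

```python
import urllib.request
import urllib.error

def webui_url(port: int = 65530, tls: bool = False) -> str:
    """Build the WebUI address; https applies only when TLS is enabled."""
    scheme = "https" if tls else "http"
    return f"{scheme}://localhost:{port}"

def server_is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if anything answers at `url` (any HTTP status counts as up)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server responded, just not with 2xx
    except (urllib.error.URLError, OSError):
        return False
```

For example, `server_is_up(webui_url())` should return True once `./ai00_rwkv_server` is running.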

Step 2: (Optional) Build from Source

  1. Install Rust from the official site.
  2. Clone the repository:

     $ git clone https://github.com/cgisky1980/ai00_rwkv_server.git
     $ cd ai00_rwkv_server

  3. Download the model and place it in the assets/models path as mentioned earlier.
  4. Compile the project:

     $ cargo build --release

  5. Run the compiled server:

     $ cargo run --release

  6. Finally, access the WebUI at http://localhost:65530.

Model Conversion

If your model is saved in the .pth format, you’ll need to convert it to .st format:

  1. Download the .pth model from HuggingFace.
  2. Run the Python script convert2ai00.py or convert_safetensors.py:

     $ python ./convert2ai00.py --input /path/to/model.pth --output /path/to/model.st

  3. If you are not using Python, find the executable called converter in the Release section and run:

     $ ./converter --input /path/to/model.pth --output /path/to/model.st

  4. Place the .st model in assets/models and update assets/Config.toml accordingly.
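The --input/--output pairing above always maps a .pth checkpoint to a .st file of the same name. If you script the conversion, a small hypothetical helper can derive the output path before invoking the converter:

```python
from pathlib import Path

def st_output_path(pth_file: str) -> Path:
    """Derive the .st output path for a .pth checkpoint.
    Hypothetical helper: pass the result as --output to the converter."""
    p = Path(pth_file)
    if p.suffix != ".pth":
        raise ValueError(f"expected a .pth checkpoint, got {p.name}")
    return p.with_suffix(".st")
```

For example, `st_output_path("assets/models/model.pth")` yields `assets/models/model.st`.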

Using AI00 RWKV Server APIs

Once set up, you can interact with the server through its APIs. The server listens on port 65530 and follows the OpenAI API specification; the API documentation is available at http://localhost:65530/api-docs.
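Because the server mirrors the OpenAI specification, a request body uses the familiar chat-completion fields. The sketch below builds such a payload and posts it with the standard library; the exact endpoint path (`/api/chat/completions`) is assumed from the OpenAI spec and the api_base used in the Python class below, so verify it against http://localhost:65530/api-docs:

```python
import json
import urllib.request

API_BASE = "http://127.0.0.1:65530/api"  # same base URL as the Python class below

def chat_request(messages, model="", max_tokens=4096,
                 top_p=0.6, temperature=1.0, stop=None):
    """Build an OpenAI-style chat completion payload for the server."""
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "temperature": temperature,
        "stop": stop or ["\x00", "\n\n"],
    }

def post_chat(payload, base=API_BASE, api_key="JUSTSECRET_KEY"):
    """POST the payload to the chat completions endpoint (path assumed)."""
    req = urllib.request.Request(
        f"{base}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the server running, `post_chat(chat_request([{"role": "user", "content": "how are you?"}]))` returns the parsed JSON response.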

Python Example for API Invocation

Here’s how to invoke the API in Python:

import openai

class Ai00:
    def __init__(self, model='', port=65530, api_key='JUSTSECRET_KEY'):
        openai.api_base = f'http://127.0.0.1:{port}/api'
        openai.api_key = api_key
        self.ctx = []
        self.params = {
            'system_name': 'System',
            'user_name': 'User',
            'assistant_name': 'Assistant',
            'model': model,
            'max_tokens': 4096,
            'top_p': 0.6,
            'temperature': 1,
            'presence_penalty': 0.3,
            'frequency_penalty': 0.3,
            'half_life': 400,
            'stop': ['\x00', '\n\n']
        }

    def set_params(self, **kwargs):
        self.params.update(kwargs)

    ...

ai00 = Ai00()
ai00.set_params(max_tokens=4096, top_p=0.55, temperature=2)  # further parameters elided
print(ai00.send_message("how are you?"))

In this example, you are creating an instance of the Ai00 class, setting the model parameters, and sending messages to the server.
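The `stop` parameter in the example tells the server to cut generation at a NUL byte or a blank line. If you ever need to apply the same cutoff client-side (for instance, on a streamed response), the logic is a simple sketch; the function name is illustrative:

```python
def truncate_at_stop(text: str, stop=("\x00", "\n\n")) -> str:
    """Cut a completion at the earliest stop sequence,
    mirroring the `stop` parameter from the example above."""
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)  # keep only text before the first stop hit
    return text[:cut]
```

For example, `truncate_at_stop("fine, thanks\n\nUser:")` returns `"fine, thanks"`.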

Keep Your Code Organized – The Analogy

Think of setting up the AI00 RWKV Server as arranging a bookshelf. You need to gather your books (the necessary files), organize them in the right sections (model paths), and once they’re ready, you can invite friends over to enjoy the collection (start using the server). If you were to sort your collection randomly, finding a specific book later would be a headache, just like a poorly configured server. By following steps methodically, your library of AI knowledge will be as inviting as a neatly arranged bookshelf!

Troubleshooting Tips

If you encounter issues during installation or while running the server, consider the following troubleshooting tips:

  • Ensure all dependencies are correctly installed and up to date.
  • Check if the model files are placed correctly in the specified paths.
  • Verify that your firewall settings allow access to port 65530.
  • If utilizing a specific GPU, ensure that Vulkan support is enabled.
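The firewall and port checks above can be automated with a short standard-library probe; the helper name is illustrative:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

While the server is running, `port_open("127.0.0.1", 65530)` should return True; a False result points at the firewall or at the server not having started.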

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

Ready to Dive Into AI?

With AI00 RWKV Server, you’re equipped to leverage the power of language models for various applications. Let’s pave the way for the future of AI together!
