Welcome to your guide to GPTRouter, a single gateway for managing multiple large language models (LLMs) and image models. With GPTRouter, you can speed up response times and keep your service reliable even when an individual provider fails. Let’s dive into what’s possible!
Why GPTRouter?
After three years of working with large language models, the team at Writesonic identified, and set out to solve, several critical challenges. Here’s what they realized:
Solving Real-World Challenges:
- Model Independence: Avoid dependency on a single AI model. GPTRouter allows switching between models; if one goes down, your service remains active without a hitch.
- Beat the Latency: Eliminate slow responses. GPTRouter is built to minimize latency, so interactions with heavier models like GPT-4 stay smooth and quick.
- Diverse Model Integration: Incorporate multiple language and image models. Having fallback options makes your system resilient and versatile.
Key Features
- Universal API: Connect all your models effortlessly with just one API.
- Smart Fallbacks: Keep your service uninterrupted by automatically switching to an alternative model when one fails (see the fallback sketch after this list).
- Automatic Retries: GPTRouter retries failed requests intelligently, cutting down on manual interventions.
- Fast and Responsive: Experience lightning-fast interactions with reduced latency.
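To make the fallback and retry behavior concrete, here is a minimal sketch using the Python SDK introduced in the Quick Start below. It assumes that passing several ModelGenerationRequest objects, ranked by their order field, tells GPTRouter which model to try first and which to fall back to; the GPT-3.5 enum members shown are illustrative placeholders, not confirmed names.

```python
from gpt_router.client import GPTRouterClient
from gpt_router.models import ModelGenerationRequest, GenerationParams
from gpt_router.enums import ModelsEnum, ProvidersEnum

client = GPTRouterClient(base_url='your_base_url', api_key='your_api_key')

prompt_params = GenerationParams(
    messages=[{'role': 'user', 'content': 'Write me a short poem'}],
)

# Primary choice: tried first because it has the lowest order value.
primary = ModelGenerationRequest(
    model_name=ModelsEnum.CLAUDE_INSTANT_12,
    provider_name=ProvidersEnum.ANTHROPIC.value,
    order=1,
    prompt_params=prompt_params,
)

# Fallback choice: used if the primary provider errors out or times out.
fallback = ModelGenerationRequest(
    model_name=ModelsEnum.GPT_35_TURBO,            # assumed enum member, shown for illustration
    provider_name=ProvidersEnum.CHAT_OPENAI.value,  # assumed enum member, shown for illustration
    order=2,
    prompt_params=prompt_params,
)

# GPTRouter walks the list in order, retrying and switching models as needed.
response = client.generate(ordered_generation_requests=[primary, fallback])
print(response.choices[0].text)
```

Even if the exact enum names differ in your version of the SDK, the design idea is the same: you rank the requests once, and the router handles retries and model switching at request time.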
Supported Models
GPTRouter supports an array of models, ensuring you have what you need, when you need it.
| Supported Models | Completion | Streaming | Async Completion | Async Streaming |
| --- | --- | --- | --- | --- |
| OpenAI | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Azure OpenAI | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Anthropic | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Replicate | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Stable Diffusion | :white_check_mark: | :heavy_exclamation_mark: | :white_check_mark: | :heavy_exclamation_mark: |
| Dalle-3 | :white_check_mark: | :heavy_exclamation_mark: | :white_check_mark: | :heavy_exclamation_mark: |
| Cohere | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| More to come | :clock930: | :clock930: | :clock930: | :clock930: |

:heavy_exclamation_mark: Streaming is not applicable to image models. :clock930: Coming soon. :sparkles:
Quick Start
Setting up GPTRouter is as easy as pie! Here’s how to get things rolling:
Prerequisites
- Getting The Server Running: Follow the steps here to run the GPTRouter server locally, or use our Preview Deployment at this link. Obtain your API key by filling out the form here.
- Once the server is active, integrate GPTRouter into your application using our Python SDK or refer to the API documentation here. A small configuration sketch follows below.
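As that configuration sketch (using the Python SDK installed in the next step), you might keep the base URL and API key in environment variables rather than hard-coding them. The variable names below are hypothetical conventions of this example, not something the SDK reads automatically.

```python
import os

from gpt_router.client import GPTRouterClient

# Hypothetical variable names -- export them however your deployment prefers.
base_url = os.environ["GPT_ROUTER_BASE_URL"]  # your local server or the Preview Deployment URL
api_key = os.environ["GPT_ROUTER_API_KEY"]    # the key obtained from the form above

client = GPTRouterClient(base_url=base_url, api_key=api_key)
```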
Using the Python SDK
Install GPTRouter using pip or conda:
```bash
pip install gptrouter
# or, via conda-forge:
conda install gptrouter -c conda-forge
```
Here’s an example to help you get started:
```python
from gpt_router.client import GPTRouterClient
from gpt_router.models import ModelGenerationRequest, GenerationParams
from gpt_router.enums import ModelsEnum, ProvidersEnum

# Point the client at your GPTRouter server and authenticate with your API key.
client = GPTRouterClient(base_url='your_base_url', api_key='your_api_key')

# Standard chat-style message list.
messages = [{'role': 'user', 'content': 'Write me a short poem'}]
prompt_params = GenerationParams(messages=messages)

# Describe which model should serve the prompt; order=1 marks it as the first choice.
claude2_request = ModelGenerationRequest(
    model_name=ModelsEnum.CLAUDE_INSTANT_12,
    provider_name=ProvidersEnum.ANTHROPIC.value,
    order=1,
    prompt_params=prompt_params,
)

# Send the request and print the generated text.
response = client.generate(ordered_generation_requests=[claude2_request])
print(response.choices[0].text)
```
Discover more about streaming and various examples here.
Troubleshooting Ideas & Instructions
Encountering any hiccups? Here are some troubleshooting steps, plus a quick sanity-check sketch after the list:
- Ensure your API key is valid and you’re using the correct base URL.
- Test your internet connection to eliminate connectivity issues.
- Check the GPTRouter service status to confirm it’s operational.
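As a quick way to exercise all three checks at once, you can wrap a minimal request in a try/except block and inspect the error. This is an illustrative sketch reusing the quick-start request from above; the exact exception types raised by the SDK may differ, so a broad catch is used here.

```python
from gpt_router.client import GPTRouterClient
from gpt_router.models import ModelGenerationRequest, GenerationParams
from gpt_router.enums import ModelsEnum, ProvidersEnum

client = GPTRouterClient(base_url='your_base_url', api_key='your_api_key')

ping_request = ModelGenerationRequest(
    model_name=ModelsEnum.CLAUDE_INSTANT_12,
    provider_name=ProvidersEnum.ANTHROPIC.value,
    order=1,
    prompt_params=GenerationParams(messages=[{'role': 'user', 'content': 'ping'}]),
)

try:
    response = client.generate(ordered_generation_requests=[ping_request])
    print("GPTRouter reachable:", response.choices[0].text)
except Exception as exc:  # broad catch for illustration; narrow it in real code
    # Authentication errors usually point to the API key; connection errors point to
    # the base URL, your network, or the GPTRouter service itself being down.
    print("GPTRouter check failed:", exc)
```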
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
On the Horizon
We’re continually enhancing GPTRouter; upcoming integrations with LangChain and LlamaIndex will broaden your options even further.
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.