The Functionary Medium V2.4 is an impressive language model that excels at interpreting and executing functions, or tools. It knows when to trigger functions, whether to run them in parallel or one after the other, and, importantly, it understands their outputs well. This guide shows you how to use the model effectively!
Key Features of Functionary Medium V2.4
- Intelligent Parallel Tool Use: Executes multiple functions efficiently.
- Grounded Outputs: Provides responses based on the results of function outputs.
- Contextual Function Use: Decides when to utilize functions or just provide chat responses.
- Open-source Alternative: A solid open-source option compared to GPT-4.
- Code Interpreter Support: Excellent support for evaluating and running code.
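To make the parallel tool use point concrete, here is a minimal sketch of dispatching several independent tool calls concurrently using Python's standard library. The tool names and return values here are made up for illustration; they are not part of the Functionary API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tool implementations standing in for functions the model
# might request in a single turn (e.g. weather and local time).
def get_weather(city):
    return f"{city}: 18C"

def get_time(city):
    return f"{city}: 14:05"

TOOLS = {"get_weather": get_weather, "get_time": get_time}

def dispatch_parallel(calls):
    """Run independent tool calls concurrently; calls are (name, arg) pairs."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[name], arg) for name, arg in calls]
        return [f.result() for f in futures]

results = dispatch_parallel([("get_weather", "Istanbul"), ("get_time", "Istanbul")])
```

When the model emits multiple tool calls in one message, running them concurrently like this keeps latency close to that of the slowest single call.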
Performance Metrics
This model has demonstrated state-of-the-art performance in Function Calling Accuracy using the SGD dataset. Here’s a quick comparison:
| Dataset | Model Name | Function Calling Accuracy (Name & Arguments) |
|:--------|:-----------|---------------------------------------------:|
| SGD | MeetKai-functionary-small-v2.4 | 83.07 |
| SGD | MeetKai-functionary-medium-v2.4 | **88.11** |
| SGD | OpenAI-gpt-3.5-turbo-1106 | 71.64 |
| SGD | OpenAI-gpt-4-1106-preview | 76.29 |
Understanding the Code with an Analogy
Imagine you are a chef in a restaurant kitchen, where every ingredient represents a function. The customers (input queries) request their favorite dishes (outputs). The Functionary Medium v2.4 acts like a head chef who decides when to use each ingredient based on the customer’s request. Just like a head chef doesn’t throw all ingredients into a dish at once, this model smartly chooses which function (ingredient) to use and when, whether utilizing them simultaneously for a banquet or one at a time for a fine dining experience.
Running the Model
You can start using the Functionary Medium V2.4 through an OpenAI-compatible vLLM server. Here’s a simple example of how the process works in Python:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="path/to/functionary/model",
    messages=[{"role": "user", "content": "What is the weather for Istanbul?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
)
```
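After a call like the one above, the model typically replies with one or more tool calls rather than plain text. Here is a sketch of completing the loop: parsing the tool calls, running a hypothetical local `get_current_weather` implementation, and building the `"tool"` role messages you would append before asking the model for its final, grounded answer. The response shape assumed here is the standard OpenAI tool-call format:

```python
import json

def run_tool_calls(tool_calls, registry):
    """Execute each tool call and build the "tool" role messages to send back.

    tool_calls: list of dicts shaped like OpenAI tool_call objects
    (id, function.name, function.arguments as a JSON string).
    registry: maps tool names to local Python callables.
    """
    messages = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        result = registry[name](**args)  # run the matching local function
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "name": name,
            "content": json.dumps(result),
        })
    return messages

# Hypothetical local implementation of the tool declared in the request.
def get_current_weather(location):
    return {"location": location, "temperature_c": 18, "condition": "sunny"}

# Simulated tool calls as the model might return them.
tool_calls = [{
    "id": "call_1",
    "function": {"name": "get_current_weather",
                 "arguments": json.dumps({"location": "Istanbul"})},
}]
follow_up = run_tool_calls(tool_calls, {"get_current_weather": get_current_weather})
```

In a live session you would extend the `messages` list with the assistant's tool-call message plus these `"tool"` messages, then call `client.chat.completions.create` again so the model can summarize the results for the user.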
Troubleshooting Tips
If you encounter any issues while using the Functionary Medium V2.4, consider the following:
- Ensure you have correctly set up the vLLM server and that the model path is accurate.
- Verify that the JSON structure for the functions you wish to use matches the required schema.
- Check your API key and server URL to confirm they are correct.
- To gain further insights or collaborate on AI projects, stay connected with fxis.ai.
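For the second troubleshooting point, a small sanity check can catch malformed tool definitions before they ever reach the server. This is a minimal sketch, not an exhaustive JSON Schema validator:

```python
def check_tool(tool):
    """Return a list of problems found in an OpenAI-style tool definition."""
    problems = []
    if tool.get("type") != "function":
        problems.append('top-level "type" must be "function"')
    fn = tool.get("function", {})
    for key in ("name", "parameters"):
        if key not in fn:
            problems.append(f'missing "function.{key}"')
    params = fn.get("parameters", {})
    if params.get("type") != "object":
        problems.append('"parameters.type" should be "object"')
    for req in params.get("required", []):
        if req not in params.get("properties", {}):
            problems.append(f'required field "{req}" not in properties')
    return problems

# The weather tool from the example earlier passes cleanly.
good = {"type": "function",
        "function": {"name": "get_current_weather",
                     "parameters": {"type": "object",
                                    "properties": {"location": {"type": "string"}},
                                    "required": ["location"]}}}
issues = check_tool(good)
```

Running this over every entry in your `tools` list before each request turns a cryptic server-side error into a readable local message.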
Conclusion
With its advanced function interpretation capabilities, the Functionary Medium V2.4 is an exciting addition to the space of AI development. Whether you’re handling simple queries or complex command structures, this model offers a comprehensive solution. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

