How to Get Started with the Functionary Model

Welcome to your friendly guide to the Functionary model, a language model designed to interpret and trigger function and plugin calls with precision. Think of it as a skilled assistant that knows when to fetch information and when to keep the conversation going without triggering unnecessary actions.

Understanding Functionary: An Analogy

Imagine a personal assistant who can multitask: making coffee, booking appointments, and checking the weather, all while keeping up a conversation with you. Functionary does the same with the functions you define. Just as your assistant decides which task to perform based on your needs, Functionary analyzes each request and determines when to call a function and when a plain chat reply will do.

Key Features at a Glance

  • Intelligent parallel tool use (see the sketch after this list)
  • Analyzes the outputs of function calls to produce grounded responses
  • Decides when to refrain from using tools and simply chat
  • Positioned as one of the strongest open-source alternatives to GPT-4 for function calling
  • Supports a code interpreter
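
To make "intelligent parallel tool use" concrete, here is a sketch of the shape such a turn takes in the OpenAI-compatible format used later in this guide. The ids and arguments are invented for illustration; this is not actual model output:

# Illustrative only: an assistant turn containing two parallel tool calls
# in OpenAI-compatible form. The ids and argument values are made up.
assistant_turn = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_current_weather",
                      "arguments": '{"location": "Istanbul"}'}},
        {"id": "call_2", "type": "function",
         "function": {"name": "get_current_weather",
                      "arguments": '{"location": "Singapore"}'}},
    ],
}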

How to Get Started

To begin using the Functionary model, follow these steps:

Step 1: Setting Up Your Environment

First, install the required libraries, then load the model and tokenizer from the Hugging Face Hub:

from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is required: Functionary ships custom code with the
# checkpoint, including the generate_tool_use helper used later in this guide.
tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v2.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v2.5", device_map="auto", trust_remote_code=True)
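
If you are starting from a clean environment, a quick sanity check like the one below can save you a failed download or a cryptic CUDA error. The package list is an assumption about a typical setup, not an official requirements file:

# Assumed typical setup (adjust to your environment):
#   pip install transformers accelerate torch
import torch

# The generation step later in this guide moves tensors to "cuda",
# so confirm a GPU is visible before loading a multi-gigabyte model.
print(torch.cuda.is_available())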

Step 2: Defining Your Functions

Next, describe the function you want to expose to the model, much like writing down a recipe for a dish you want to cook. Functionary expects tool definitions in the OpenAI function-calling JSON schema:

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                }
            },
            "required": ["location"]
        }
    }
}]
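
The example question in the next step only needs the weather tool, but Functionary's strength is choosing among several. If you want to see that in action, you can register additional functions in the same list; the get_stock_price tool below is a hypothetical illustration, not part of the original example:

# Hypothetical second tool, added only to demonstrate multi-tool selection.
tools.append({
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the latest price for a stock ticker",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    }
})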

Step 3: Crafting Your Messages

Prepare your query to the model by setting up the messages:

messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]

# Render the conversation and tool definitions into Functionary's prompt format.
final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
tokenizer.padding_side = "left"  # decoder-only models should be left-padded for generation
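
If anything looks off later, printing the rendered prompt is the quickest way to confirm that the chat template picked up your tools correctly:

# Optional debugging step: inspect the fully rendered prompt string.
print(final_prompt)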

Step 4: Running the Model

Now that everything is set, you can execute the model and get your output:

# Tokenize the prompt and move it to the GPU (this example assumes a CUDA device).
inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
print(tokenizer.decode(pred.cpu()[0]))
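
The decoded string above includes the prompt as well as the model's answer. A common tweak (a convenience, not something Functionary requires) is to slice off the prompt tokens so you only see what was generated:

# Decode only the newly generated tokens, skipping the prompt portion.
generated = pred.cpu()[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))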

Calling Functionary Through an OpenAI-Compatible Server

Functionary can also be served behind an OpenAI-compatible endpoint (the Functionary repository ships a vLLM-based server for this), which lets you query it with the standard openai Python client:

from openai import OpenAI

# Point the client at a locally hosted Functionary server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")
response = client.chat.completions.create(
    model="path/to/functionary/model/",
    messages=[{"role": "user", "content": "What is the weather for Istanbul?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    }
                },
                "required": ["location"]
            }
        }
    }],
    tool_choice="auto"  # let the model decide whether to call a tool
)
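
Because the server speaks the standard OpenAI chat-completions protocol, the round trip after a tool call works the same way it does with OpenAI's API. The sketch below assumes you have a real weather lookup to run; the temperature value is a hypothetical stand-in:

import json

# Pull out the tool call the model produced.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
print(tool_call.function.name, args)  # e.g. get_current_weather {'location': 'Istanbul'}

# Feed the tool's result back so the model can answer in natural language.
followup = client.chat.completions.create(
    model="path/to/functionary/model/",
    messages=[
        {"role": "user", "content": "What is the weather for Istanbul?"},
        response.choices[0].message,  # the assistant turn containing the tool call
        {"role": "tool", "tool_call_id": tool_call.id,
         "content": json.dumps({"temperature": "22 C"})},  # hypothetical result
    ],
)
print(followup.choices[0].message.content)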

Troubleshooting Tips

As with any advanced tool, you might hit a few snags along the way. Here are some troubleshooting ideas:

  • Ensure all packages are up to date. Sometimes, compatibility issues arise from outdated libraries.
  • If you’re experiencing performance issues, check your device’s resources. Ensure it has enough memory and processing capacity.
  • Examine function definitions closely. Make sure each parameter is defined as expected.
  • In case of any unexpected outputs, refer to the detailed documentation on the Functionary model card and GitHub repository.
  • For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

Now, go ahead and unleash the power of Functionary in your projects. Happy coding!
