How to Use MediaTek Research Breeze-7B-FC-v1_0: A Comprehensive Guide

Oct 28, 2024 | Educational

MediaTek Research Breeze-7B-FC-v1_0 is a 7B-parameter language model that extends the Breeze-7B family with function calling, letting it invoke tools you define rather than only generating free-form text. In this article, we'll guide you through how to host the model, run instruction following and function calling, and troubleshoot common issues.

Understanding Breeze-7B-FC

Before we dive into the “how-to,” let’s explore what makes Breeze-7B-FC special. Imagine you have a versatile toolbox, and each tool in it represents a unique capability of the Breeze-7B-FC model. Just as a skilled craftsman uses different tools for different tasks, Breeze-7B-FC employs function calling to handle a wider variety of language tasks efficiently.
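Concretely, function calling means the model can respond to a query not with prose but with a structured request to run a function you have described; your code executes the function and feeds the result back. As a rough illustration (the field names mirror the step-5 code below, but the exact wire format comes from the mtkresearch prompt engine):

# Illustrative shape of a function-calling turn (see step 5 for the real flow):
model_turn = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_0",  # hypothetical call ID
        "function": {
            "name": "get_current_weather",
            "arguments": '{"location": "Taipei", "unit": "celsius"}'
        }
    }]
}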

Getting Started

Here is how you can set up and use the Breeze-7B-FC model:

1. Demo with Kaggle Kernel

If you want to try the model before setting anything up locally, a demo is available as a Kaggle kernel.

2. Install Dependencies

Install the required package using pip:

pip install mtkresearch

3. Hosting the Model with vLLM

Use the following code to host the model:

from vllm import LLM, SamplingParams

num_gpu = 1  # set to the number of GPUs available for tensor parallelism

llm = LLM(
    model='MediaTek-Research/Breeze-7B-FC-v1_0',
    tensor_parallel_size=num_gpu,
    gpu_memory_utilization=0.7,  # fraction of GPU memory vLLM may reserve
    dtype='half'                 # load weights in fp16
)

turn_end_token_id = 61876  # token ID that marks the end of a turn; used as a stop token
params = SamplingParams(
    temperature=0.01,      # near-greedy decoding, suitable for structured tool calls
    top_p=0.01,
    max_tokens=4096,
    repetition_penalty=1.1,
    stop_token_ids=[turn_end_token_id]
)

def _inference(prompt, llm, params):
    # Generate one completion and return its text.
    return llm.generate(prompt, params)[0].outputs[0].text
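With the engine up, a quick smoke test confirms generation works end to end. The raw string here is just a placeholder; real prompts should be built with MRPromptV2 as shown in the next steps.

print(_inference("Hello", llm, params))  # placeholder prompt; expect generated text back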

4. Instruction Following

To implement instruction following, run the following code:

from mtkresearch.llm.prompt import MRPromptV2

sys_prompt = ("You are a helpful AI assistant built by MediaTek Research. "
              "The user you are helping speaks Traditional Chinese and comes from Taiwan.")
prompt_engine = MRPromptV2()
conversations = [
    {"role": "system", "content": sys_prompt},
    {"role": "user", "content": ""}  # put your user query here
]
prompt = prompt_engine.get_prompt(conversations)        # render the chat into the model's prompt format
output_str = _inference(prompt, llm, params)
result = prompt_engine.parse_generated_str(output_str)  # parse raw output back into a conversation turn
print(result)
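Since parse_generated_str returns the assistant turn as a conversation-ready dict (step 5 below appends it directly), extending the chat to multiple turns is just a matter of growing the list. A minimal sketch, assuming the parsed result carries the usual role/content keys:

conversations.append(result)  # keep the assistant's reply in the history
conversations.append({"role": "user", "content": ""})  # your follow-up question here
prompt = prompt_engine.get_prompt(conversations)
print(prompt_engine.parse_generated_str(_inference(prompt, llm, params)))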

5. Function Calling

For function calling, utilize this snippet:

import json
from mtkresearch.llm.prompt import MRPromptV2

functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g., San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
            }
        },
        "required": ["location"]
    }
}]

def fake_get_current_weather(location, unit=None):
    # Stand-in for a real weather API; always reports 30 degrees.
    return {"temperature": 30}

mapping = {
    "get_current_weather": fake_get_current_weather
}
prompt_engine = MRPromptV2()

# Stage 1: query -- the model sees the function specs and decides whether to call one
conversations = [{"role": "user", "content": ""}]  # your user query, e.g. a weather question
prompt = prompt_engine.get_prompt(conversations, functions=functions)
output_str = _inference(prompt, llm, params)
result = prompt_engine.parse_generated_str(output_str)
print(result)  # expected to contain a "tool_calls" entry for get_current_weather

# Stage 2: execute the called function
conversations.append(result)  # keep the assistant's tool-call turn in the history
tool_call = result["tool_calls"][0]
func_name = tool_call["function"]["name"]
func = mapping[func_name]
arguments = json.loads(tool_call["function"]["arguments"])  # arguments arrive as a JSON string
called_result = func(**arguments)

# Stage 3: feed the executed result back so the model can answer in natural language
conversations.append({
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "name": func_name,
    "content": json.dumps(called_result)
})
prompt = prompt_engine.get_prompt(conversations, functions=functions)
output_str2 = _inference(prompt, llm, params)
result2 = prompt_engine.parse_generated_str(output_str2)
print(result2)  # final natural-language answer incorporating the tool output
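Note that the stage-2 snippet assumes the model produced exactly one tool call. In practice it may answer directly or emit several calls, so a more defensive version of stages 2 and 3 is worth sketching. The .get fallbacks and the unknown-function guard here are our assumptions about the parsed dict's shape, not documented mtkresearch behavior:

# Defensive variant of stages 2-3: handle zero, one, or many tool calls.
conversations.append(result)  # keep the assistant turn in the history
for tool_call in result.get("tool_calls", []):
    func = mapping.get(tool_call["function"]["name"])
    if func is None:
        continue  # unknown function name; skip (or log) rather than crash
    arguments = json.loads(tool_call["function"]["arguments"])
    conversations.append({
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "name": tool_call["function"]["name"],
        "content": json.dumps(func(**arguments))
    })
if not result.get("tool_calls"):
    print(result.get("content"))  # the model answered directly, no tool needed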

Troubleshooting

If you encounter issues while utilizing the Breeze-7B-FC model, consider these troubleshooting tips:

  • Ensure your Python environment has all dependencies installed (mtkresearch and vllm).
  • Check that the correct model path, MediaTek-Research/Breeze-7B-FC-v1_0, is provided.
  • If you run into out-of-memory errors, try reducing gpu_memory_utilization (see the sketch after this list).
  • Make sure the function definitions are valid JSON Schema and match the signatures of the functions you actually execute.
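For the memory tip in particular, a lower-footprint initialization might look like the sketch below; max_model_len is a standard vLLM option for capping context length (and thus KV-cache size), not something specific to Breeze.

llm = LLM(
    model='MediaTek-Research/Breeze-7B-FC-v1_0',
    tensor_parallel_size=1,
    gpu_memory_utilization=0.5,  # lower than the 0.7 used above
    max_model_len=4096,          # cap context length to shrink the KV cache
    dtype='half'
)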

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
