How to Use the Vistral-7B-Chat for Conversational AI

May 18, 2024 | Educational

If you’re looking to supercharge your conversational AI with the Vistral-7B-Chat model, you’ve arrived at the right place! This article will guide you step-by-step through the setup process, help you understand the code involved, and offer troubleshooting tips to ensure a smooth experience.

Getting Started with Vistral-7B-Chat

To start building your conversational agent, you’ll need a few essential packages. Make sure you have torch and transformers installed in your Python environment. Here’s how to set it up:

pip install torch transformers
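If you want to confirm the installation succeeded before running the main script, a quick standard-library check can report which of the two packages are present (the helper function name here is our own, purely for illustration):

```python
import importlib.metadata

def check_packages(names):
    """Return a dict mapping each package name to its installed version, or None."""
    versions = {}
    for name in names:
        try:
            versions[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            versions[name] = None
    return versions

if __name__ == "__main__":
    for pkg, ver in check_packages(["torch", "transformers"]).items():
        print(f"{pkg}: {ver if ver else 'not installed -- run: pip install ' + pkg}")
```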

Usage Instructions

We will use the tokenizer's default chat template. Below is a detailed breakdown of the code you will need to run.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Vietnamese system prompt. In English it roughly says: "You are an enthusiastic
# and honest Vietnamese assistant. Always answer as helpfully as possible while
# staying safe. Your answers must not contain any harmful, racist, sexist, toxic,
# dangerous, or illegal content. If a question is nonsensical or factually
# incoherent, explain why instead of answering something incorrect. If you do not
# know the answer, say that you do not know rather than sharing false information."
system_prompt = "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn."
system_prompt += " Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào."
system_prompt += " Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao, thay vì trả lời một điều gì đó không chính xác."
system_prompt += " Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trả lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch."

tokenizer = AutoTokenizer.from_pretrained("Viet-Mistral/Vistral-7B-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "Viet-Mistral/Vistral-7B-Chat",
    torch_dtype=torch.bfloat16,  # Use torch.float16 on GPUs without bfloat16 support (e.g. V100)
    device_map='auto',
    use_cache=True,
)

# Seed the conversation with the system prompt, then chat in a loop.
conversation = [{"role": "system", "content": system_prompt}]
while True:
    human = input("Human: ")
    if human.lower() == "reset":
        # Clear the history but keep the system prompt.
        conversation = [{"role": "system", "content": system_prompt}]
        print("The chat history has been cleared!")
        continue
    conversation.append({"role": "user", "content": human})
    # Render the whole conversation with the tokenizer's chat template.
    input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt").to(model.device)

    out_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=768,
        do_sample=True,
        top_p=0.95,
        top_k=40,
        temperature=0.1,  # low temperature keeps answers focused
        repetition_penalty=1.05,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    assistant = tokenizer.batch_decode(out_ids[:, input_ids.size(1):], skip_special_tokens=True)[0].strip()
    print("Assistant: ", assistant)
    conversation.append({"role": "assistant", "content": assistant})

Code Explanation: An Analogy

Think of the setup and execution of the Vistral-7B-Chat as preparing a well-organized kitchen for a cooking show. Each ingredient and tool represents a piece of code that plays a vital role in your AI conversation:

  • Ingredients (system_prompt): Just like you need the right ingredients to make a dish, the system prompt sets the direction of your conversation, ensuring the AI behaves appropriately.
  • Cooking Equipment (tokenizer and model): These are your tools that process inputs (ingredients) and enable the AI to ‘cook’ up responses based on the context you’ve established.
  • Recipe Instructions (while loop and conversation logic): This part repeats the process, allowing you to ask questions (add ingredients) and receive responses (finished dishes) until you decide to reset and start fresh.
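To make the "recipe" concrete: Vistral inherits Mistral's [INST] ... [/INST] conversation layout. The authoritative string is produced by tokenizer.apply_chat_template; the sketch below only approximates that layout in plain Python so you can see how the system, user, and assistant turns are stitched together:

```python
def render_mistral_style(conversation):
    """Illustrative approximation of a Mistral-style chat template.
    The real format is defined by the tokenizer's chat template; this
    sketch just shows how turns are typically concatenated."""
    text = "<s>"
    system = ""
    for turn in conversation:
        if turn["role"] == "system":
            # The system prompt is usually folded into the first user turn.
            system = turn["content"] + "\n"
        elif turn["role"] == "user":
            text += f"[INST] {system}{turn['content']} [/INST]"
            system = ""
        elif turn["role"] == "assistant":
            text += f" {turn['content']}</s>"
    return text

print(render_mistral_style([
    {"role": "system", "content": "Bạn là một trợ lí Tiếng Việt..."},
    {"role": "user", "content": "Xin chào!"},
]))
```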

Troubleshooting Tips

If you encounter any issues while setting up or running the Vistral-7B-Chat, here are some troubleshooting ideas:

  • Installation Failures: Ensure that you have the correct versions of PyTorch and Transformers installed. You might need to upgrade or reinstall them.
  • Model Not Loading: Double-check that the model name in the code matches the one available on the Hugging Face model hub.
  • Unexpected Outputs: If the responses don’t seem appropriate, revisit your system_prompt to ensure clarity and context.
  • Reset Functionality: If the chat history isn’t clearing, verify that the input condition accurately captures any reset commands.
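As a sketch of the last tip: normalizing the input before comparing makes the reset check forgiving of stray whitespace and capitalization (the extra "clear" alias is an illustrative addition, not part of the original script):

```python
def is_reset_command(text):
    """Return True if the user asked to reset the chat history."""
    return text.strip().lower() in {"reset", "clear"}

# In the main loop you would then write:
#     if is_reset_command(human):
#         conversation = [{"role": "system", "content": system_prompt}]
```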

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

That’s it! You’re now equipped to kickstart conversational interactions using Vistral-7B-Chat. Remember that refining your prompts and supervising the conversations will enhance the quality and relevance of the AI responses over time.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
