How to Build and Use the Quyen Language Model

Feb 28, 2024 | Educational

Welcome to our guide on how to build and use the Quyen series of language models! The Quyen models belong to the Qwen1.5 family and range from the small Quyen-SE (0.5B) to the flagship Quyen-Pro-Max (72B). In this article, we cover the available model versions, the prompt template, and practical steps to get you started.

Model Versions Explained

Quyen models come in six different versions, each tailored for specific needs and computational requirements. Here’s a quick overview:

  • Quyen-SE (0.5B)
  • Quyen-Mini (1.8B)
  • Quyen (4B)
  • Quyen-Plus (7B)
  • Quyen-Pro (14B)
  • Quyen-Pro-Max (72B)

Each model varies in capability, with larger models generally providing more nuanced and detailed outputs.
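If you want to try one of these variants locally, the snippet below is a minimal sketch of loading a checkpoint with the Hugging Face Transformers library. The repository id shown is an assumption for illustration; substitute the id of the Quyen variant you actually want to run.

```python
# Minimal sketch: load a Quyen variant with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for illustration -- replace with the variant you want to run.
model_id = "vilm/Quyen-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps memory use manageable
    device_map="auto",           # place layers on available GPU(s), spilling to CPU if needed
)
```

Larger variants need proportionally more GPU memory, so start with a smaller checkpoint if you are unsure what your hardware can handle.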

Training Data & Datasets

The Quyen models were trained on several curated, publicly available datasets. This curation is intended to make the models well-rounded and able to handle a wide variety of prompts effectively.

Using Prompt Templates

All Quyen models use the ChatML prompt template. Here is a sample prompt:

<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.
<|im_end|>
<|im_start|>user
Hello world.
<|im_end|>

In this structure, the system turn establishes the model's role and the user turn carries the request. Rather than assembling this string by hand, you can build the prompt with the tokenizer's apply_chat_template function, as shown in the example below:

```python
# `tokenizer` and `model` are loaded as shown in the earlier snippet.
messages = [
    {'role': 'system', 'content': 'You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.'},
    {'role': 'user', 'content': 'Hello world.'}
]
# apply_chat_template wraps the messages in ChatML tokens and returns a tensor of input IDs;
# add_generation_prompt=True appends the opening of the assistant turn so the model knows to respond.
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt').to(model.device)
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
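To double-check what apply_chat_template actually produces, you can ask it for the formatted string instead of token IDs and compare it against the ChatML sample above. This is a quick sanity check using the same tokenizer and messages as in the previous snippet:

```python
# Render the prompt as text (no tokenization) to confirm it follows ChatML.
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string instead of token IDs
    add_generation_prompt=True,  # append the opening of the assistant turn
)
print(prompt_text)
```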

Benchmarks

Benchmark results for the Quyen models are still in progress; this section will be updated with performance metrics as soon as they are available.

Troubleshooting and Support

If you encounter any issues while using the Quyen models, consider the following troubleshooting tips:

  • If you are fine-tuning, make sure your training datasets are correctly set up and accessible in your environment.
  • Check the hardware requirements before running the larger models; the 14B and 72B variants in particular need substantial GPU memory (see the memory-saving sketch after this list).
  • Verify that the prompt format strictly adheres to the ChatML structure, as deviations may lead to unexpected behavior.
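On the resource point: if a larger checkpoint does not fit in your GPU memory, one common workaround is 4-bit quantization via bitsandbytes. The sketch below assumes the bitsandbytes package is installed, and the repository id is again only an illustrative placeholder:

```python
# Sketch: load a large Quyen variant in 4-bit to reduce GPU memory use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "vilm/Quyen-Pro-Max-v0.1"  # assumed id for the 72B variant; adjust as needed

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4 format
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```

Quantization trades a small amount of output quality for a large reduction in memory footprint, which is usually an acceptable trade-off for local experimentation.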

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
