The Commencis LLM is an exciting generative text model tailored for the Turkish banking sector. Built on the foundation of the Mistral 7B model, this tool is equipped to generate nuanced and contextually relevant content specifically for banking-related queries. In this article, we will guide you on how to effectively implement this model for your needs, ensuring a smooth experience.
Model Overview
Commencis LLM has been meticulously fine-tuned from the Mistral 7B model to cater specifically to the banking domain. It handles both general Turkish language nuances and domain-specific banking terminology. Let’s break it down in simpler terms:
- Developed by: Commencis
- Language: Turkish
- Finetuned from model: Mistral 7B
Training Insights
The training of this model involves two crucial phases: Supervised Fine-Tuning (SFT) and Reward Modeling with Reinforcement Learning from Human Feedback (RLHF). The model utilizes a diverse dataset ranging from banking dictionaries to domain-specific examples, ensuring it understands the intricacies of banking.
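Commencis has not published its training code here, but to make the first phase concrete, below is a minimal, purely illustrative sketch of what a supervised fine-tuning step could look like with Hugging Face’s TRL library. The dataset file, output directory, and model choice are assumptions for demonstration, not the actual Commencis recipe:

```python
# Illustrative SFT sketch with the TRL library. The dataset path and output
# directory are assumptions, not Commencis's actual training recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical JSON file of instruction/response pairs formatted into a "text" column.
dataset = load_dataset("json", data_files="banking_instructions.json", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # the base model this article names
    train_dataset=dataset,
    args=SFTConfig(output_dir="commencis-sft-demo"),
)
trainer.train()
```

The RLHF phase would then fit a reward model on human preference data and optimize the fine-tuned model against it; that pipeline is beyond the scope of this setup guide.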
How to Set Up and Use Commencis LLM
Now, let’s step into the coding world. Think of the setup as preparing a special meal. You’re gathering ingredients and tools to create a dish specifically designed for what you need. Below is a guide to cooking up responses with Commencis LLM:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

class TextGenerationAssistant:
    def __init__(self, model_id: str):
        # Gather the ingredients: the tokenizer and the 8-bit quantized model
        # (load_in_8bit=True requires the bitsandbytes package and a GPU).
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        self.model = AutoModelForCausalLM.from_pretrained(
            model_id, device_map="auto", load_in_8bit=True, load_in_4bit=False
        )
        self.pipe = pipeline(
            "text-generation", model=self.model, tokenizer=self.tokenizer,
            device_map="auto", max_new_tokens=1024, return_full_text=True,
            repetition_penalty=1.0,
        )
        self.sampling_params = dict(do_sample=True, temperature=0.5, top_k=50, top_p=0.9)
        self.system_prompt = "Sen yardımcı bir asistansın. Sana verilen talimat ve girdilere en uygun cevapları üreteceksin."

    def format_prompt(self, user_input):
        # Wrap the system prompt and user input in Mistral's [INST] ... [/INST] tags.
        return "[INST] " + self.system_prompt + " " + user_input + " [/INST]"

    def generate_response(self, user_query):
        prompt = self.format_prompt(user_query)
        outputs = self.pipe(prompt, **self.sampling_params)
        # return_full_text=True echoes the prompt, so keep only the text after [/INST].
        return outputs[0]["generated_text"].split("[/INST]")[1].strip()

assistant = TextGenerationAssistant(model_id="Commencis/Commencis-LLM")
user_query = "Faiz oranı yükseldiğinde kredi maliyetim nasıl etkilenir?"
response = assistant.generate_response(user_query)
print(response)
```
In this code:
- The TextGenerationAssistant class acts as your chef, handling the whole cooking process.
- Its constructor initializes the model and tokenizer, similar to gathering your ingredients.
- The format_prompt method wraps your query in Mistral’s instruction tags, and generate_response is where the magic happens: crafting the output based on your input!
Using the Chat Template
Once you’ve set up your model, you can engage in conversation effortlessly. The chat template is like having a pre-set recipe for various interactions. Here’s how to set it up:
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Commencis/Commencis-LLM"
messages = [{"role": "user", "content": "Faiz oranı yükseldiğinde kredi maliyetim nasıl etkilenir?"}]

# The tokenizer's chat template wraps the message in the instruction tags the model expects.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline("text-generation", model=model, torch_dtype=torch.float16, device_map="auto")
outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.5, top_k=50, top_p=0.9)
# The pipeline echoes the prompt, so keep only the text generated after [/INST].
print(outputs[0]["generated_text"].split("[/INST]")[1].strip())
```
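If you are curious what apply_chat_template actually produced, print the prompt before generating. With a Mistral-style chat template it should look roughly like the comment below, though the exact output depends on the template shipped with the tokenizer:

```python
print(prompt)
# Roughly expected shape (an assumption; may vary with the tokenizer's template):
# <s>[INST] Faiz oranı yükseldiğinde kredi maliyetim nasıl etkilenir? [/INST]
```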
Troubleshooting Common Issues
While using Commencis LLM, you may encounter some hiccups here and there. Here are a few troubleshooting ideas (a parameter-tweaking sketch follows the list):
- Unexpected Outputs: If the model hallucinates or produces incorrect responses, consider lowering the temperature parameter to make sampling more focused.
- Language Switching: If the model inadvertently switches languages, ensure your input is consistently in Turkish.
- Repeated Responses: To discourage repetition, raise the repetition_penalty parameter slightly above 1.0.
- Longer Processing Times: If responses are slow, check your device’s compatibility and available resources.
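As a hedged starting point, here is how those knobs could be adjusted when calling the pipeline from the setup section above (the exact values are illustrative, not official recommendations):

```python
# Reuses the `assistant` object from the setup section; values are illustrative.
prompt = assistant.format_prompt("Faiz oranı yükseldiğinde kredi maliyetim nasıl etkilenir?")
outputs = assistant.pipe(
    prompt,
    do_sample=True,
    temperature=0.3,         # lower temperature -> more focused, less hallucination
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.1,  # values above 1.0 discourage repeated phrases
)
print(outputs[0]["generated_text"].split("[/INST]")[1].strip())
```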
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
Remember, Commencis LLM, like all models, has its limitations, including potential inaccuracies and biases. Familiarity with these aspects can improve your experience using the model.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

