An instruction-tuned Llama-2 biased towards fiction writing and conversation.
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here! Meet Pygmalion-2 7B (formerly known as Metharme), designed to enhance your storytelling and conversational experiences. This model is based on Llama-2 7B released by Meta AI.
Initially, the Metharme models were an experiment in building a model usable for conversation, role-playing, and story-writing that could still be guided with natural language like other instruct models. After much deliberation, it was concluded that the Metharme prompting format is both superior to and easier to use than the classic Pygmalion prompting format.
This model has been fine-tuned using a mixture of regular instruction data, role-playing, fictional stories, and conversations with synthetically generated instructions attached. Best of all? It’s freely available for both commercial and non-commercial use under the Llama-2 license!
## Prompting Your Model
To get the most out of this model, you must use well-structured prompts. The model is trained to recognize three distinct roles, each marked by its own token: `<|system|>`, `<|user|>`, and `<|model|>`.
- `<|system|>`: Conveys out-of-channel information to the model, such as instructions or a specific mode to enter (for example, role-play).
- `<|user|>`: Represents the user's input directly.
- `<|model|>`: Indicates where the model should begin generating a response.
These tokens can be chained multiple times to create an entire conversation history, leading to dynamic interactions!
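As a minimal sketch (not an official API), the chaining described above can be done with plain string concatenation; the token strings are the ones this model card describes, while the helper function and example messages are our own illustration:

```python
# Chaining Metharme role tokens into a conversation history.
# The tokens are from the Pygmalion-2 prompting format; build_prompt
# and the example messages are hypothetical illustrations.

SYSTEM, USER, MODEL = "<|system|>", "<|user|>", "<|model|>"

def build_prompt(system, turns, user_msg):
    """Assemble: the system block, each prior (user, model) turn,
    the new user message, then a trailing <|model|> token so the
    model knows it should generate the next reply."""
    prompt = f"{SYSTEM}{system}"
    for user_text, model_text in turns:
        prompt += f"{USER}{user_text}{MODEL}{model_text}"
    return prompt + f"{USER}{user_msg}{MODEL}"

history = [("Hi there!", "Hello! How can I help you today?")]
prompt = build_prompt("Enter chat mode.", history, "Tell me a story.")
print(prompt)
```

Because the history is just a growing string of token-delimited turns, each new exchange is appended to it before the next call, which is what makes the multi-turn interactions dynamic.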
### Prompting Example
Here’s how you might structure a prompt, with `{persona}` as a placeholder for your character description:

    <|system|>Enter RP mode. Pretend to be a character whose persona follows:
    {persona}

    You shall reply to the user while staying in character, and generate long responses.
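To make this concrete, here is a sketch of assembling that RP-mode prompt in Python; the persona text and the user's first message are made-up placeholders, not part of the model card:

```python
# Assembling the RP-mode example into a single prompt string.
# The persona and the user's message are hypothetical placeholders.
persona = "Aria is a cheerful travel guide who loves telling stories."

system_block = (
    "<|system|>Enter RP mode. Pretend to be a character whose persona follows:\n"
    f"{persona}\n\n"
    "You shall reply to the user while staying in character, "
    "and generate long responses."
)

# Append the first user turn and a trailing <|model|> token so the
# model knows to generate the character's reply next.
rp_prompt = system_block + "<|user|>Where should I visit first?<|model|>"
print(rp_prompt)
```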
## Dataset
The dataset used to fine-tune this model includes Pygmalion’s own PIPPA, other instruct datasets, and data sourced from various role-playing forums. This rich dataset enables Pygmalion-2 to craft intricate stories and engage in multifaceted conversations.
## Limitations and Biases
While Pygmalion-2 is a creative powerhouse, it has its limitations. Its primary use-case is fictional writing for entertainment, and it is crucial to note the following:
- This model was **not** fine-tuned to be safe and harmless. It may produce outputs that are socially unacceptable or undesirable, as it has been trained on data containing profanity and lewd content.
- Outputs can often be factually inaccurate or misleading, so treat anything the model states as unverified and review it with caution.
## Troubleshooting
If you encounter issues or have concerns regarding the outputs of Pygmalion-2, consider the following troubleshooting ideas:
- Ensure that your prompts are clear and well-structured, as this significantly affects the model’s responses.
- Be mindful of the biases in the training data; adjust your prompts if you notice undesirable behavior.
- Regularly review any outputs to ensure they align with your intended context and themes.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
## Wrap-Up
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

