Welcome to the enchanting world of AI storytelling! The Storytime 13B model, created by Charles Goddard, is designed to provide immersive storytelling experiences. But how can you access and utilize this fascinating model? Let’s dive in!
What is Storytime 13B?
Storytime 13B is a language model that specializes in creating narrative content. It's built on the foundations of advanced AI technologies and allows users to engage with unique storytelling prompts. The GPTQ repository provides several quantization branches (different bit widths and group sizes) to help tailor the model to your hardware's capabilities.
Downloading Storytime 13B
To start using the Storytime 13B model, choose whichever of the following download methods suits you:
Option 1: Via Text-Generation-WebUI
- Ensure you have the latest version of text-generation-webui.
- Go to the **Model tab**.
- Under **Download custom model or LoRA**, type TheBloke/storytime-13B-GPTQ. To download from a specific branch, append it after a colon, for example TheBloke/storytime-13B-GPTQ:gptq-4bit-32g-actorder_True.
- Click **Download** and wait until it completes.
- After the download, refresh the model list, select your model, and start generating stories!
Option 2: Command Line Download
If you prefer using the command line, here are the steps:
pip3 install huggingface-hub
mkdir storytime-13B-GPTQ
huggingface-cli download TheBloke/storytime-13B-GPTQ --local-dir storytime-13B-GPTQ --local-dir-use-symlinks False
To download from a different branch, add the --revision parameter:
huggingface-cli download TheBloke/storytime-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir storytime-13B-GPTQ --local-dir-use-symlinks False
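On fast connections, you can optionally accelerate downloads with the hf_transfer backend that huggingface_hub supports (on Windows, set the environment variable with set instead):
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/storytime-13B-GPTQ --local-dir storytime-13B-GPTQ --local-dir-use-symlinks False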
Using Storytime 13B in Python
Once you have successfully downloaded the model, you can start using it in your Python environment. First, ensure you have all necessary packages installed:
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/auto-gptq-index/wheel/cu118 # Use cu117 if on CUDA 11.7
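Before moving on, an optional sanity check is to confirm that the packages import cleanly and that a CUDA device is visible:
python3 -c "import torch, transformers, auto_gptq; print(torch.cuda.is_available())"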
After installing the necessary packages, use this code template to interact with the Storytime model:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/storytime-13B-GPTQ"
# device_map="auto" places the quantized weights on your GPU automatically
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me a story about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
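Since pipeline is imported above, you can also wrap the model and tokenizer in a standard transformers text-generation pipeline. Here is a minimal sketch reusing the objects from the previous snippet, with common sampling settings:
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15,
)
print(pipe(prompt)[0]["generated_text"])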
Troubleshooting
Running into issues? Here are some troubleshooting tips:
- Ensure all dependencies are installed correctly.
- If the model fails to load, double-check your paths and branch names (the snippet after this list shows how to list the available branches).
- For better performance, pick the quantization branch that best matches your GPU's VRAM and quality needs (lower bit widths and larger group sizes use less memory).
- Refer to the Hugging Face documentation for detailed installation instructions.
- For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
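If you are unsure which quantization branches the repository offers, you can list its refs programmatically. A minimal sketch, assuming a huggingface_hub version that provides HfApi.list_repo_refs:
from huggingface_hub import HfApi

# Print every branch of the GPTQ repo so you can pick a valid revision
refs = HfApi().list_repo_refs("TheBloke/storytime-13B-GPTQ")
for branch in refs.branches:
    print(branch.name)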
Key Takeaways
With Storytime 13B, storytelling becomes an AI-driven adventure! Whether through direct command-line downloads or via user-friendly web interfaces, exploring narrative generation is now at your fingertips. Keep experimenting with different prompts and enhance your AI storytelling experience.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

