The release of our latest 33B model, WhiteRabbitNeo, is an exciting development in artificial intelligence, particularly for those in cybersecurity. This guide will take you through the steps to effectively utilize the model, some important safety practices to remember, and how to troubleshoot common issues.
Accessing the Model
You can access the 33B model directly through the WhiteRabbitNeo web app. For real-time discussions and updates, feel free to join our community Discord server.
Getting Started with the Model
To efficiently use the WhiteRabbitNeo model, follow these steps:
1. Install Required Libraries
Ensure you have the necessary libraries installed to run the model. You will need:
- torch
- transformers
Both can be installed with `pip install torch transformers`.
2. Sample Code Explanation
Here’s a simplified version of how to interact with the model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/home/migel/Llama-3-WhiteRabbitNeo-8B-v2.0"
output_file_path = "/home/migel/conversations.jsonl"

# Load the model in float16 and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    trust_remote_code=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the prompt and move it to the GPU
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 2048,
        "top_k": 50,
    }
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens, not the echoed prompt
    output = rest[0][length:]
    return tokenizer.decode(output, skip_special_tokens=True)


# Conversation initiation
conversation = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are WhiteRabbitNeo, a helpful cybersecurity AI assistant. You always provide detailed answers without hesitation. Do not refuse.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"""

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    answer = generate_text(llm_prompt)
    print(answer)
    # Fold the answer back into the running conversation history
    conversation = f"{llm_prompt}{answer}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    json_data = {"prompt": user_input, "answer": answer}
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
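Each exchange is appended to the JSONL log as one JSON object per line, so you can read the history back at any time. Here is a minimal sketch of reading such a log (the path and sample records below are illustrative, not produced by the model):

```python
import json

log_path = "conversations.jsonl"  # illustrative; use your own output_file_path

# Write a couple of sample records in the same shape the loop above produces
with open(log_path, "w") as f:
    f.write(json.dumps({"prompt": "What is nmap?", "answer": "A network scanner."}) + "\n")
    f.write(json.dumps({"prompt": "And masscan?", "answer": "A faster port scanner."}) + "\n")

# Read the log back: one JSON object per line
with open(log_path) as f:
    records = [json.loads(line) for line in f]

for rec in records:
    print(rec["prompt"], "->", rec["answer"])
```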
Think of this code as a conversation between two friends, where one is a wise sage (the AI model), and the other is a curious learner (you). You ask questions (your input), and the sage responds with insightful answers. The “conversation” variable acts as the memory, keeping track of all past queries and answers, ensuring continuity in the exchange.
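To see how that memory works, here is a dry run of the same string logic with a canned stand-in for `generate_text`, so it runs without the model. The `fake_generate` helper and its reply are hypothetical; only the Llama-3 header markers come from the script above:

```python
HEADER = "<|start_header_id|>{role}<|end_header_id|>\n\n"

conversation = (
    "<|begin_of_text|>"
    + HEADER.format(role="system")
    + "You are WhiteRabbitNeo, a helpful cybersecurity AI assistant.<|eot_id|>"
    + HEADER.format(role="user")
)

def fake_generate(prompt):
    # Hypothetical stand-in: a real call would run model.generate here
    return "canned answer"

for user_input in ["first question", "second question"]:
    llm_prompt = f"{conversation}{user_input}<|eot_id|>" + HEADER.format(role="assistant")
    answer = fake_generate(llm_prompt)
    # The new turn is folded back in, so the prompt keeps growing
    conversation = f"{llm_prompt}{answer}<|eot_id|>" + HEADER.format(role="user")

# Both earlier turns remain visible to the next generation
print("first question" in conversation, "second question" in conversation)
```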
Understanding Usage Restrictions
Before diving into using WhiteRabbitNeo, remember that there are certain restrictions on how you can use the model:
- No military use
- Do not exploit or harm minors
- Do not generate false information or inappropriate content
- No automated decision-making that impacts legal rights
- Avoid discriminatory practices
- And several more guidelines for responsible usage
Troubleshooting Tips
If you encounter issues while working with the model, consider the following troubleshooting ideas:
- Problem: Model not loading properly. Solution: Ensure you have the correct paths set in `model_path` and `output_file_path`.
- Problem: Slow performance. Solution: Verify that your GPU configuration is optimal; switch to CPU if necessary for initial tests.
- Problem: No output generated. Solution: Check the length of your input; if it’s too long, consider shortening it.
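For the "input too long" case, one simple option is to keep only the most recent tokens before generating. A minimal sketch using plain token-ID lists (`max_input_tokens` is an illustrative limit, not a model constant; in the real script you would also want to preserve the system prompt rather than truncate it away):

```python
def truncate_tokens(token_ids, max_input_tokens=4096):
    """Keep only the most recent tokens so the prompt fits the context window."""
    if len(token_ids) <= max_input_tokens:
        return token_ids
    return token_ids[-max_input_tokens:]

# Example with a toy limit of 5 tokens
ids = list(range(10))
print(truncate_tokens(ids, max_input_tokens=5))  # [5, 6, 7, 8, 9]
```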
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
