PAR LLAMA is an incredible TUI application designed to help users effectively manage and utilize Ollama-based LLMs across multiple operating systems, including Windows, Windows WSL, macOS, and Linux. In this blog, we’ll walk you through the prerequisites, installation procedures, and how to use the application to its fullest potential. So, let’s dive in!
Table of Contents
- About
- Prerequisites for Running
- Installing from PyPI using pipx
- Installing from PyPI using pip
- Command Line Arguments
- Running PAR_LLAMA
- Quick Start Chat Workflow
- Custom Prompts
- Themes
- FAQ
- What’s New
About
PAR LLAMA is a TUI application that simplifies the management and utilization of Ollama-based large language models (LLMs). Built using Textual and Rich, it features both dark and light modes, as well as custom themes, catering to different user preferences.
Prerequisites for Running
- Install and run Ollama.
- Install Python 3.11 or newer. You can find the installers at python.org.
- On Windows, use Scoop to easily manage Python installation by running:
scoop install python
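To confirm your interpreter meets the 3.11 minimum before installing, a quick check like the one below can help. This is a sketch: the meets_minimum helper is illustrative rather than part of parllama, and on some systems the interpreter is named python rather than python3.

```shell
# Sketch: check a "major.minor" Python version string against the 3.11
# minimum required by PAR LLAMA. meets_minimum is an illustrative helper,
# not something parllama ships.
meets_minimum() {
  major=${1%%.*}
  minor=${1#*.}
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 11 ]; }
}

# Query the local interpreter (may be "python" instead of "python3").
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])' 2>/dev/null || echo "0.0")
if meets_minimum "$ver"; then
  echo "Python $ver OK"
else
  echo "Python $ver is too old; install 3.11 or newer"
fi
```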
Installing from PyPI Using pipx
If you don’t have pipx installed, you can install it using:
pip install pipx
pipx ensurepath
Once pipx is installed, run the following command:
pipx install parllama
To upgrade an existing installation, use the --force flag:
pipx install parllama --force
Installing from PyPI Using pip
Create a virtual environment and install PAR LLAMA with the following commands:
mkdir parllama
cd parllama
python -m venv venv
source venv/bin/activate
On Windows, activate with venv\Scripts\activate instead.
pip install parllama
Command Line Arguments
PAR LLAMA supports several command line arguments which allow customization:
usage: parllama [-h] [-d DATA_DIR] [-u OLLAMA_URL] [-t THEME_NAME] [-m dark,light]
[-s local,site,chat,prompts,tools,create,options,logs] ...
Some notable flags include:
- -h: show help message
- -v: show version information
- -d DATA_DIR: data directory (defaults to ~/.parllama)
- -u OLLAMA_URL: URL of your Ollama instance (defaults to http://localhost:11434)
- -t THEME_NAME: set the theme name
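Several of these flags can be combined in a single invocation. The snippet below is a small sketch that assembles and prints such a command line; the OLLAMA_HOST and PARLLAMA_THEME variable names are illustrative conventions of this example, not ones the application itself reads, and the theme name is a placeholder.

```shell
# Sketch: build a parllama command line from optional environment
# overrides. The variable names and the "par" fallback theme are
# illustrative placeholders, not parllama defaults you must use.
build_cmd() {
  url="${OLLAMA_HOST:-http://localhost:11434}"
  theme="${PARLLAMA_THEME:-par}"
  echo "parllama -u $url -t $theme -m dark"
}
build_cmd
```

Printing the command first makes it easy to verify the URL and theme before actually launching the TUI.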
Running PAR LLAMA
With pipx Installation
To run PAR LLAMA from anywhere, simply use:
parllama
With pip Installation
Activate your virtual environment and execute:
source venv/bin/activate
On Windows, activate with venv\Scripts\activate instead.
parllama
Quick Start Chat Workflow
Here’s a basic workflow to follow:
- Start PAR LLAMA.
- Click the “Site” tab and press ^R to fetch the latest models.
- Filter models by typing “llama3”.
- Select the llama3 model and press ^P to pull it to your local machine.
- Once downloaded, jump to the chat tab and type a message.
Custom Prompts
Users can create a library of custom prompts for quick chat initiation. You can set up system prompts and user messages that are sent immediately upon loading.
Themes
Themes are JSON files stored in the themes folder and offer options for both dark and light modes. You can select a theme at startup with the --theme-name option.
FAQ
- Do I need Docker? Docker is required only for quantizing models from Hugging Face.
- Does PAR LLAMA require internet access? By default, no internet access is needed unless updates are enabled.
- Can PAR LLAMA run on ARM? Yes, it can run wherever Python is compatible.
What’s New
Discover the latest updates in PAR LLAMA:
- v0.3.7: Fixes potential crashes with multiple models.
- v0.3.6: Added options for chat input history saving.
- v0.3.5: First launch welcome feature added.
Troubleshooting
If you encounter issues while using PAR LLAMA, consider the following troubleshooting steps:
- Ensure that all prerequisites are adequately installed.
- Verify that your Ollama server is running properly.
- Check environment variables have the correct values set.
- If you face connectivity issues, confirm the URL specified matches your Ollama instance.
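A quick way to confirm the last point is to probe the Ollama HTTP API directly. The sketch below assumes curl is available and uses Ollama's /api/tags endpoint, which returns the list of locally installed models when the server is up; the check_ollama helper is our own, not part of parllama.

```shell
# Sketch: probe an Ollama server at the given (or default) URL.
# /api/tags answers with the locally installed models if the server is up.
check_ollama() {
  url="${1:-http://localhost:11434}"
  if curl -fsS "$url/api/tags" >/dev/null 2>&1; then
    echo "Ollama reachable at $url"
  else
    echo "No Ollama server responding at $url"
  fi
}
check_ollama
```

If this reports no server, start Ollama (or correct the -u URL you pass to parllama) before retrying.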
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

