Fully-featured Beautiful Web Interface for Ollama LLMs

Sep 3, 2022 | Educational

[Screenshot: Ollama Next.js UI]

Get up and running with Large Language Models quickly, locally, and even offline. This project aims to be the easiest way for you to get started with LLMs. No tedious and annoying setup required!

Features

  • Beautiful intuitive UI: Inspired by ChatGPT for a familiar user experience.
  • Fully local: Stores chats in the browser’s localStorage for convenience, so no database is needed (see the sketch after this list).
  • Fully responsive: Use your phone to chat, with the same ease as on desktop.
  • Easy setup: No tedious and annoying setup required. Just clone the repo and you’re good to go!
  • Code syntax highlighting: Messages that include code are highlighted for easy reading.
  • Copy codeblocks easily: Easily copy the highlighted code with one click.
  • Download, pull & delete models: Easily download, pull, and delete models directly from the interface.
  • Switch between models: Switch between models fast with a click.
  • Chat history: Chats are saved and easily accessed.
  • Light & Dark mode: Switch between light and dark mode.

Requisites

To use the web interface, these requisites must be met:

  1. Download Ollama and have it running, or run it in a Docker container. Check the docs for instructions. (A quick way to verify that Ollama is reachable is sketched below.)
  2. Node.js (18+) and npm are required. Download them from nodejs.org.
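
To verify that Ollama is reachable, you can ask it for its locally available models. The snippet below is a sketch using Ollama’s documented /api/tags endpoint; run it on Node 18+ (which ships a global fetch) with any TypeScript runner, e.g. npx tsx:

    // Connectivity check: list the models your local Ollama serves.
    const OLLAMA_URL = process.env.NEXT_PUBLIC_OLLAMA_URL ?? "http://localhost:11434";

    async function main(): Promise<void> {
      const res = await fetch(`${OLLAMA_URL}/api/tags`);
      if (!res.ok) {
        throw new Error(`Ollama responded with HTTP ${res.status}`);
      }
      const data = (await res.json()) as { models: { name: string }[] };
      console.log("Available models:", data.models.map((m) => m.name).join(", "));
    }

    main().catch((err) => console.error("Could not reach Ollama:", err));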

Deploy Your Own to Vercel or Netlify in One Click

Ollama only accepts browser requests from origins it trusts, so you’ll need to set the OLLAMA_ORIGINS environment variable on the machine that runs Ollama to your deployed app’s URL: OLLAMA_ORIGINS=https://your-app.vercel.app.
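
To confirm the origin is actually allowed before pointing the deployed app at your machine, you can send a request with an Origin header and inspect the CORS response header a browser would enforce. This is a sketch, not part of the project, and the URL and origin values are placeholders:

    // CORS check: does OLLAMA_ORIGINS allow this origin?
    const OLLAMA_URL = "http://localhost:11434";      // placeholder
    const APP_ORIGIN = "https://your-app.vercel.app"; // placeholder

    async function checkCors(): Promise<void> {
      const res = await fetch(`${OLLAMA_URL}/api/tags`, {
        headers: { Origin: APP_ORIGIN },
      });
      const allowed = res.headers.get("access-control-allow-origin");
      console.log(allowed
        ? `Allowed origin: ${allowed}`
        : "Origin not allowed; check OLLAMA_ORIGINS");
    }

    checkCors().catch(console.error);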

Installation

To install the web interface, follow these instructions:

  1. Clone the repository to a directory on your PC via the command line:

     git clone https://github.com/jakobhoeg/nextjs-ollama-llm-ui

  2. Open the folder:

     cd nextjs-ollama-llm-ui

  3. Rename .example.env to .env:

     mv .example.env .env

  4. If your instance of Ollama is NOT running on the default address and port, change the variable in the .env file to fit your use case:

     NEXT_PUBLIC_OLLAMA_URL=http://localhost:11434

  5. Install dependencies:

     npm install

  6. Start the development server:

     npm run dev

  7. Go to localhost:3000 and start chatting with your favorite model!
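
Under the hood, the UI talks to Ollama over its HTTP API. As a rough illustration of what a chat round-trip looks like, here is a minimal sketch using Ollama’s documented /api/chat endpoint; the helper function and the model name are illustrative, not this project’s source code:

    // Sketch of a non-streaming chat request against the Ollama API.
    // The /api/chat endpoint and its request/response shape come from
    // Ollama's API documentation; everything else is illustrative.
    const OLLAMA_URL = process.env.NEXT_PUBLIC_OLLAMA_URL ?? "http://localhost:11434";

    async function chat(model: string, prompt: string): Promise<string> {
      const res = await fetch(`${OLLAMA_URL}/api/chat`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model,
          messages: [{ role: "user", content: prompt }],
          stream: false, // one JSON object back instead of a token stream
        }),
      });
      if (!res.ok) {
        throw new Error(`Ollama responded with HTTP ${res.status}`);
      }
      const data = (await res.json()) as { message: { content: string } };
      return data.message.content;
    }

    // Example usage (assumes the model has been pulled into Ollama):
    chat("llama3", "Why is the sky blue?").then(console.log).catch(console.error);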

Upcoming Features

This is a to-do list of upcoming features:

  • Voice input support
  • Ability to send an image in the prompt to utilize vision language models (see the sketch after this list)
  • Ability to regenerate responses
  • Import and export chats
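
Of these, the image-prompt item maps directly onto a capability Ollama already exposes: a chat message may carry base64-encoded images in an images field, which vision models such as llava consume. The sketch below shows that API shape; the model name and file path are examples, and the code is not this project’s implementation:

    // Sending an image to a vision model via Ollama's /api/chat.
    import { readFileSync } from "node:fs";

    const OLLAMA_URL = process.env.NEXT_PUBLIC_OLLAMA_URL ?? "http://localhost:11434";

    async function describeImage(path: string): Promise<string> {
      const imageBase64 = readFileSync(path).toString("base64");
      const res = await fetch(`${OLLAMA_URL}/api/chat`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llava", // any vision-capable model pulled into Ollama
          messages: [
            { role: "user", content: "What is in this picture?", images: [imageBase64] },
          ],
          stream: false,
        }),
      });
      const data = (await res.json()) as { message: { content: string } };
      return data.message.content;
    }

    describeImage("photo.jpg").then(console.log).catch(console.error);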

Tech Stack

This project is built with Next.js (React) for the interface, Node.js (18+) and npm for tooling, and a locally running Ollama server for model inference.

Troubleshooting

In case you encounter issues while setting up the Ollama LLM web interface, consider these troubleshooting tips:

  • Ensure that you have installed compatible versions of Node.js (18+) and npm; version mismatches are a common source of setup problems.
  • Verify that the OLLAMA_ORIGINS environment variable is set correctly if the interface is served from anywhere other than localhost.
  • If you encounter errors during setup, check the browser and terminal consoles; the error messages usually point to the specific issue.
  • Refer to the official documentation for additional help.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
