How to Set Up Your Self-Hosted AI Starter Kit

The **Self-hosted AI Starter Kit** is a powerful tool designed to jumpstart your local AI and low-code development environment. Curated by n8n, this open Docker Compose template integrates the components you need to build self-hosted AI workflows with ease. In this article, we walk you through the setup process and offer troubleshooting advice along the way.

What’s Included

With the Self-hosted AI Starter Kit, you’ll get access to a curated collection of tools, including:

  • Self-hosted n8n – A low-code platform boasting over 400 integrations.
  • Ollama – A cross-platform tool for running LLMs locally.
  • Qdrant – An open-source, high-performance vector store.
  • PostgreSQL – The reliable backbone for handling large datasets.

Installation Guide

For Nvidia GPU Users

To get started, clone the repository and launch the stack with the Nvidia GPU profile:

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile gpu-nvidia up
```

If you haven't used an Nvidia GPU with Docker before, follow the Ollama Docker instructions to configure GPU access first.
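
Before launching the kit, it can help to confirm that Docker can actually see your GPU. A minimal check, assuming the NVIDIA Container Toolkit is already installed and configured:

```bash
# If GPU passthrough works, this prints your GPU details via nvidia-smi
docker run --rm --gpus all ubuntu nvidia-smi
```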

For Mac (Apple Silicon) Users

If you’re using an M1 or newer Mac, you have two options due to restrictions on GPU access from inside Docker:

  1. Run the starter kit entirely on CPU (follow the "For Everyone Else" section below).
  2. Run Ollama directly on your Mac for better performance. Check the Ollama homepage for installation instructions, then execute:

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose up
```

After this, adjust the Ollama credentials in your n8n instance to use http://host.docker.internal:11434 as the host, so the containers can reach the Ollama server running on your Mac. A quick connectivity check follows below.
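
Before setting up the credentials, it's worth verifying that Ollama is running and already has the model the default workflow uses. A minimal sketch from your Mac's terminal, assuming Ollama is installed and started:

```bash
# Pull Llama 3.1, the model the kit's default workflow expects
ollama pull llama3.1

# List installed models via Ollama's REST API; a JSON response confirms the server is up
curl http://localhost:11434/api/tags
```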

For Everyone Else

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile cpu up
```
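
If you'd rather not keep a terminal attached, the same stack can run in the background using standard Compose flags:

```bash
# Start all CPU-profile services in detached mode
docker compose --profile cpu up -d

# Follow the combined logs of the running services
docker compose --profile cpu logs -f
```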

Quick Start and Usage

The heart of the Self-hosted AI Starter Kit is a Docker Compose file preconfigured to get you up and running:

  1. Open http://localhost:5678 in your browser to set up n8n — you’ll only need to do this once.
  2. Access the default workflow at http://localhost:5678/workflows/rOnR8PAY3u4RSwb3 and click “Test workflow” to run it.
  3. If it’s your first time, you may need to wait while Ollama downloads Llama 3.1; check the Docker console logs for status updates (a log-watching sketch follows this list).
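
To watch that first-run download from the host, you can tail the Ollama service's logs. A sketch, assuming the Compose service is simply named ollama (run docker compose ps to see the actual service names for your profile):

```bash
# List the kit's running services and their states
docker compose ps

# Follow only the Ollama container's output to watch the model download
docker compose logs -f ollama
```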

Whenever you want to access n8n, simply visit http://localhost:5678 in your browser.

Your Powerful AI Workflow

Using your n8n instance, you'll have access to over 400 integrations along with basic and advanced AI nodes. To keep everything local, use the Ollama node for your language model and Qdrant as your vector store.
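
If you want to confirm the local services directly, Qdrant exposes a REST API. A minimal sketch, assuming the kit maps Qdrant to its default port 6333 on localhost:

```bash
# List the collections currently stored in the local Qdrant instance
curl http://localhost:6333/collections
```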

Upgrading Your Installation

For Nvidia GPU Users

```bash
docker compose --profile gpu-nvidia pull
docker compose create
docker compose --profile gpu-nvidia up
```

For Mac (Apple Silicon) Users

```bash
docker compose pull
docker compose create
docker compose up
```

For Everyone Else

```bash
docker compose --profile cpu pull
docker compose create
docker compose --profile cpu up
```
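
In each case the sequence is the same: pull fetches the newer images, create recreates the containers from them, and up restarts the stack. To check which images your services ended up on, a standard Compose subcommand works:

```bash
# Show the image and tag each service container is using
docker compose images
```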

Troubleshooting and Tips

If you find yourself stuck at any point, here are some troubleshooting tips:

  • Ensure Docker is properly installed and running.
  • Check your internet connection; the first run downloads several large Docker images and the Llama 3.1 model.
  • Review the console logs for error messages from Docker (see the commands sketched below).
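
A few generic diagnostics that apply to any Compose project (the --profile flag matches whichever profile you launched with):

```bash
# Confirm the Docker daemon is reachable and healthy
docker info

# Show the state of the kit's containers
docker compose --profile cpu ps

# Print the last 100 log lines from every service and scan for errors
docker compose --profile cpu logs --tail=100
```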

For additional insights, updates, or collaboration opportunities on AI development projects, stay connected with fxis.ai.

At fxis.ai, we believe that advancements like these are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team continually explores new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
