How to Get Started with TurboPilot: A Deprecation Story

Jun 5, 2021 | Data Science

TurboPilot, a self-hosted code completion engine inspired by the fauxpilot project, was archived on September 30, 2023. With plenty of mature alternatives now available, it’s worth knowing where to direct your attention. In this guide, we’ll explore how to use TurboPilot while acknowledging its legacy and what it contributed to self-hosted AI development.

Why Use TurboPilot?

TurboPilot was a self-hosted Copilot clone built on the library behind llama.cpp, originally running the 6-billion-parameter Salesforce CodeGen model. While it may be deprecated, it remains a fascinating exploration in self-hosted AI coding assistants. Before jumping into installation, it’s wise to consider its limitations and the community’s shift toward other solutions.

Getting Started with TurboPilot

  • Choose Your Model: Before diving into installation, select which model to run. Two options are available: StableCode for low-RAM machines, and WizardCoder for more powerful hardware.
  • Download Pre-Processed Models: Depending on your RAM, you can download a pre-quantized model directly from Huggingface.
  • Set Up Your TurboPilot Server: Download and extract the latest binary from GitHub, then launch the server from the terminal with the command shown below.
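The steps above can be sketched as a small script. Note that the 8 GB RAM threshold and the model file names here are illustrative assumptions, not values from the TurboPilot docs:

```bash
#!/bin/sh
# Sketch: pick a model family from available RAM, then show the launch command.
# The 8 GB cutoff and file names are assumptions for illustration only.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
total_gb=$((total_kb / 1024 / 1024))

if [ "$total_gb" -lt 8 ]; then
    model_type="stablecode"           # smaller model for low-RAM machines
    model_file="stablecode-q4_0.bin"
else
    model_type="wizardcoder"          # larger model for more powerful hardware
    model_file="wizardcoder-q4_0.bin"
fi

echo "Selected model type: $model_type"
echo "Launch with: ./turbopilot -m $model_type -f ./models/$model_file"
```

Adjust the threshold and file names to match whichever pre-processed model you actually downloaded.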

Running TurboPilot

Running TurboPilot can be an adventure, similar to a chef preparing a gourmet meal. Each step involves gathering ingredients, preparing them correctly, and ensuring everything simmers perfectly. Here’s how:

```bash
./turbopilot -m starcoder -f ./models/santacoder-q4_0.bin
```

In this analogy, think of the command like a recipe where:

  • bash: The cooking method that brings everything together (the shell that runs the command).
  • ./turbopilot: The main ingredient, the core tool you want to use.
  • -m starcoder: The type of model you’re using (like choosing a specific spice).
  • -f ./models/santacoder-q4_0.bin: The path to your prepped model file (think of it as the pantry where you store your spices).
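Once the server is listening on port 18080, you can taste-test it directly. The endpoint path and JSON fields below follow the fauxpilot-style completions API that TurboPilot mimics; treat them as assumptions to verify against your release:

```bash
#!/bin/sh
# Sketch: build a completion request for a running TurboPilot server.
# Endpoint path and payload fields mirror the fauxpilot/OpenAI-style API
# and should be checked against your TurboPilot version.
URL="http://localhost:18080/v1/engines/codegen/completions"
PAYLOAD='{"prompt": "def fibonacci(n):", "max_tokens": 64}'

# Uncomment to send the request once the server is up:
# curl -s -H "Content-Type: application/json" -d "$PAYLOAD" "$URL"

echo "POST $URL"
echo "$PAYLOAD"
```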

Using Docker for TurboPilot

If you prefer a less manual approach, TurboPilot can also be run using Docker. It’s like having a pre-made meal kit where all ingredients are neatly packaged. To run it via Docker, follow these steps:

```bash
docker run --rm -it \
  -v ./models:/models \
  -e THREADS=6 \
  -e MODEL_TYPE=starcoder \
  -e MODEL=/models/santacoder-q4_0.bin \
  -p 18080:18080 \
  ghcr.io/ravenscroftj/turbopilot:latest
```
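If you run this often, wrapping the invocation in a small script keeps all the knobs in one place. This is just a sketch; the defaults are the thread count and model names used elsewhere in this guide:

```bash
#!/bin/sh
# Sketch: parameterized wrapper around the docker invocation above.
# Override any value via environment variables before calling the script.
THREADS="${THREADS:-6}"
MODEL_TYPE="${MODEL_TYPE:-starcoder}"
MODEL_FILE="${MODEL_FILE:-santacoder-q4_0.bin}"

CMD="docker run --rm -it -v ./models:/models \
 -e THREADS=$THREADS -e MODEL_TYPE=$MODEL_TYPE \
 -e MODEL=/models/$MODEL_FILE -p 18080:18080 \
 ghcr.io/ravenscroftj/turbopilot:latest"

echo "$CMD"
# Uncomment to actually launch the container:
# eval "$CMD"
```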

Troubleshooting Ideas

While using TurboPilot, you might encounter a few hiccups along your culinary coding journey:

  • Server Doesn’t Start: Make sure the binary is fully extracted and executable, and that you’re running it from the right directory.
  • Slow Response: Auto-completion is slow in this version; CPU-bound inference can take several seconds per request, so patience is key.
  • Model Compatibility: Double-check that you downloaded the correct model type for your RAM specifications.
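For the first of these hiccups, a few preflight checks (using the paths from earlier in this guide) can save some head-scratching. This is a sketch, and the messages are only suggestions:

```bash
#!/bin/sh
# Sketch: preflight checks before launching the server.
BIN=./turbopilot
MODEL=./models/santacoder-q4_0.bin

[ -f "$BIN" ]   || echo "binary not found: re-extract the release archive here"
[ -x "$BIN" ]   || echo "binary not executable: run chmod +x $BIN"
[ -f "$MODEL" ] || echo "model file missing: re-download the model for your RAM tier"
echo "preflight checks complete"
```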

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Final Thoughts

While TurboPilot may be deprecated, it still provides a glimpse into how self-hosting and AI can revolutionize coding. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
