Easily Build LLM-Powered Applications with Tanuki

Mar 11, 2024 | Educational

In the fast-paced world of programming, the ability to utilize large language models (LLMs) for efficient, powerful applications is a game-changer. This is where Tanuki steps in, allowing developers to create LLM-powered functions in Python seamlessly. In this blog, we’ll explore how to harness the potential of Tanuki, along with troubleshooting tips and practical demonstrations.

Introduction

Tanuki provides a streamlined method for invoking LLMs in a way that’s not only straightforward but also designed to enhance app performance over time. By replacing traditional function implementations with Tanuki’s LLM-powered alternatives, you can ensure reliability and efficiency while minimizing costs as demand increases.

Features

  • Easy Integration: Decorate your function with @tanuki.patch for seamless LLM enhancement.
  • Type Awareness: Ensure LLM outputs conform to specified types, reducing bugs.
  • Aligned Outputs: Capture expected behavior using simple assert statements.
  • Cost Effective: Reduce operational costs and latency with increased usage.
  • Popular Model Support: Utilize a range of prominent models like OpenAI and Together AI.
  • RAG Support: Manage outputs for efficient downstream applications.
  • Batteries Included: Minimal dependencies mean quick implementation.

Installation and Getting Started

Installation

To get started with Tanuki, install it using pip:

pip install tanuki.py

Alternatively, if you are using Poetry, you can install with:

poetry add tanuki.py

Make sure to set your OpenAI key:

export OPENAI_API_KEY=sk-...

Getting Started

Here’s how to create your first Tanuki function:

  1. Define a function with @tanuki.patch, including type hints.
  2. (Optional) Create a function with @tanuki.align to describe expected behavior.
  3. (Optional) Configure your preferred model, defaulting to GPT-4.

Here’s an example of a simple sentiment classifier:

import tanuki
from typing import Literal, Optional

@tanuki.patch
def classify_sentiment(msg: str) -> Optional[Literal["Good", "Bad"]]:
    """Classifies a message from the user into Good, Bad or None."""

@tanuki.align
def align_classify_sentiment():
    assert classify_sentiment("I love you") == "Good"
    assert classify_sentiment("I hate you") == "Bad"
    assert not classify_sentiment("People from Phoenix are called Phoenicians")

if __name__ == "__main__":
    align_classify_sentiment()
    print(classify_sentiment("I like you"))  # Expected output: Good

How It Works

To illustrate how Tanuki works, picture a chef preparing dishes. Each dish (an LLM function) is made from various ingredients (parameters) and yields a specific meal (output). The more the chef practices — that is, the more the function is used — the faster and cheaper the meals become: every serving adds a small portion of experience, and the recipe is refined until it reaches high efficiency without sacrificing taste.
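The analogy above can be made concrete with a simplified, hypothetical sketch (this is an illustration of the idea, not Tanuki's actual internals): each call to a patched function is logged as an input/output pair, and once enough examples accumulate, a cheap "student" model can take over from the expensive "teacher".

```python
# Conceptual sketch only (not Tanuki's real mechanism): calls are logged,
# and after enough examples the function "graduates" to a cheaper backend.
TRAINING_THRESHOLD = 3

class PatchedFunction:
    def __init__(self, teacher, student):
        self.teacher = teacher      # stands in for a large, costly model
        self.student = student      # stands in for a distilled, cheap model
        self.examples = []          # logged (input, output) pairs

    def __call__(self, x):
        if len(self.examples) >= TRAINING_THRESHOLD:
            return self.student(x)  # enough data gathered: use the cheap path
        y = self.teacher(x)
        self.examples.append((x, y))  # log the call for future fine-tuning
        return y

fn = PatchedFunction(lambda s: s.upper(), lambda s: s.upper())
for word in ["good", "bad", "okay"]:
    fn(word)                        # early calls hit the teacher and are logged
print(len(fn.examples))  # 3
print(fn("fine"))        # now served by the student: FINE
```

In Tanuki proper, the "graduation" step is distillation: the logged examples fine-tune a smaller model, which is why cost and latency fall as usage grows.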

Typed Outputs

By defining your inputs and outputs with clear types, you create a safe workflow that guards against unpredicted outputs. This is like ensuring a package is wrapped securely before shipping it, providing assurance it arrives intact.

import tanuki
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List
from pydantic import Field

@dataclass
class ActionItem:
    goal: str = Field(description="What task must be completed")
    deadline: datetime = Field(description="The date the goal needs to be achieved")

@tanuki.patch
def action_items(input: str) -> List[ActionItem]:
    """Generate a list of Action Items"""

@tanuki.align
def align_action_items():
    goal = "Can you please get the presentation to me by Tuesday?"
    next_tuesday = (datetime.now() + timedelta(days=(1 - datetime.now().weekday() + 7) % 7)).replace(hour=0, minute=0, second=0, microsecond=0)
    assert action_items(goal) == [ActionItem(goal="Prepare the presentation", deadline=next_tuesday)]
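The `next_tuesday` expression in the align function above packs all of the date arithmetic into one line. Unpacked into a small helper (the name `next_weekday` is ours, not part of Tanuki), the logic is:

```python
from datetime import datetime, timedelta

def next_weekday(target: int, today: datetime) -> datetime:
    """Next occurrence of `target` weekday (Mon=0 ... Sun=6) at midnight.
    Note: if `today` already falls on `target`, today itself is returned."""
    days_ahead = (target - today.weekday() + 7) % 7
    return (today + timedelta(days=days_ahead)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )

# From Monday 2024-03-11, the next Tuesday (weekday 1) is 2024-03-12.
print(next_weekday(1, datetime(2024, 3, 11)))  # 2024-03-12 00:00:00
```

One caveat worth noting: because the modulo lands on zero when the weekdays match, a message sent on a Tuesday resolves to that same day rather than the following week — decide whether that is the behavior you want before aligning on it.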

Test-Driven Alignment

Tanuki embraces a test-first philosophy, aligning actual output with expected behaviors. This is akin to constructing a bridge: you lay down the foundation (asserts) ensuring the structure will hold (function output). This alignment fosters reliability.

# Patched function shown for context; the exact signature can be adapted.
@tanuki.patch
def score_sentiment(msg: str) -> Optional[Annotated[int, Field(ge=0, le=10)]]:
    """Scores the sentiment of the message between 0 (bad) and 10 (good)."""

@tanuki.align
def align_score_sentiment():
    assert score_sentiment("I love you") == 10
    assert score_sentiment("I hate you") == 0
    assert score_sentiment("You're okay, I guess") == 5

Scaling and Finetuning

As you apply Tanuki, the data gained from execution builds a reservoir of knowledge, allowing efficiency improvements. It’s similar to a video game character that gets stronger after leveling up through experience.

Frequently Asked Questions

What is Tanuki in plain words?

It’s a straightforward way to utilize LLMs in Python while ensuring output consistency and decreasing execution costs with increased function calls.

How do I align my functions?

To align functions, refer back to the How It Works and Test-Driven Alignment sections for thorough instructions.

Why would I need typed responses?

Typed outputs prevent erratic behavior by implementing rules that the LLMs must adhere to, ensuring stable application behavior.
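As a minimal illustration of the guard typed outputs provide, here is the same idea in plain Pydantic (v2), independent of Tanuki — the model and field names below are invented for the example:

```python
from pydantic import BaseModel, Field, ValidationError

class Task(BaseModel):
    goal: str = Field(description="What must be done")
    priority: int = Field(ge=1, le=5, description="1 (low) to 5 (urgent)")

# A well-formed response passes validation.
ok = Task.model_validate({"goal": "Ship the report", "priority": 3})
print(ok.priority)  # 3

# A malformed response is rejected instead of silently corrupting state.
try:
    Task.model_validate({"goal": "Ship the report", "priority": "urgent"})
except ValidationError:
    print("rejected: priority must be an integer between 1 and 5")
```

Tanuki applies this kind of schema check to the LLM's output of every patched function, so downstream code only ever sees values of the declared type.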

Troubleshooting

If you encounter issues such as unexpected output or errors in function alignment, here are some tips to resolve them:

  • Revisit your align functions and ensure that your assert statements reflect the anticipated outcomes.
  • Check if your model is set correctly by inspecting the @tanuki.patch configurations.
  • Clear your cache to eliminate any outdated alignment data.
  • For persistent issues, consult the documentation or reach out to the community.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

Tanuki opens a pathway to integrate powerful LLM functions into your Python projects effortlessly. With its built-in capabilities for fine-tuning and cost-effectiveness, it’s a must-have in your development toolkit. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
