Building applications with Large Language Models (LLMs) is exciting and practical. With tools like Langchain, developing apps leveraging LLMs becomes straightforward. However, as developers, it’s crucial to monitor how our apps operate, ensuring that our use of abstractions does not obscure critical details.
Understanding Prefect
Prefect is a robust framework for building, running, and observing event-driven workflows. It deploys across diverse runtime environments, from AWS and Google Cloud to Kubernetes, which makes it a natural fit for monitoring LLM-based applications.
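To make that concrete, here is a minimal, Langchain-free sketch of a Prefect flow and task; the function names are illustrative, but @flow and @task are Prefect's standard decorators:

from prefect import flow, task

@task
def say_hello(name: str) -> str:
    # A task is a unit of work that Prefect tracks inside a flow run.
    return f"Hello, {name}!"

@flow
def greeting_flow(name: str = "world"):
    # Each call to this function creates a flow run visible in the Prefect UI.
    print(say_hello(name))

greeting_flow("Prefect")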
Key Features of Langchain-Prefect
- RecordLLMCalls: a ContextDecorator that tracks LLM calls made with Langchain as Prefect flows (see the sketch below).
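Because RecordLLMCalls is a ContextDecorator, it should also be usable as a decorator on your own functions, not only as a context manager. The sketch below assumes standard ContextDecorator behavior rather than a documented example, and the function name is made up:

from langchain.llms import OpenAI
from langchain_prefect.plugins import RecordLLMCalls

@RecordLLMCalls()
def suggest_company_name() -> str:
    # LLM calls made inside this function are tracked as Prefect flow runs.
    llm = OpenAI(temperature=0.9)
    return llm("What would be a good name for a company that makes colorful socks?")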
Step-by-Step Guide to Track LLM Calls
1. Call an LLM and Track the Invocation
Here’s how to invoke an LLM with Langchain while tracking the call with Prefect:
from langchain.llms import OpenAI
from langchain_prefect.plugins import RecordLLMCalls

with RecordLLMCalls():
    llm = OpenAI(temperature=0.9)
    text = "What would be a good company name for a company that makes colorful socks?"
    llm(text)
In this code snippet, calling the LLM will create a flow run to track the invocation.
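If you would rather inspect those flow runs programmatically than in the UI, a sketch along these lines should work with Prefect 2’s Python client (the exact import path and available fields can vary between Prefect versions):

import asyncio
from prefect.client.orchestration import get_client

async def show_recent_flow_runs(limit: int = 5):
    # Ask the Prefect API for a handful of flow runs and print their states.
    async with get_client() as client:
        runs = await client.read_flow_runs(limit=limit)
        for run in runs:
            print(run.name, run.state_type)

asyncio.run(show_recent_flow_runs())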
2. Run Multiple LLM Calls Through a Prefect Flow
Next, let’s run several LLM calls through a Langchain agent inside a Prefect flow, so that each call is tracked as a subflow:
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI
from langchain_prefect.plugins import RecordLLMCalls
from prefect import flow

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm)

@flow
def my_flow():
    agent.run(
        "How old is the current Dalai Lama? "
        "What is his age divided by 2 (rounded to the nearest integer)?"
    )

# Tag the resulting flow runs with the model name so they are easy to find in the UI.
with RecordLLMCalls(tags={llm.model_name}):
    my_flow()
Think of this scenario like a chef preparing multiple dishes (LLM calls) using different ingredients (tools) in a kitchen (Prefect flow). The chef follows a recipe (agent) to ensure every dish is cooked perfectly, and the tracking ensures you know what has been prepared and how.
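If subflows are new to you: in Prefect, calling one flow from inside another produces a nested subflow run, which is how each tracked LLM call ends up grouped under my_flow. Here is a minimal, Langchain-free sketch with illustrative names:

from prefect import flow

@flow
def prepare_dish(dish: str) -> str:
    # Called from another flow, each invocation becomes a subflow run.
    return f"{dish} is ready"

@flow
def dinner_service():
    # The parent flow run shows both subflow runs nested beneath it.
    print(prepare_dish("soup"))
    print(prepare_dish("salad"))

dinner_service()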
Accessing the Prefect UI
The simplest way to interact with the Prefect UI is through Prefect Cloud. However, if you prefer to run the dashboard locally, just execute prefect server start in your terminal.
Installation Instructions
To install Langchain-Prefect, ensure you have Python 3.10 or higher, then run:
pip install langchain-prefect
Troubleshooting
If you encounter any hiccups while using Langchain-Prefect, try the following steps:
- Make sure that your environment is correctly set up with Python 3.10 or higher.
- Check your network connection if the Prefect UI is not loading.
- Review the repository’s issues page for similar problems faced by others.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Contributing to the Project
Want to contribute? Here’s a quick guide:
- Fork the repository.
- Clone the forked repository.
- Install the necessary dependencies: pip install -e .[dev]
- Make your changes, add tests, and maintain documentation.
- Submit a pull request.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

