How to Integrate OpenLIT for Effective LLM Observability

Feb 2, 2021 | Data Science

OpenLIT is your go-to solution for insightful monitoring of Large Language Model (LLM) applications in production environments. Because it is OpenTelemetry-native, OpenLIT monitors both self-hosted and third-party LLMs with minimal setup. In this blog post, we’ll take you through the steps to get OpenLIT up and running so you can efficiently track the performance and reliability of your LLM applications.

Understanding the Basics

Think of OpenLIT as a sophisticated fitness tracker, but for your LLM applications. Just like a fitness tracker keeps tabs on your heart rate, calories burned, and steps taken, OpenLIT collects vital telemetry data about your LLMs – including input, output, and GPU performance metrics. This allows you to observe how well your application is doing and how resources are being utilized, paving the way for optimization and improved performance.
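For example, GPU telemetry is opt-in in the SDK. Here is a minimal sketch, assuming a collect_gpu_stats flag (verify the exact name against the OpenLIT docs):

import openlit

# collect_gpu_stats is an assumed flag name: it enables periodic GPU
# utilization and memory metrics alongside the default LLM traces.
openlit.init(collect_gpu_stats=True)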

Features of OpenLIT

  • Advanced Monitoring: Automatically generates traces and metrics for a complete view of your LLM and VectorDB performance.
  • Cost Tracking: Customize cost estimation using a JSON file tailored for specific models (see the sketch after this list).
  • Vendor-Neutral SDKs: Built to integrate effortlessly with your projects, since they adhere to OpenTelemetry standards.
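As a rough sketch of the cost-tracking feature, the example below writes a custom pricing file and passes it to openlit.init(). Both the pricing_json parameter and the JSON schema shown are assumptions here; compare them against the default pricing.json in the OpenLIT repository before relying on the estimates.

import json
import openlit

# Assumed schema: model names mapped to USD prices per 1K tokens.
custom_pricing = {
    "chat": {
        "gpt-4o-mini": {"promptPrice": 0.00015, "completionPrice": 0.0006}
    }
}
with open("custom_pricing.json", "w") as f:
    json.dump(custom_pricing, f)

# pricing_json (assumed parameter name) accepts a file path or URL.
openlit.init(pricing_json="custom_pricing.json")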

Getting Started with OpenLIT

Ready to dive in? Follow these steps to set up OpenLIT and start collecting observability data for your LLM applications.

Step 1: Deploy OpenLIT Stack

  1. Clone the OpenLIT Repository:
    git clone git@github.com:openlit/openlit.git
  2. Change into the Repository Directory:
    cd openlit
  3. Start Docker Compose:
    docker compose up -d

Step 2: Install OpenLIT SDK

Open your command line or terminal and run:

pip install openlit

Step 3: Initialize OpenLIT in Your Application

Integrate OpenLIT into your application with just two lines of code:

import openlit
openlit.init()

To set the telemetry data endpoint, you can use the otlp_endpoint parameter or set the environment variable OTEL_EXPORTER_OTLP_ENDPOINT according to the OpenTelemetry documentation.
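For example, assuming the bundled collector listens on port 4318 (the OTLP/HTTP default; confirm against the repository’s docker-compose.yml), either approach below works:

import os
import openlit

# Option 1: pass the endpoint directly to the SDK.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# Option 2: set the standard OpenTelemetry environment variable
# before calling init().
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://127.0.0.1:4318"
openlit.init()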

Step 4: Visualize and Optimize!

With observability data now being collected, navigate to the OpenLIT UI in your browser at http://127.0.0.1:3000 and start exploring the performance statistics of your LLM applications. Log in with:

  • Email: user@openlit.io
  • Password: openlituser

Troubleshooting

If you encounter any issues, here are some troubleshooting steps to consider:

  • Ensure that the Docker services are running by checking the output of docker compose ps.
  • Verify that the correct endpoint is set for telemetry data. If you’re unsure, export telemetry to the console during development for simpler debugging (see the sketch after this list).
  • If you’re not seeing any data in the OpenLIT UI, check for network configurations or firewalls that might be blocking the necessary connections.
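As a minimal end-to-end check, the sketch below initializes OpenLIT without an endpoint, so telemetry should appear in the terminal (assuming the SDK falls back to a console exporter when no endpoint is configured). The OpenAI client and model name are illustrative assumptions:

import openlit
from openai import OpenAI  # assumes the openai package and an API key are available

# No otlp_endpoint configured: telemetry is emitted to the console (assumed
# SDK fallback), separating instrumentation issues from network issues.
openlit.init()

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)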

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
