A trace scanner for LLM-based AI agents.
The Invariant Analyzer is an open-source scanner that helps developers find bugs and security issues in AI agents. It scans an agent's execution traces to identify bugs (e.g., looping behavior) and threats (e.g., data leaks, prompt injections, and unsafe code execution), so you can fix reliability and security problems quickly.
## Use Cases
* **Debugging AI agents** by scanning logs for failure patterns and quickly finding relevant locations.
* **Scanning of agent traces** for security violations and data leaks, including tool use and data flow.
* **Real-Time Monitoring of AI agents** to prevent security issues and data breaches during runtime.
## Understanding Trace Analysis
Imagine you are the detective in a classic whodunit novel, sifting through pages and pages of evidence (aka logs) to find the elusive culprit (bugs or vulnerabilities). Each log entry represents a clue, but without organization, you could waste hours (or even days) looking for those critical pieces of information. The Invariant Analyzer acts as your trusty magnifying glass, filtering out the essential clues related to your investigation while discarding the noise. It helps you streamline the debugging process, significantly reducing the time spent on resolving issues.
## Why Debugging AI Agents Matters
Debugging AI agents manually involves scrolling through extensive logs to pinpoint error cases, which is tedious and prone to human error. The Invariant Analyzer removes this hassle by filtering relevant traces and extracting only significant parts using high-level semantic descriptions.
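The kind of semantic filtering described above can be approximated in plain Python. The sketch below is illustrative only, not the analyzer's actual implementation; the `detect_loops` helper and the trace shape (a list of tool-call dicts) are assumptions. It flags looping behavior by finding runs of identical tool calls in a trace:

```python
# Illustrative sketch only -- not the Invariant Analyzer's internals.
# Assumes a trace is a list of dicts with "tool" and "arguments" keys.

def detect_loops(trace, threshold=3):
    """Flag (start, length) spans where the same tool call repeats
    at least `threshold` times in a row."""
    flagged = []
    run_start, run_len = 0, 1
    for i in range(1, len(trace) + 1):
        same = i < len(trace) and trace[i] == trace[i - 1]
        if same:
            run_len += 1
        else:
            if run_len >= threshold:
                flagged.append((run_start, run_len))
            run_start, run_len = i, 1
    return flagged

trace = [
    {"tool": "search", "arguments": {"q": "weather"}},
    {"tool": "search", "arguments": {"q": "weather"}},
    {"tool": "search", "arguments": {"q": "weather"}},
    {"tool": "send_email", "arguments": {"to": "Peter"}},
]
print(detect_loops(trace))  # -> [(0, 3)]
```

Instead of eyeballing hundreds of log lines, a check like this surfaces only the suspicious span, which is the core idea behind trace-level debugging.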
## Why Agent Security Matters
As AI agents become more prevalent, they present new types of security risks. Any LLM-based system that performs critical write operations can face model failures, prompt injections, and data breaches, leading to severe consequences. For example, an agent designed to browse the web can be compromised through indirect prompt-injection attacks. The Invariant Analyzer helps identify such vulnerabilities, leveraging advanced contextual understanding of an agent’s operations.
## Features
- A library of built-in checkers for detecting sensitive data, prompt injections, moderation violations, and more.
- An expressive policy language for defining security policies and constraints.
- Data flow analysis for contextual understanding of agent behavior, allowing for fine-grained security checks.
- Real-time monitoring and analysis of AI agents and other tool-calling LLM applications.
- Extensible architecture for adding custom checkers, predicates, and data types.
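As a rough illustration of what a custom checker might look like, here is a regex-based detector for leaked email addresses in message content. This is a generic sketch, not Invariant's actual extension API; the `EmailLeakChecker` class and its `check` method are assumptions made for the example:

```python
import re

# Generic sketch of a custom checker -- NOT Invariant's real extension API.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class EmailLeakChecker:
    """Flags messages whose content contains an email address."""

    def check(self, message):
        # Return all email-like substrings found in the message content.
        content = message.get("content") or ""
        return EMAIL_RE.findall(content)

checker = EmailLeakChecker()
msg = {"role": "assistant", "content": "Forwarding to alice@example.com now."}
print(checker.check(msg))  # -> ['alice@example.com']
```

A production checker would be registered with the analyzer and combined with data-flow context, but the shape is the same: inspect a message, return the findings.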
## Getting Started

To start using the Invariant Analyzer, install it with pip:

```bash
pip install git+https://github.com/invariantlabs-ai/invariant.git
```
Then, you can import and use the analyzer in your Python code:

```python
from invariant import Policy

# define policy
policy = Policy.from_string(
"""
raise "must not send emails to anyone but Peter after seeing the inbox" if:
    (call: ToolCall) -> (call2: ToolCall)
    call is tool:get_inbox
    call2 is tool:send_email({
        to: "^(?!Peter$).*$"
    })
""")

# analyze message trace
policy.analyze(messages)
```
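The `messages` variable above is the agent trace being analyzed. Invariant's documentation describes an OpenAI-style message format; the trace below is only an illustrative guess at that shape, with hypothetical content and tool-call IDs:

```python
# Hypothetical trace in an OpenAI-style message format (shape is an assumption).
messages = [
    {"role": "user", "content": "Summarize my inbox."},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "1", "type": "function",
         "function": {"name": "get_inbox", "arguments": {}}},
    ]},
    {"role": "tool", "tool_call_id": "1",
     "content": "Peter: lunch tomorrow? Alice: project update attached."},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "2", "type": "function",
         "function": {"name": "send_email",
                      "arguments": {"to": "Alice", "body": "Summary attached."}}},
    ]},
]
```

A policy like the one above would flag the final call: after `get_inbox`, mail goes to "Alice", while the rule only permits "Peter".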
## Troubleshooting

If you run into trouble while using the Invariant Analyzer, consider the following:
- Ensure you are using the correct format for your message traces; refer to the Trace Format section for guidance.
- Check if you have installed all necessary dependencies listed in the documentation.
- If a policy violation appears, review the rules you have set to ensure they align with your intended security measures.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
## Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

