How to Use Garak: Your Go-To LLM Vulnerability Scanner

Feb 18, 2021 | Educational

In the ever-evolving world of Generative AI, ensuring the robustness of Large Language Models (LLMs) is paramount. Garak is a tool that acts like a security guard for your LLM, methodically checking it for vulnerabilities. It works much like Nmap, but for LLMs: it probes for issues such as hallucination, misinformation, and prompt injection.

Getting Started with Garak

Ready to take your AI security to the next level? Here’s how to get started with Garak.

Step 1: Install Garak

Garak can be installed easily via pip. You can choose between the standard release and the development version.

  • Standard Install: to get the latest stable version, run:

    python -m pip install -U garak

  • Development Install: to grab the freshest updates directly from GitHub, run:

    python -m pip install -U git+https://github.com/leondz/garak.git@main

Step 2: Running a Scan

The basic command to run a scan is as follows:

garak [options]

By default, Garak runs every available probe against the specified model. You can list the available probes with:

garak --list_probes

To run only a subset, pass --probes with a probe or probe-family name, for example --probes encoding.

Step 3: Specifying Your Model

You can specify which LLM to analyze by using the command options:

garak --model_type huggingface --model_name <model_name>

Replace <model_name> with the name of a model on the Hugging Face Hub, for example: RWKV/rwkv-4-169m-pile.

Understanding the Probe Results

Once Garak runs the scans, it prints out a report highlighting the results of each probe. If any probe detects a vulnerability, it will be marked with “FAIL.”
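Garak also writes its findings to a report file, which you can post-process instead of reading the console output. The sketch below shows how you might pull failing probes out of a JSONL-style report; the field names used here ("entry_type", "probe", "passed", "total") are assumptions for illustration and may not match the schema of your garak version, so the script builds its own sample data rather than reading a real report.

```python
import json
import os
import tempfile

# Synthetic report entries standing in for a real garak report file.
# The schema here is hypothetical, chosen only to illustrate the idea.
sample = [
    {"entry_type": "eval", "probe": "promptinject", "passed": 7, "total": 10},
    {"entry_type": "eval", "probe": "encoding", "passed": 10, "total": 10},
]

with tempfile.NamedTemporaryFile(
    "w", suffix=".report.jsonl", delete=False
) as f:
    for entry in sample:
        f.write(json.dumps(entry) + "\n")
    path = f.name

# Collect every probe where at least one attempt failed.
failures = []
with open(path) as fh:
    for line in fh:
        entry = json.loads(line)
        if entry.get("entry_type") == "eval" and entry["passed"] < entry["total"]:
            failures.append(entry["probe"])

os.unlink(path)
print("Probes with failures:", failures)
```

Adapt the field names to whatever your installed garak version actually emits before using this on real reports.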

Analogous Explanation of Internal Mechanism

Think of Garak as a diligent detective inspecting the rooms of a house (the LLM) for hidden dangers. Each room represents a different area of risk or potential failure mode (hallucinations, prompt injections, and so on). The detective (Garak) methodically follows leads (probes), testing each suspect (vulnerability) to see whether it can trick the model into revealing sensitive information or generating harmful output. If any dangers are found, the detective issues a warning so the homeowner (the developer) can secure the property (the LLM).

Troubleshooting Common Issues

If you encounter any issues while using Garak, here are some troubleshooting tips:

  • Problem: Installation Failure
    Ensure that your Python version is compatible. Garak requires a recent interpreter; Python 3.10 or newer is a safe baseline.
  • Problem: No Probes Available
    If no probes are found, check that you have set the right model type and name.
  • Problem: Getting Inaccurate Results
    Consider the source of your LLM. Probe outcomes depend on the model and its configuration, so poorly documented or unusual models may produce noisy or inconsistent results.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations. Garak equips you with the tools needed to ensure that your LLMs are secure, robust, and resilient to manipulation.
