In the vast cosmos of programming, where bugs lurk like asteroids waiting to collide with our code, a new framework has emerged to shine a light on these hidden threats. Designed to generate fuzz targets for real-world C/C++, Java, and Python projects using various Large Language Models (LLMs), it serves as a beacon in the field of software security. By combining the power of fuzzing with sophisticated language models, it benchmarks the models and uncovers vulnerabilities through the OSS-Fuzz platform.
What is Fuzzing?
Fuzzing can be likened to sending a barrage of random inputs to a computer program to see where things break. Imagine throwing pebbles into a pond to observe the ripples; similarly, fuzzing tries to uncover how a system reacts to unexpected inputs. Our framework operates on this core principle and enhances target generation, powered by cutting-edge LLMs.
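The principle can be sketched in a few lines of Python. This is a toy illustration of fuzzing itself, not the framework's implementation: `parse_header` is a made-up function with a deliberately planted bug, and the loop simply hurls random byte strings at it.

```python
import random

def parse_header(data: bytes) -> int:
    """A made-up parser with a hidden bug: it indexes the input
    without checking its length, so empty input crashes it."""
    if data[0] == 0x7F:
        return 1
    return len(data)

def fuzz(target, iterations: int = 10_000, seed: int = 0):
    """Throw random inputs at `target` and collect every crashing one."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        size = rng.randrange(0, 8)
        data = bytes(rng.randrange(256) for _ in range(size))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_header)
print(f"found {len(crashes)} crashing inputs")
```

Even this naive loop finds the empty-input crash quickly; real fuzzers such as libFuzzer add coverage feedback and input mutation on top of the same basic idea.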
Supported Models
- Vertex AI Code Bison
- Vertex AI Code Bison 32k
- Gemini Pro
- Gemini Ultra
- Gemini Experimental
- Gemini 1.5
- OpenAI GPT-3.5-turbo
- OpenAI GPT-4
- OpenAI GPT-4o
- OpenAI GPT-3.5-turbo (Azure)
- OpenAI GPT-4 (Azure)
- OpenAI GPT-4o (Azure)
Evaluation Metrics
The generated fuzz targets are evaluated against four key metrics, measured against the most recent data from projects running in production:
- Compilability: Can the code compile without errors?
- Runtime Crashes: Are there any crashes during execution?
- Runtime Coverage: How much of the code is hit during fuzzing?
- Runtime Line Coverage Diff: How many lines does a generated target reach that the existing human-written targets do not?
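The last metric can be pictured as a set difference: lines reached by the generated target minus lines already reached by existing targets. The sketch below is an illustrative simplification; representing coverage as `(file, line)` pairs is an assumption, and in practice the data would come from a coverage tool such as llvm-cov.

```python
# Coverage reports reduced to sets of (file, line) pairs — a simplified
# stand-in for what a coverage tool would actually report.
existing_coverage = {("parser.c", 10), ("parser.c", 11), ("util.c", 3)}
generated_coverage = {("parser.c", 10), ("parser.c", 12),
                      ("util.c", 3), ("util.c", 4)}

# Lines the generated target reaches that no existing target does.
new_lines = generated_coverage - existing_coverage

# Fraction of all covered lines that are newly reached.
diff_ratio = len(new_lines) / len(existing_coverage | generated_coverage)

print(sorted(new_lines))
print(f"{diff_ratio:.0%} of covered lines are newly reached")
```

A positive diff means the LLM-generated target exercises code paths the human-written harnesses never touched.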
How To Use This Framework
To begin using this framework, follow these steps:
- Head over to our detailed usage guide for comprehensive instructions.
- Set up the desired environment, ensuring that all dependencies are in place.
- Generate fuzz targets using the designated commands specified in the guide.
- Evaluate the generated fuzz targets using the provided benchmarks.
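At a high level, the steps above form a generate-then-evaluate loop, sketched below. The function names and the target template are hypothetical placeholders, not the framework's actual API; the real commands and file layout are documented in the usage guide.

```python
import shutil
import subprocess
import tempfile

# A minimal libFuzzer-style harness skeleton, used here only as a
# stand-in for what an LLM might produce.
FUZZ_TARGET_TEMPLATE = """\
#include <stdint.h>
#include <stddef.h>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  // A real target would call the API under test with the fuzzer bytes.
  return 0;
}
"""

def generate_fuzz_target(api_signature: str) -> str:
    # Hypothetical stand-in for the LLM call: a real run prompts the
    # chosen model with project context and the API under test.
    return FUZZ_TARGET_TEMPLATE

def looks_like_valid_target(source: str) -> bool:
    # Cheap sanity check before the expensive compile step: generated
    # code must define the libFuzzer entry point.
    return "LLVMFuzzerTestOneInput" in source

def compiles(source: str) -> bool:
    """Compilability check; skipped gracefully when clang++ is absent."""
    clang = shutil.which("clang++")
    if clang is None:
        return True  # cannot verify locally
    with tempfile.NamedTemporaryFile("w", suffix=".cc") as f:
        f.write(source)
        f.flush()
        result = subprocess.run([clang, "-fsyntax-only", f.name],
                                capture_output=True)
        return result.returncode == 0

target = generate_fuzz_target("int parse(const uint8_t *, size_t)")
print("valid entry point:", looks_like_valid_target(target))
print("compiles:", compiles(target))
```

The framework automates exactly this cycle at scale, then runs the surviving targets to collect the crash and coverage metrics described above.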
Real-World Case Study
In a recent experiment conducted on January 31, 2024, the framework evaluated 1300+ benchmarks across 297 open-source projects. The results were impressive:
- Successfully generated valid fuzz targets for 160 C/C++ projects.
- Achieved a maximum line coverage increase of 29% compared to human-written targets.
Bugs Discovered
Through the automatic target generation capability of this framework, a total of 24 new bugs have been reported. These vulnerabilities highlight the framework’s effectiveness:
| Project | Bug | LLM |
|---|---|---|
| cJSON | OOB read | Vertex AI |
Troubleshooting
If you encounter issues while using this framework, here are some troubleshooting steps:
- Check if all dependencies are correctly installed and up-to-date.
- Review the logs for any specific error messages to ascertain the source of any failures.
- Ensure that the models and benchmarks you have chosen are compatible with your project.
- If you have further inquiries, you can reach out via email at oss-fuzz-team@google.com.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
In summary, this framework represents a leap forward in fuzz target generation, tapping into the dynamic capabilities of modern LLMs. By automating the detection of vulnerabilities, it not only unveils hidden bugs but also increases overall code coverage. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

