Deepchecks – Continuous Validation for AI & ML: Testing, CI Monitoring

Jul 27, 2023 | Data Science

Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production. Think of it as a robust educational system that helps ensure your students (the models) are ready for the final exam (production). The more they are tested and monitored during their studies, the more equipped they’ll be to thrive in real-world situations.

Components of Deepchecks

Deepchecks includes:

  • Deepchecks Testing – Offering built-in and custom checks for Tabular, NLP, and CV validation.
  • CI & Testing Management – Lets you collaborate over test results and iterate efficiently until models are production-ready.
  • Deepchecks Monitoring – Tracks and validates your deployed model’s behavior when in production.

This repository is the core of the project, since all components are built on the deepchecks framework. If you want to see the monitoring code, check out the deepchecks monitoring repository.

Getting Started

Installation

To install Deepchecks Testing (and CI), run:

pip install deepchecks -U --user

For NLP models, replace “deepchecks” with “deepchecks[nlp]”. For Computer Vision, use “deepchecks[vision]”. For a full installation guide, refer to the installation documentation.
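For example, installing the NLP variant would look like this (note that on shells such as zsh the extras specifier needs to be quoted):

pip install "deepchecks[nlp]" -U --user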

To use Deepchecks for production monitoring, you can deploy a local instance in one line on Linux/MacOS with Docker. Just create a directory for installation files, open a terminal and run:

pip install deepchecks-installer
deepchecks-installer install-monitoring

The installation will take a few minutes, and then you can access the application at localhost.

Quickstarts

Jump right into the respective quickstart documentation for Tabular, NLP, or Computer Vision testing, or for monitoring your deployed models.

For example, to set up a testing suite in Python, your code will look like this:

from deepchecks.tabular.suites import model_evaluation

# train_dataset and test_dataset are deepchecks Dataset objects wrapping your data,
# and model is the trained estimator you want to evaluate
suite = model_evaluation()
suite_result = suite.run(train_dataset=train_dataset, test_dataset=test_dataset, model=model)
suite_result.save_as_html()  # or suite_result.show() to display inline

This gives you a report to inspect your checks and understand your models better.
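The train_dataset and test_dataset arguments in the snippet above are deepchecks Dataset objects that wrap your raw data. A minimal sketch of building them from pandas DataFrames (the DataFrame names, label column, and categorical features below are placeholders for your own data):

from deepchecks.tabular import Dataset

# Wrap the raw DataFrames so Deepchecks knows the label and categorical columns
train_dataset = Dataset(train_df, label="target", cat_features=["category_column"])
test_dataset = Dataset(test_df, label="target", cat_features=["category_column"])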

How Deepchecks Works

At its core, Deepchecks revolves around a variety of built-in checks to test for data and model issues. Think of these checks as different quizzes for your students (the models). Each check focuses on a specific area:

  • Model performance (e.g., identifying weak segments)
  • Data distribution (e.g., detecting drift or leakage)
  • Data integrity (e.g., finding conflicting labels)

These checks can be run manually during research or automatically during CI processes, ensuring your models remain in tip-top shape.
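As a sketch of what running one of these checks manually looks like, here is the ConflictingLabels data-integrity check on a tiny, made-up DataFrame (the data and column names are purely illustrative):

import pandas as pd
from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import ConflictingLabels

# Two samples share identical features but disagree on the label
df = pd.DataFrame({"feature": [1, 1, 2, 3], "label": [0, 1, 0, 1]})
dataset = Dataset(df, label="label", cat_features=[])

result = ConflictingLabels().run(dataset)
result.show()  # rendered report; result.value holds the raw output for scripting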

Troubleshooting Ideas

If you encounter any issues while using Deepchecks, consider the following troubleshooting strategies:

  • Ensure you have the correct versions of Deepchecks and its dependencies installed (a quick way to check your version is shown below).
  • Check the online documentation for updates or changes to the API.
  • Look through the GitHub Issues page, where community members discuss recent problems and solutions.
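For example, one quick sanity check is confirming which Deepchecks version your environment actually resolves to (assuming the package exposes a version attribute, as most Python packages do):

import deepchecks
print(deepchecks.__version__)  # compare against the latest release on PyPI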

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
