How to Use Alibi: A Guide to Machine Learning Model Interpretation

Feb 11, 2021 | Data Science

Welcome to our guide on Alibi, an incredible Python library designed for interpreting and inspecting machine learning models. In this article, we will walk you through its installation, usage, supported methods, and troubleshooting tips.

Installation and Usage

Alibi can be installed with pip (from PyPI or directly from GitHub) or with conda/mamba (from conda-forge). Here’s how, with a quick way to verify the installation at the end of this section:

With pip

  • Standard installation:
    pip install alibi
  • For the development version:
    pip install git+https://github.com/SeldonIO/alibi.git
  • For distributed computation support:
    pip install alibi[ray]
  • For SHAP support:
    pip install alibi[shap]

With conda

To install Alibi from conda-forge, it’s recommended to use mamba:

  • Install mamba:
    conda install mamba -n base -c conda-forge
  • For standard installation:
    mamba install -c conda-forge alibi
  • For distributed computing support:
    mamba install -c conda-forge alibi ray
  • For SHAP support:
    mamba install -c conda-forge alibi shap
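
Whichever route you choose, a quick sanity check is to import the library and print its version. This is a minimal sketch that only assumes Alibi was installed into the currently active environment:

import alibi

# Confirm the package imports cleanly and report which release is active
print(alibi.__version__)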

Using Alibi: An Analogy

Imagine trying to drive a car without a dashboard. You wouldn’t know how fast you’re going, if you’re close to running out of fuel, or if the engine has a problem. Alibi is like that dashboard for machine learning models. It helps you understand what’s happening inside the “black box” of your models, providing clear insights into their behavior.

Example Usage

The Alibi explanation API takes its inspiration from scikit-learn and involves three distinct steps: initialize, fit, and explain. Let’s break it down:


from alibi.explainers import AnchorTabular

# Initialize the explainer with a prediction function, the feature names
# and a mapping of categorical features
explainer = AnchorTabular(predict_fn, feature_names=feature_names, categorical_names=category_map)
explainer.fit(X_train)

# Explain an instance
explanation = explainer.explain(x)

The returned explanation is an Explanation object that contains both meta and data attributes. The meta attribute holds information about the explainer, while data contains the actual computed explanation.
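
To make the workflow concrete, here is a minimal, self-contained sketch. The dataset (iris), the random forest model, and the precision threshold are our own choices for illustration rather than anything prescribed by Alibi; the explainer calls follow the initialize, fit, and explain pattern above:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Load a small tabular dataset and train a simple classifier
data = load_iris()
X_train, y_train = data.data, data.target
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Initialize the explainer with a prediction function and the feature names
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)

# Fitting discretizes the numerical features using the training data
explainer.fit(X_train)

# Explain a single instance; threshold is the target precision of the anchor
explanation = explainer.explain(X_train[0], threshold=0.95)

print(explanation.anchor)     # human-readable anchor conditions
print(explanation.precision)  # fraction of perturbed samples keeping the same prediction
print(explanation.meta)       # metadata about the explainer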

Supported Methods

Here’s a glimpse of some of the methods Alibi supports:

Model Confidence

Certain algorithms provide instance-specific scores for measuring model confidence. Examples include Trust Scores and the Linearity Measure.
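
For illustration, here is a minimal Trust Scores sketch. The dataset, the logistic regression model, and the choice of k are assumptions made for this example; the TrustScore calls follow the library's fit/score pattern as we understand it:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from alibi.confidence import TrustScore

# Train a simple classifier on a small tabular dataset
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Fit the trust scorer on the training data (iris has 3 classes)
ts = TrustScore()
ts.fit(X_train, y_train, classes=3)

# Higher trust scores mean a prediction agrees better with the training data;
# the second return value is the closest class other than the predicted one
score, closest_class = ts.score(X_test, clf.predict(X_test), k=2)
print(score[:5])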

Prototypes

Some algorithms, like ProtoSelect, select a small set of representative prototypes from a dataset, providing a distilled view that can be used to build interpretable classifiers.

References and Examples

To deepen your understanding, explore the worked example notebooks in the Alibi documentation.

Troubleshooting

If you run into issues during installation or usage, consider the following troubleshooting tips:

  • Ensure your Python version is one supported by the Alibi release you are installing.
  • If using pip, double-check the install command (including extras such as alibi[ray] or alibi[shap]) and confirm you have a working internet connection; reinstalling into a clean environment can also help, as shown after this list.
  • For conda users, check that mamba is installed correctly and that your commands point to the conda-forge channel.
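
If an installation still misbehaves, recreating a clean virtual environment often clears things up. The commands below are one possible sequence (the environment name alibi-env is just an example):

python -m venv alibi-env
source alibi-env/bin/activate   # on Windows: alibi-env\Scripts\activate
pip install --upgrade pip
pip install alibi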

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Citations

If you find Alibi useful in your research, please cite the following:

BibTeX entry:

@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/volume22/21-0017/21-0017.pdf}
}

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
