Seldon releases Alibi Explain 0.5.0

The Seldon data science team are delighted to announce the release of v0.5.0 of Alibi Explain, with three new techniques for explaining the predictions of machine learning models. This release features Integrated Gradients for TensorFlow models, Accumulated Local Effects for black-box models, and TreeSHAP explanations for tree-based models such as gradient boosted models and random forests. These features build on the work recently recognised by the CogX Innovation Awards for ‘Best Innovation in Explainable AI’.

Integrated Gradients (IG, Sundararajan et al., 2017) is a feature attribution method that explains the predictions of a model by assigning importance scores to each input feature. The resulting explanation gives a signed measure of the positive or negative influence of each feature on the predicted value. IG can work with any type of data, including tabular, text and image data. Below is an example of a text classification task with a model predicting positive sentiment for a sample movie review; the IG attribution scores are color-coded green and pink (positive and negative respectively), revealing the effect of each word on the positive prediction.
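The snippet below is a minimal sketch of how Integrated Gradients can be invoked through the Alibi Explain API. The small untrained Keras model, the random data and the parameter values are purely illustrative assumptions standing in for the text model shown above; consult the documentation for text and image use cases.

```python
import numpy as np
import tensorflow as tf
from alibi.explainers import IntegratedGradients

# A small Keras classifier standing in for the model being explained
# (in practice this model would be trained first)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

X = np.random.rand(5, 4).astype(np.float32)   # instances to explain (illustrative)
preds = model(X).numpy().argmax(axis=1)       # predicted classes to attribute against

# Integrate gradients along the path from the baseline to each input
ig = IntegratedGradients(model, n_steps=50, method="gausslegendre")
explanation = ig.explain(X, baselines=None, target=preds)

# Signed importance score for each input feature of each instance
# (the exact shape of the attributions field may vary between versions)
attributions = explanation.attributions
```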

Accumulated Local Effects (ALE, Apley and Zhu, 2016) is a method for understanding the effects of features on the model prediction. It is an alternative to the popular Partial Dependence Plots (PDP) technique, addressing key shortcomings of PDP such as its assumption of feature independence and its reliance on predictions at out-of-distribution data points. It can be seen as a “global” explanation method, as it uses a training set to elicit the effect of each feature on the model predictions. ALE can be applied to tabular datasets with numeric features. Below is an example output of ALE run on a logistic regression classifier for the Iris dataset; the explanation correctly recovers the linear effect of every feature on the class predictions.
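As a rough illustration of the API, the sketch below runs ALE on a logistic regression classifier for the Iris dataset. Passing the classifier's decision_function as the black-box predictor is an assumption made here for the sake of a compact example; the exact arguments should be checked against the documentation.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import ALE, plot_ale

data = load_iris()
X, y = data.data, data.target

clf = LogisticRegression(max_iter=1000).fit(X, y)

# ALE treats the model as a black box: only a prediction function is needed.
# Here the decision scores are explained rather than the predicted probabilities.
ale = ALE(clf.decision_function,
          feature_names=data.feature_names,
          target_names=data.target_names)
exp = ale.explain(X)

# One panel per feature showing its accumulated local effect on each class
plot_ale(exp)
```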

Tree Shapley Additive Explanations (TreeSHAP, Lundberg et al., 2020) is a specialized SHAP method for tree-based models which computes fast and exact feature attributions (SHAP values). We build on the excellent shap package by the authors of the paper, providing a wrapper that conforms to the Alibi Explain API and exposes the configuration of the explainer in a straightforward way. The TreeSHAP algorithm is a white-box method that works on most tree-based models such as decision trees, random forests and gradient boosted models (e.g. xgboost, lightgbm, catboost). Below is an example of the output of TreeSHAP explanations on an xgboost model for an income classification task. The force plot shows how each feature contributes positively or negatively to the output of the model.
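A hedged sketch of the wrapper's usage is shown below. The breast cancer dataset and the xgboost classifier configuration are illustrative assumptions rather than the income model above, and the path-dependent variant (fitting without background data) is used for simplicity.

```python
from sklearn.datasets import load_breast_cancer  # illustrative dataset
import xgboost as xgb
from alibi.explainers import TreeShap

data = load_breast_cancer()
X, y = data.data, data.target

# An xgboost classifier standing in for the tree-based model being explained
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

# Path-dependent TreeSHAP: no background data is passed to fit()
explainer = TreeShap(model, model_output="raw")
explainer.fit()

explanation = explainer.explain(X[:10])

# Per-feature SHAP values and the baseline (expected model output);
# the exact layout of these fields may vary between versions
shap_values = explanation.shap_values
expected_value = explanation.expected_value
```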

As always, the new methods are accompanied by in-depth descriptions of use cases and comprehensive examples. To get started, visit our documentation page.
