Alibi Explain v0.6.1 Released: Counterfactual Explanations for Any Model

We’ve released Alibi Explain v0.6.1, which features an implementation of our research paper Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning.

The new method, named Counterfactuals via RL, is a novel way of generating counterfactual explanations. It includes several innovations that make it easier for practitioners to generate counterfactual explanations for any model:

  1. The method is truly model-agnostic: unlike many other counterfactual methods, it does not require the model to be differentiable. By using reinforcement learning, it can generate counterfactuals for any black-box model (e.g. decision trees, random forests, XGBoost) as well as for deep learning models built with frameworks such as PyTorch and TensorFlow (see the sketch after this list).
  2. The method is trained like a regular machine learning model, so, unlike many other counterfactual methods, it requires no optimization at explanation time. Once trained, it can therefore generate counterfactual explanations for many instances in parallel with minimal overhead.
  3. The method supports simple feature constraints at explanation time, letting the user encode domain knowledge to keep counterfactuals feasible (e.g. marking a feature like “Gender” as immutable whilst allowing “Age” only to increase up to some threshold).
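
Because only a prediction function is needed, any fitted model can serve as the black box. Here is a minimal sketch using a scikit-learn random forest on synthetic data; the names `X`, `y`, `clf` and `predictor` are illustrative, not part of the library:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data and a non-differentiable black-box model.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# The explainer only ever calls this prediction function; no gradients needed.
def predictor(x):
    return clf.predict_proba(x)
```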

In Alibi Explain the method is available via the CounterfactualRL and CounterfactualRLTabular classes, covering general classification tasks and heterogeneous tabular data respectively (see the sketch below). Check out the documentation and examples for more in-depth information!
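
Below is a rough sketch of how the tabular variant is typically wired up, adapted from the library's documented examples. The pre-trained autoencoder (`encoder`, `decoder`), its preprocessors, the `predictor` function, and the dataset-specific objects (`feature_names`, `category_map`, `X_train`, `X_test`, `Y_t`) are assumed to be defined for your own data, and exact argument names may differ between releases:

```python
from alibi.explainers import CounterfactualRLTabular

explainer = CounterfactualRLTabular(
    predictor=predictor,                 # black-box prediction function
    encoder=encoder,                     # pre-trained autoencoder components
    decoder=decoder,
    latent_dim=latent_dim,
    encoder_preprocessor=preprocessor,
    decoder_inv_preprocessor=inv_preprocessor,
    coeff_sparsity=0.5,                  # sparsity loss coefficient
    coeff_consistency=0.5,               # consistency loss coefficient
    feature_names=feature_names,
    category_map=category_map,           # categorical column index -> values
    immutable_features=['Gender'],       # may not change in a counterfactual
    ranges={'Age': [0.0, 1.0]},          # 'Age' may only increase
    train_steps=10000,
    batch_size=100,
)

# Train once; afterwards no per-instance optimization is needed and
# counterfactuals can be generated for whole batches of instances at once.
explainer.fit(X=X_train)
explanation = explainer.explain(X_test, Y_t)
```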
