Alibi - Machine Learning Model Explainability and Compliance
Open-source Python library enabling ML model inspection and interpretation.
See inside the black box
Alibi is designed to help explain the predictions of machine learning models and gauge the confidence of those predictions.
The library is designed to support the widest possible range of models via black-box methods. The goal of the open-source project is to expand the capabilities for inspecting model performance with respect to concept drift and algorithmic bias.
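To make the black-box idea concrete, here is a minimal, hypothetical sketch (not Alibi's API): a black-box method only needs access to a model's predict function, so a toy scoring function stands in for any trained model, and each feature's influence is estimated by perturbing it and measuring the change in the output.

```python
def predict(x):
    # Hypothetical black-box model: we can call it, but not inspect its internals.
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2]

def perturbation_importance(predict, x, eps=1.0):
    """Score each feature by how much nudging it shifts the prediction."""
    base = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps  # perturb one feature at a time
        scores.append(abs(predict(perturbed) - base))
    return scores

scores = perturbation_importance(predict, [1.0, 2.0, 3.0])
print(scores)  # feature 0 dominates; feature 2 has no effect
```

Because the technique never looks inside `predict`, the same approach applies whether the model is a linear function, a tree ensemble, or a neural network.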
Integrate model explanations into your own projects quickly and easily
Your project, your technology
Alibi is language and toolkit agnostic, so you can use the technologies that best suit your business.
Implement and extend the Alibi library in whatever way you want to generate the model explanations you need.
- Provide high quality reference implementations of black-box ML model explanation algorithms
- Define a consistent API for interpretable ML methods
- Support multiple use cases (e.g. tabular, text and image data classification, regression)
- Implement the latest model explanation, concept drift, algorithmic bias detection and other ML model monitoring and interpretation methods
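As a rough illustration of the concept-drift monitoring mentioned above (a hypothetical sketch, not Alibi's drift detectors), one simple scheme compares a statistic of a reference window, captured at training time, against a live window and flags drift when the difference exceeds a threshold:

```python
def mean(xs):
    return sum(xs) / len(xs)

def drift_detected(reference, live, threshold=0.5):
    """Flag drift when the live mean moves too far from the reference mean."""
    return abs(mean(live) - mean(reference)) > threshold

reference = [0.1, 0.2, 0.0, 0.1, 0.1]   # scores observed at training time
stable    = [0.2, 0.1, 0.1, 0.0, 0.2]   # similar distribution: no drift
shifted   = [0.9, 1.1, 1.0, 0.8, 1.2]   # distribution has moved: drift

print(drift_detected(reference, stable))   # False
print(drift_detected(reference, shifted))  # True
```

Production drift detectors use stronger statistical tests than a mean comparison, but the monitoring loop is the same: keep a reference sample, score incoming data against it, and alert when the gap grows.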