Seldon has just released v0.4.0 of Alibi Detect, our outlier, adversarial and data drift detection library, with a multitude of new features that help organisations minimise risk and stay on top of their machine learning models. As AI and deep learning techniques continue to develop rapidly, these capabilities are increasingly critical for businesses to maintain security and robustness across their stack. Adversarial attacks and shifting data are widely considered significant obstacles to getting models into production, and knowing they can be detected and dealt with swiftly is essential to deploying ML models with confidence and ease.
The library now includes Kolmogorov-Smirnov and Maximum Mean Discrepancy data drift detectors, including preprocessing methods described in the NeurIPS 2019 paper Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift. These drift detectors can warn you when incoming data diverges from the training data and flag that the model needs to be retrained in order to maintain performance. This can be critical in live settings, where drift can impact the business within very short periods of time.
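To give a flavour of the Kolmogorov-Smirnov approach, the core idea can be sketched in a few lines: run a feature-wise two-sample KS test between the reference (training) data and incoming data, and flag drift if any feature's p-value falls below a Bonferroni-corrected threshold. Note this is a minimal illustration of the principle with scipy, not Alibi Detect's own API, and the function and variable names here are ours.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_drift(x_ref, x, p_val=0.05):
    """Feature-wise two-sample KS test with a Bonferroni-corrected
    threshold; returns (drift_detected, per-feature p-values).
    Illustrative sketch only, not the Alibi Detect implementation."""
    n_features = x_ref.shape[1]
    p_vals = np.array([ks_2samp(x_ref[:, f], x[:, f]).pvalue
                       for f in range(n_features)])
    threshold = p_val / n_features  # Bonferroni correction across features
    return bool((p_vals < threshold).any()), p_vals

rng = np.random.default_rng(0)
x_ref = rng.normal(0, 1, size=(500, 3))
x_same = rng.normal(0, 1, size=(500, 3))   # drawn from the same distribution
x_shift = rng.normal(3, 1, size=(500, 3))  # mean-shifted: drift expected

drift_same, _ = ks_drift(x_ref, x_same)
drift_shift, _ = ks_drift(x_ref, x_shift)
```

A strongly mean-shifted batch is flagged, while a batch from the training distribution typically is not. The preprocessing methods from the Failing Loudly paper matter because for high-dimensional data such as images, a dimensionality-reducing step before the per-feature tests greatly improves detection power.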
The release comes with extensive notebooks illustrating how to detect covariate, label and malicious data drift on CIFAR-10-C (Hendrycks & Dietterich, 2019), a dataset in which the instances have been corrupted by various common perturbations such as noise, blur and brightness changes at different levels of severity, leading to a gradual decline in machine learning model performance. CIFAR-10-C is also included in the library's datasets to allow further research and experimentation.
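The Maximum Mean Discrepancy detector mentioned above compares distributions via kernel mean embeddings. The statistic being tested can be sketched in plain numpy as the (biased) squared-MMD estimate with an RBF kernel; this is purely to illustrate the quantity involved, not the library's implementation, which adds preprocessing and a permutation test to turn the statistic into a p-value.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    # As a squared norm of mean-embedding differences it is non-negative.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(42)
x_ref = rng.normal(0, 1, size=(200, 5))
x_same = rng.normal(0, 1, size=(200, 5))   # same distribution as reference
x_shift = rng.normal(2, 1, size=(200, 5))  # shifted distribution

mmd_same = mmd2_biased(x_ref, x_same)
mmd_shift = mmd2_biased(x_ref, x_shift)
```

The statistic stays close to zero for data matching the reference set and grows as the distributions diverge, which is what makes it usable as a drift score on preprocessed image data such as CIFAR-10-C embeddings.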
Alibi Detect already contains online and offline outlier detection methods for tabular, image and time-series data, as well as adversarial detection for tabular and image data. Check out the docs or email firstname.lastname@example.org for more information!