What is Drift Detection?
Deployed machine learning models can fail spectacularly in response to seemingly benign changes in the underlying process being modelled. Concerningly, when labels are not available, as is often the case in deployment settings, such failures can occur silently and go unnoticed, posing great risk to an organisation.
Drift detection is the discipline focused on detecting such changes, and awareness of its importance is growing among machine learning practitioners and researchers. In this video, Seldon researcher Oliver Cobb provides an introduction to the subject, explaining how drift can occur, why it pays to detect it and how it can be detected in a principled manner. Particular focus is given to the practicalities and challenges of detecting drift as quickly as possible in deployment settings where high-dimensional, unlabelled data is arriving continuously.
Those interested in adding drift detection functionality to their own projects are encouraged to check out our open-source Python library alibi-detect.
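To give a flavour of what this looks like in practice, here is a minimal sketch assuming alibi-detect's Kolmogorov-Smirnov detector (KSDrift) and synthetic data: a detector is fitted on a reference sample drawn from the training distribution and then asked whether a new, unlabelled batch has drifted. The data and shift below are purely illustrative.

# Minimal sketch, assuming the alibi-detect KSDrift API and synthetic data.
import numpy as np
from alibi_detect.cd import KSDrift

# Reference data from the distribution the model was trained on
# (hypothetical 100-dimensional features).
x_ref = np.random.normal(loc=0.0, scale=1.0, size=(1000, 100)).astype(np.float32)

# Feature-wise Kolmogorov-Smirnov detector at a 5% significance level
# (with multiple-testing correction applied across features).
detector = KSDrift(x_ref, p_val=0.05)

# A new, unlabelled batch arriving in deployment; here its mean has shifted.
x_new = np.random.normal(loc=0.5, scale=1.0, size=(200, 100)).astype(np.float32)

preds = detector.predict(x_new)
print("Drift detected:", bool(preds["data"]["is_drift"]))

Because no labels are needed, a check like this can run continuously alongside a deployed model and raise an alert as soon as the incoming data no longer resembles what the model was trained on.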
Don’t miss Oliver Cobb’s upcoming talk at EuroPython on ‘Protecting Your Machine Learning Against Drift: An Introduction’ where he will build on the topics discussed in this presentation. It’s on Thursday 29th July at 12pm CEST.

As an applied machine learning researcher at Seldon, Oliver focuses on the technical challenges that arise from users' desire to deploy their models in a robust and responsible manner. Stemming from a broader interest in maths and statistics, Oliver has five years of experience in machine learning research, during which he has published at top conferences including ICML, ICLR and AISTATS.