Protecting Your Machine Learning Against Drift

About this webinar

Deployed machine learning models can fail spectacularly in response to seemingly benign changes to the underlying process being modelled. Concerningly, when labels are not available, as is often the case in deployment settings, this failure can occur silently and go unnoticed.

This talk is a practical introduction to drift detection, the discipline focused on detecting such changes. We will start by building an understanding of how drift can occur, why it pays to detect it, and how it can be detected in a principled manner. We will then discuss the practicalities and challenges of detecting it as quickly as possible in machine learning deployment settings, where high-dimensional, unlabelled data arrives continuously. We will finish by demonstrating how the theory can be put into practice using the `alibi-detect` Python library.
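As a taste of the final demo, here is a minimal sketch of batch drift detection with `alibi-detect`, assuming a tabular setting and the library's feature-wise Kolmogorov-Smirnov detector (`KSDrift`); the data arrays are synthetic placeholders, not material from the talk.

```python
import numpy as np
from alibi_detect.cd import KSDrift

# Placeholder reference data drawn from the process the model was trained on.
x_ref = np.random.randn(500, 10).astype(np.float32)

# Feature-wise Kolmogorov-Smirnov detector; p_val is the significance level
# (Bonferroni-corrected across features by default).
detector = KSDrift(x_ref, p_val=0.05)

# A new, unlabelled batch arriving in deployment, simulated here with a shifted mean.
x_new = (np.random.randn(200, 10) + 1.0).astype(np.float32)

preds = detector.predict(x_new)
print(preds['data']['is_drift'])  # 1 if drift is detected, 0 otherwise
print(preds['data']['p_val'])     # per-feature p-values
```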

Speakers

Oliver Cobb

Machine Learning Researcher, Seldon

What you'll learn

  • The common pitfalls of ML models
  • What drift detection is
  • How to use the Alibi-Detect Python library