Seldon at IBM Think 2021

Last week, Seldon’s CTO Clive Cox and Director of Machine Learning Alejandro Saucedo spoke at IBM’s annual conference, IBM Think 2021, which brought together developers, operations teams, and executives from across the technology industry to focus on issues relating to hybrid cloud and AI.

In their talks, Clive and Alejandro explored the technical nuances of machine learning deployment and discussed the principles that should underpin trustworthy and reliable machine learning.

Using KFServing and Seldon on OpenShift to deploy models

In his talk, Clive sat down with Animesh Singh, CTO of IBM’s Watson Data and AI OSS Platform, to discuss how teams should approach advanced machine learning deployment. To do so, they explored two routes through which organisations can deploy their machine learning models on Kubernetes: KFServing (Kubeflow Serving) and Seldon.

Animesh and Clive talked through the basics of the KFServing stack, how it can support both default and canary configurations, how to set up its inference control plane, and the nuances of deployment. Then Clive spoke about Seldon’s product stack, and how it addresses the various challenges that come with ML deployment: deploying, scaling, monitoring, explaining, and analysing model performance.
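
For readers who want a concrete picture of what this looks like in practice, here is a minimal sketch of a KFServing deployment. It assumes the v1beta1 API and KFServing’s publicly documented sklearn sample model; the canaryTrafficPercent field is the mechanism behind the canary configurations Animesh described, sending only a fraction of traffic to a newly updated model.

```yaml
# A minimal KFServing InferenceService sketch, assuming the v1beta1 API.
# The storageUri points at KFServing's public sklearn sample model.
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    # When storageUri is later updated, canaryTrafficPercent routes
    # only this share of traffic to the new revision, while the
    # previous (default) revision keeps serving the rest.
    canaryTrafficPercent: 10
    sklearn:
      storageUri: "gs://kfserving-samples/models/sklearn/iris"
```

Applying a manifest like this with kubectl is enough to stand up a served, autoscaled inference endpoint.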

After talking through Seldon’s technology and how it enables model deployment at scale, Clive gave a live demonstration of Seldon in action. As well as showing how straightforward it is to deploy a model, he walked through Seldon’s dashboard for managing models, showed how teams can monitor the resources allocated to each model and track its ongoing performance, and demonstrated how Seldon can explain a model’s behaviour and detect concept drift.
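
Seldon Core follows the same declarative pattern: a model is deployed by applying a SeldonDeployment custom resource. The sketch below is modelled on the public sklearn iris example from Seldon’s documentation (the names and namespace are illustrative); richer inference graphs, chaining transformers, routers, and models, are expressed by extending the graph section.

```yaml
# A minimal SeldonDeployment sketch, modelled on Seldon's public
# sklearn iris example; names and namespace are illustrative.
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: seldon
spec:
  name: iris
  predictors:
  - name: default
    replicas: 1
    graph:
      # The inference graph: here a single node served by the
      # pre-packaged scikit-learn model server.
      name: classifier
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/iris
```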

Animesh and Clive also talked through how teams can install KFServing and Seldon on Red Hat OpenShift, the popular open source container platform built on Kubernetes. With both KFServing and Seldon available on the Red Hat Marketplace, any team working on cloud-native architecture can deploy their machine learning models with very little friction. You can watch Animesh and Clive’s session in full here.
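
On OpenShift, a Marketplace install like this goes through the Operator Lifecycle Manager, either from the OperatorHub UI or by applying a Subscription resource. A rough sketch of the latter follows for the Seldon operator; the package name, channel, and catalog source are assumptions here, so check the Marketplace listing for the exact values.

```yaml
# Hypothetical OLM Subscription for the Seldon operator on OpenShift.
# The package name, channel, and catalog source are assumptions;
# confirm them against the Red Hat Marketplace listing.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: seldon-operator
  namespace: openshift-operators
spec:
  name: seldon-operator
  channel: stable
  source: certified-operators
  sourceNamespace: openshift-marketplace
```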

The principles of trusted AI

In another session, Alejandro joined François Jezequel and Souad Ouali from Orange to introduce the eight principles for trusted AI devised by the Linux Foundation’s AI & Data Foundation (LF AI & Data). Developed by an LF AI & Data working group, the RREPEATS framework aims to provide a set of simple, universally applicable, and easy-to-understand principles to inform the development and deployment of machine learning models.

In their talk, Alejandro, François, and Souad explored each of the RREPEATS principles:

  • Reproducibility: the ability of an independent team to replicate the experiments or results of an AI and reach the same conclusions, using the same methods, data, software, code, algorithms, models, and documentation.
  • Robustness: the ability of an AI to perform in a secure manner with meaningful safeguards to prevent the alteration of a system through purposeful tampering or the shifting of conditions away from the original assumptions a system operated under.
  • Equitability: the ability of those behind an AI to take deliberate steps throughout the AI lifecycle to avoid intended or unintended bias and unfairness that would inadvertently cause harm.
  • Privacy: the ability of an AI system to guarantee privacy and data protection throughout its entire lifecycle. This covers the information initially collected from users, as well as information generated about users throughout their interaction with the system.
  • Explainability: the ability to describe how an AI works and makes decisions.
  • Accountability: the ability of an AI and the people behind it to explain, justify, and take responsibility for any decision and action made by the AI.
  • Transparency: the existence of disclosure around AI systems to ensure that people understand AI-based outcomes. Whenever relevant, users should be aware when and how they are interacting with an AI and not a human being.
  • Security: the securing of an AI system and those that interact with it, with the safety of an AI tested and assured across its entire lifecycle within an explicit and well-defined domain of use.

You can watch Alejandro, François, and Souad’s session in full here.
