MLOps London July: Talks on AI Regulation and Production-Grade Pipelines

The MLOps London July meet-up featured two captivating talks, each addressing a crucial aspect of the AI journey. The first delved into future-proofing AI for regulation through MLOps, highlighting the challenges and barriers to AI adoption and how MLOps, integrated into the AI lifecycle, can alleviate them. The second offered insights into building MLOps pipelines efficiently and securely by harnessing Kubernetes services, with a focus on security and cost optimization.

Future-Proofing AI for Regulation: An MLOps Approach

The first talk of the meet-up, delivered by Chris Jefferson, presented an approach to future-proofing AI for regulation: aligning MLOps with business and regulatory principles while still meeting innovation targets and tracking the relevant metrics.

In addition to model accuracy, it is crucial for data scientists and ML engineers to consider regulatory, robustness, and public-backlash risks. Each stakeholder within the organization has specific concerns: data and tech teams oversee model capabilities, authorisation, and compliance; risk and compliance managers assess risk management; and C-suite executives focus on business needs and KPIs.

AI systems that fail to align with regulatory principles can be stopped in their tracks – a recent example being Italy's temporary ban of ChatGPT over GDPR concerns. The underlying principles of ‘Responsible AI’ development include risk management (judged through impact-based assessment), reliability (trustworthiness, understandability, interpretability), accountability (traceability, documentation, compliance), security (mitigation of security threats), and human centricity (consistent data quality and ethics throughout).

In accordance with these principles, nations have started to build regulatory landscapes based on their own standards: horizontal legislation (EU), agile and domain-based regulation (UK), and vertical, state-based regulation (US).

Chris and his team at Advai research where models go wrong, rather than simply aiming to maximise accuracy. Each regulatory measure is viewed through the lenses of compliance, risk, and harm, so that every aspect of the AI development process is appropriately covered. The framework captures risk and assigns it to stakeholders throughout the AI lifecycle, identifies KPIs and metrics for audit and approval, and enables successful deployments. Applying the MLOps lifecycle can further smooth out friction points, creating fertile ground for the emergence of AI standards and hubs.

Some essential tips to ensure this approach functions smoothly:

  • Understand the use-case and context to determine human impact
  • Assign a level of risk to gauge the potential impact of the completed AI model and its use-case
  • Track metrics aligned to use-case deployment (not just model accuracy)
  • Incorporate robustness and resilience to out-of-distribution inputs from model inception (a toy illustration follows this list)
  • Design for security and incorporate cyber-security practices
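To make the metrics point concrete, here is a minimal sketch of tracking robustness alongside accuracy. The model, data, and Gaussian-noise perturbation are illustrative assumptions for this post, not Advai's actual tooling.

```python
# A toy sketch: track accuracy on clean data AND on perturbed data,
# rather than stopping at a single accuracy number.
# Everything here (model, data, noise level) is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Clean accuracy: the metric most teams stop at.
clean_acc = accuracy_score(y, model.predict(X))

# Robustness proxy: accuracy under Gaussian noise, a crude stand-in for
# out-of-distribution or perturbed inputs seen after deployment.
X_noisy = X + rng.normal(scale=0.5, size=X.shape)
robust_acc = accuracy_score(y, model.predict(X_noisy))

print(f"clean accuracy: {clean_acc:.3f}")
print(f"noisy accuracy: {robust_acc:.3f}")
```

A widening gap between the two numbers is an early warning that the model may not hold up once deployment data drifts away from the training distribution.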

Designing a Production-Grade MLOps Pipeline Using Kubeflow

Srivalsan Mannoor Sudhagar presented a talk on designing an efficient, production-grade MLOps pipeline by leveraging Kubernetes services through Kubeflow. Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Containers are like individual compartments for your applications, ensuring they run consistently across different environments.
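To show what that automation looks like in practice, here is a minimal sketch using the official `kubernetes` Python client to scale a deployment. The deployment name, namespace, and replica count are hypothetical placeholders, and a local kubeconfig is assumed.

```python
# A minimal sketch, assuming `pip install kubernetes` and a kubeconfig
# pointing at a cluster. "model-server" is a hypothetical deployment name.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig
apps = client.AppsV1Api()

# Declare the desired replica count; Kubernetes starts or stops containers
# until the observed state matches this declared state.
apps.patch_namespaced_deployment_scale(
    name="model-server",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```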

Kubeflow is an open-source ML platform that enhances Kubernetes’ capabilities, simplifying ML workload deployment and management. With pre-configured components like Jupyter notebooks, TensorFlow, and PyTorch, it enables easy deployment on Kubernetes clusters. ML tasks can efficiently scale based on demand, utilizing Kubernetes’ resource management. Kubeflow enables reproducible ML pipelines through a declarative approach, allowing version control and easy experiment sharing. 
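The declarative pipeline idea is easiest to see in code. Below is a minimal sketch using the Kubeflow Pipelines SDK (`pip install kfp`, v2 API); the component and pipeline names are invented for illustration and were not part of the talk.

```python
# A minimal sketch of a declarative Kubeflow pipeline (kfp v2 API).
# Component and pipeline names here are illustrative placeholders.
from kfp import dsl, compiler

@dsl.component
def train(learning_rate: float) -> float:
    # Stand-in for a real training step; returns a dummy metric.
    return 1.0 - learning_rate

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train(learning_rate=learning_rate)

# Compilation produces a declarative YAML definition that can be
# version-controlled and shared, which is what makes runs reproducible.
compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```

The compiled YAML file is the artifact that gets versioned and shared, so the same pipeline can be re-run identically across clusters and collaborators.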

It also facilitates experiment tracking, model versioning, and serving, making it seamless to deploy trained ML models as Kubernetes services. This fosters collaboration among data scientists, ML engineers, and DevOps teams, who can prioritize model development over infrastructure management. Noteworthy features include Fairing for ML model building and deployment, hyperparameter tuning via Katib, and Pipelines, which provides a UI, an orchestration engine, and an SDK for ML workflow management.
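Once compiled, a pipeline definition can be submitted to a cluster as a tracked run. A hedged sketch, assuming a Kubeflow Pipelines endpoint is reachable at the placeholder URL below:

```python
# A minimal sketch of submitting the compiled pipeline from the previous
# example; the host URL and run name are placeholders, not from the talk.
import kfp

client = kfp.Client(host="http://localhost:8080")
run = client.create_run_from_pipeline_package(
    "pipeline.yaml",                     # artifact produced by the compiler
    arguments={"learning_rate": 0.005},  # overrides the pipeline default
    run_name="demo-run",
)
print(run.run_id)  # the run is now tracked in the Pipelines UI
```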

Community Networking

The event came to a delightful close with attendees enjoying food and drinks courtesy of Seldon. Heartfelt gratitude to Rise (by Barclays) for generously providing the event space, creating a perfect setting for knowledge sharing and networking. The ample networking opportunities in between talks allowed participants to connect and exchange valuable insights. Lastly, a special thanks to Ed Shee for organizing everything seamlessly, ensuring a successful and enriching experience for all involved.
