We hosted six MLOps LDN meetups in 2022, with a huge range of topics across the sessions. Missed one of the talks this year? No problem, here’s a summary of what we covered in each session:
We kicked off the year with a January meetup with talks from Jan Teichmann from The Trainline and Matt Squire from Fuzzy Labs. Jan spoke on the importance of getting the right models to the right places in order to deliver the last mile of data science. Matt conducted a deep dive into the crowded field of MLOps tools, breaking down the major building blocks of MLOps infrastructure.
MLOps London returned in March with more talks on production machine learning, DevOps and data science. The session kicked off with Magdalena Konkiewicz, a Data Evangelist at toloka.ai, who discussed the shift from ‘model-centric’ to ‘data-centric’ artificial intelligence as a way to solve bottlenecks in machine learning. She dove into an example experiment in which the model is kept frozen, so that only the data can be manipulated, and discussed scalable, efficient approaches to data annotation.
Sean Robertson from Mesh AI then gave the talk ‘Moving ML into Production is Difficult: Trials and Tribulations from the trenches’, where he dove into the advantages of exploiting the gap in intelligent decisions at the application layer in order to leap ahead of competitors, provide stand-out service to customers and adapt rapidly to market conditions.
The May edition of MLOps London featured Moritz Meister from Hopsworks and Alejandro Saucedo from The Institute for Ethical AI.
In his talk ‘Fresh Online Feature Stores need high performance Reverse ETL pipelines’, Moritz described how Hopsworks implemented a high-throughput, low-latency reverse ETL pipeline using streaming applications that feed the Hopsworks Online Feature Store. He also addressed the challenges of ensuring schema consistency and stable update rates between data warehouses, an intermediate Kafka cluster and the operational database RonDB.
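As a rough illustration of the pattern Moritz described (our sketch, not Hopsworks’ actual implementation), a reverse ETL step streams rows out of an analytical store, through a message queue, into a low-latency online store. The example below uses an in-memory deque as a stand-in for the Kafka cluster and a plain dict as a stand-in for RonDB; the row data and schema are hypothetical:

```python
from collections import deque

# Hypothetical warehouse rows to sync into the online feature store
warehouse_rows = [
    {"user_id": 1, "avg_spend": 42.0},
    {"user_id": 2, "avg_spend": 17.5},
]

# Schema the online store expects; enforcing it at produce time keeps
# the warehouse and the online store consistent
EXPECTED_SCHEMA = {"user_id", "avg_spend"}

def produce(rows, queue):
    """Validate each row's schema, then publish it to the queue (Kafka stand-in)."""
    for row in rows:
        if set(row) != EXPECTED_SCHEMA:
            raise ValueError(f"schema mismatch: {set(row)}")
        queue.append(row)

def consume(queue, online_store):
    """Drain the queue, upserting rows into the online store (RonDB stand-in)."""
    while queue:
        row = queue.popleft()
        online_store[row["user_id"]] = row

queue = deque()
online_store = {}
produce(warehouse_rows, queue)
consume(queue, online_store)
print(online_store[2]["avg_spend"])  # 17.5
```

In a real deployment the producer and consumer run continuously and independently, which is what makes the update rate and schema checks non-trivial.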
Alejandro dove into the importance of secure ML and automated security best practices when approaching machine learning deployment. He used practical examples that allow data science practitioners to adopt these best practices in their daily workflows, ensuring an appropriate level of security across the multiple stages of the machine learning lifecycle.
MLOps London returned in July with more talks on production machine learning, DevOps and Data Science. Alex Persin, Senior Software Engineer at Wayve presented on ‘cloud cars’, and what his team learnt running petabyte scale inference for autonomous vehicles in the cloud. Seldon’s Developer Advocate Ed Shee then sat down for a fireside chat on enterprise MLOps with Contino’s AI+ML Practice Lead, Byron Allen.
In the September edition of MLOps London, Paolo Ambrosio and Jonas Mende from Sky presented the ‘love story’ between Continuous Training and Continuous Delivery. Sky’s global-scale platform is deployed using a state-of-the-art Continuous Delivery pipeline. This includes the machine learning models that power personalisation. The team introduced the KFP Operator: an open-source tool that Sky has developed to bridge the gap between Continuous Training and Continuous Delivery on Kubeflow Pipelines.
The second presentation was from Laszlo Sragner, Founder of Hypergolic, who presented on the concept of ‘clean architecture’ and how to structure ML projects to reduce technical debt. He outlined a framework that enables practitioners to structure their projects and manage changes throughout the product lifecycle with minimal effort.
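To give a flavour of the idea (our illustration, not Laszlo’s exact framework), clean architecture keeps core domain logic free of infrastructure and ML-library details, so a model can be retrained or swapped without touching the surrounding code. A minimal Python sketch with hypothetical names:

```python
from typing import Protocol

class Model(Protocol):
    """Domain-facing interface: business logic depends on this abstraction,
    never on a specific ML framework."""
    def predict(self, text: str) -> float: ...

class KeywordModel:
    """Trivial stand-in model; a real project would wrap a trained classifier
    behind the same interface."""
    def predict(self, text: str) -> float:
        return 1.0 if "great" in text.lower() else 0.0

def triage_ticket(model: Model, text: str) -> str:
    """Domain logic: route a support ticket based on predicted sentiment.
    Swapping the model implementation requires no change here."""
    return "fast-track" if model.predict(text) > 0.5 else "standard"

print(triage_ticket(KeywordModel(), "Great service!"))  # fast-track
```

Because `triage_ticket` only sees the `Model` protocol, replacing `KeywordModel` with a retrained or entirely different model is a one-line change at the composition point, which is how this style keeps technical debt down as the product evolves.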
In the final meetup of the year, MLOps London hosted talks from Daniel Geater from Qualitest and MLOps engineer, Neven Miculinić.
Daniel presented on the challenges of testing ML and AI systems. As AI- and ML-infused technology becomes more widespread and organisations aim for ever-faster delivery, these testing challenges are becoming more apparent and must be tackled to preserve velocity, or risk massive increases in production issues and reduced system reliability. Daniel demonstrated how to establish left-shifted and right-shifted quality assurance, enabling confident, rapid releases of ML models by combining automation, data science and quality engineering best practices.
Neven’s presentation, titled ‘Everything is a Remix – reflection on the last 5 years in my career’, examined the evolution of MLOps over the last five years, along with the best practices and other lessons he has learned along the way.
Interested in attending or watching our next meetup? You can register here for the next session on Tuesday 10th January, where we’ll be hosting talks on distributed training and GPU inference with guest speakers Paul Hetherington from Mystic.ai and Uroš Lipovšek from AWS.