Seldon’s Alejandro Saucedo features in this article, which explores the ways in which Kubernetes enhances the use of machine learning (ML) within the enterprise.
Machine learning (ML) is becoming a commonly implemented tool for easing the workloads of employees within various areas, from cyber security to customer service. However, ML workloads can place their own drain on resources. A possible solution, which offers additional benefits, is the open source containerisation technology Kubernetes.
In this article, five experts in the space explore how Kubernetes extends to ML, allowing development teams to get better results.
Five compatible capabilities
“To apply the flexibility of cloud-native development and infrastructure to machine learning applications, Kubernetes comes with five powerful capabilities: scalability, GPU support, multi-tenancy, data management and infrastructure abstraction, which make it a favorite tool for data scientists to take ML to production,” said Raina.
“Kubernetes supports GPUs today, which accelerates the AI workflow and automates the management of GPU-accelerated application containers. These tools enable ML teams to leverage the speed of GPUs within a containerised workflow.
“Kubernetes also provides infrastructure abstraction, giving data scientists a layer of abstraction over these services so they need not worry about the infrastructure underneath. As more groups look to leverage machine learning to make sense of their data, Kubernetes makes it easier for them to access the resources they need.
“ML is composed of a diverse set of workloads managed by separate teams. Kubernetes offers the ‘namespaces’ feature, which enables a single cluster to be partitioned into multiple virtual clusters. Finally, Kubernetes provides a single access point for diverse data sources and manages the volume lifecycle, enabling teams to provision exactly the cloud-based storage they require while reducing complexity.”
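The namespace partitioning Raina describes can be made concrete. A minimal sketch follows, assuming nothing beyond standard Python: the manifests are built as plain dictionaries (the same structure you would write in YAML), so no cluster or client library is needed. The team names and quota figures are illustrative, not from the article.

```python
# Sketch: partitioning one cluster into per-team virtual clusters with
# Namespaces and ResourceQuotas. Team names and quota values are
# hypothetical examples.

def team_namespace(team: str) -> dict:
    """A Namespace manifest isolating one ML team's workloads."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": f"ml-{team}"},
    }

def team_quota(team: str, cpus: str, gpus: str) -> dict:
    """A ResourceQuota capping what the team's namespace may consume."""
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{team}-quota", "namespace": f"ml-{team}"},
        "spec": {
            "hard": {
                "requests.cpu": cpus,
                "requests.nvidia.com/gpu": gpus,
            }
        },
    }

# Two hypothetical teams sharing one physical cluster.
manifests = []
for team, cpus, gpus in [("fraud", "32", "2"), ("nlp", "64", "4")]:
    manifests.append(team_namespace(team))
    manifests.append(team_quota(team, cpus, gpus))
```

Applying these manifests would give each team its own virtual slice of the cluster, with hard caps keeping one team’s workloads from starving another’s.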
Enhancing the open source market
The scalability of Kubernetes, alongside the flexibility of ML, can allow developers within the open source space to innovate without experiencing strain on their workloads.
Thomas Di Giacomo, president of engineering and innovation at SUSE, explained: “Kubernetes and cloud native technologies enable a broad selection of applications because they serve as a reliable connecting mechanism for a multitude of open source innovations, ranging from supporting various types of infrastructure to adding AI and ML capabilities that help make developers’ lives simpler and business applications more streamlined.
“Kubernetes facilitates fast, simple management and clear organisation of containerised services and applications. The technology also enables the automation of operational tasks, such as application availability management and scaling.
“There’s no denying that AI and ML technologies will have a massive impact on the open source market. Developed by the community, AI open source projects will help to develop and train ML models, and will provide a powerful feedback loop that will enable faster innovation.
“We have already witnessed that at SUSE, where we have been working on and developing AI and ML solutions together with Kubernetes to streamline their use by data scientists, who can then focus on their own needs and processes rather than the mechanics.”
Advancement of intelligence
Justin Bercich, head of AI at Lucinity, expanded on the notion of accelerated innovation using a combination of Kubernetes and ML, explaining how democratisation can lead to advancement of intelligence.
“It’s no secret that technology, as a whole, has become more available, accessible, and democratised,” said Bercich. “One of the main reasons AI and ML have been able to continue their relentless march is because of specific open-source software such as TensorFlow (deep learning) and Kubernetes (distributed computing), which have made data science infinitely more efficient and effective. The more people that become fluent in TensorFlow and Kubernetes, the more ideas and innovations will flow and flourish, and the more advanced AI and machine learning will become.
“It means machine learning pipelines can now be industrialised, operationalised, and commercialised. Traditionally, machine learning was a process that took place offline, with models updated using data outside production. Now, the machine learning pipeline is built on algorithms and models that learn efficiently as data flows through the system. Brands that have cracked this ‘deep learning’ code will understandably keep their cards close to their chests, because it’s so valuable.
“In a nutshell, machine learning increasingly resembles a conveyor belt. You receive data, you make transformations, you make a prediction, and then you learn from it. Your machine brain is always learning from new insights given to it, just like a human brain.”
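The conveyor belt Bercich describes can be sketched in a few lines of plain Python. The model below is deliberately tiny: a single-weight linear predictor updated by stochastic gradient descent after every observation. The data stream, learning rate, and normalisation step are all illustrative stand-ins for a real pipeline.

```python
# Sketch of the conveyor belt: receive data, transform it, predict,
# then learn from the observed outcome. All values are illustrative.

def transform(x: float) -> float:
    """Feature step: a trivial normalisation stands in for real preprocessing."""
    return x / 10.0

class OnlineModel:
    """Predicts y ≈ w * x, nudging w after every observation."""
    def __init__(self, lr: float = 0.1):
        self.w = 0.0
        self.lr = lr

    def predict(self, x: float) -> float:
        return self.w * x

    def learn(self, x: float, y_true: float) -> None:
        error = self.predict(x) - y_true
        self.w -= self.lr * error * x  # gradient step on squared error

# A synthetic stream whose true relation is y = 3 * transform(x).
stream = [(x, 3.0 * transform(x)) for _ in range(50) for x in range(1, 10)]

model = OnlineModel()
for raw_x, y in stream:
    feat = transform(raw_x)
    model.predict(feat)   # the prediction would be served here
    model.learn(feat, y)  # then the model learns from the outcome
```

After enough of the stream has flowed through, the learned weight settles near the true value of 3.0: the model keeps improving as data arrives, rather than being retrained offline.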
Secure and scalable workflows
While ML processes can be demanding on resources, workflows can be secured and set up at scale with the help of Kubernetes.
“Machine learning can be a very resource-intensive activity, whether it be the process of building a machine learning model or deploying and executing it in a production environment,” said John Spooner, head of artificial intelligence EMEA at H2O.ai.
“Kubernetes can meet many of the computational challenges by orchestrating this workflow through containers. It can be the single platform of choice for IT that provides a scalable and secure way of deploying software in a unified manner. It allows data scientists scalable access to CPUs and GPUs that automatically increases when the computation requires a burst of activity and scales back down when finished.
“It also provides monitoring and governance capability for IT to make sure everything is working correctly and the capacity of the environment is optimised.”
Raina added: “The ML pipeline is resource-intensive because the data preparation requires consistent, intensive computation. Kubernetes makes this process easier to orchestrate. Now users can allocate additional resources as needed simply by adding more physical or virtual servers to their clusters.”
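The burst-and-shrink behaviour Spooner describes rests on two ordinary Kubernetes objects. The sketch below builds both as plain Python dictionaries (the same structure you would write in YAML); the names, images, and thresholds are hypothetical. The `nvidia.com/gpu` resource name is the one exposed by NVIDIA’s device plugin for Kubernetes.

```python
# Sketch: a GPU-requesting training Pod, and a HorizontalPodAutoscaler
# that grows a serving Deployment during bursts and shrinks it afterwards.
# Names, image references, and thresholds are illustrative.

training_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "registry.example.com/trainer:latest",  # hypothetical image
            "resources": {
                "requests": {"cpu": "4", "memory": "16Gi"},
                "limits": {"nvidia.com/gpu": 1},  # one GPU from the device plugin
            },
        }],
        "restartPolicy": "Never",
    },
}

serving_hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "model-server"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1",
                           "kind": "Deployment",
                           "name": "model-server"},
        "minReplicas": 1,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Resource",
            "resource": {"name": "cpu",
                         "target": {"type": "Utilization",
                                    "averageUtilization": 80}},
        }],
    },
}
```

The autoscaler adds replicas whenever average CPU utilisation exceeds the target and removes them when the burst passes, which is exactly the scale-up-then-scale-back-down pattern described above.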
Bolstering MLops
A final point to discuss on the possible relationship between Kubernetes and ML is that Kubernetes can bolster an emerging member of the DevOps family: MLops.
Alejandro Saucedo, engineering director at Seldon, said: “The relationship between Kubernetes and ML is that the former serves as a fantastic enabler of the latter. Kubernetes has proven to enable the deployment of thousands of microservices across organisations, which makes it an ideal technology for businesses looking to deploy ML models on a large scale across a range of business units.
“Kubernetes is well-suited for operational teams, enabling them to monitor and observe ML models at massive scale. Kubernetes allows for cloud-native “MLops”, which is essentially defined as an extension of DevOps with ML treated as a first class citizen. This means teams can adopt processes like the continuous integration/continuous deployment (CI/CD) paradigms when it comes to ML models, which greatly improves efficiency in maintenance and operations of large-scale AI systems.
“Thus, Kubernetes is key to the future of ML: it allows teams to deploy multiple versions of thousands of models across several environments, whilst also introducing MLops capabilities that make such scale manageable. Kubernetes is a step-change for speeding up the deployment of ML models and improving the reliability of those deployments.”
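Deploying multiple versions of a model side by side, as Saucedo describes, maps onto an ordinary Kubernetes pattern: one labelled Deployment per model version. A minimal sketch follows, again as plain Python dictionaries; the model name, registry URL, versions, and replica counts are hypothetical.

```python
# Sketch: two versions of the same model running side by side as labelled
# Deployments, the pattern underpinning canary rollouts of ML models.
# All names and counts are illustrative.

def model_deployment(name: str, version: str, image: str, replicas: int) -> dict:
    """A Deployment for one version of a model server."""
    labels = {"app": name, "model-version": version}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{name}-{version}", "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": f"{image}:{version}"}]
                },
            },
        },
    }

# The stable version keeps most of the capacity; the candidate gets one
# replica so it can be validated against live traffic before promotion.
stable = model_deployment("churn-model", "v1", "registry.example.com/churn", 9)
canary = model_deployment("churn-model", "v2", "registry.example.com/churn", 1)
```

Because each version carries a `model-version` label, a CI/CD pipeline can shift traffic between them, roll the canary forward, or roll it back, which is the MLops workflow the quote describes applied at the level of individual model versions.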