What is Kubernetes?

Kubernetes is an open-source platform used to orchestrate and manage containers and containerised applications. It allows for the automation of key elements of container management, including scaling, scheduling, monitoring and container replication. It’s used by a huge range of organisations and developers to streamline the management of containerised workloads.

Kubernetes was originally developed by Google and was first announced in 2014. It evolved from the code Google used to manage its data centres and is heavily influenced by Borg, Google’s internal cluster manager. Kubernetes version 1.0 was released in 2015 as an open-source seed technology. It is now managed by the Cloud Native Computing Foundation (CNCF), part of the Linux Foundation, which was founded alongside the initial release of Kubernetes. Kubernetes is written in Go, a programming language developed by Google.

Today, Kubernetes powers many platform-as-a-service and infrastructure-as-a-service solutions and is one of the most popular container management systems. As an open-source platform it has a diverse and active community, and it is often abbreviated to ‘K8s’ on account of the eight letters between the ‘K’ and the ‘s’.

This guide explores what Kubernetes is, the basics of containers and components, the benefits it brings to developers, and the ways it is used today.

What are containers?

Kubernetes is a container orchestration platform, so it’s useful to first understand exactly what a container is. Containers are a form of operating system virtualisation and are used as an environment to develop and deploy applications. Each container runs in isolation from the overall operating system and wider infrastructure. This means containers can run in a range of environments and move between different cloud platforms and local servers.

A container includes all the code and dependencies an application needs to function properly, providing a consistent environment regardless of the underlying operating system or infrastructure. The same environment can be reproduced wherever the container runs, from local servers to the cloud.

Containers are considered lightweight and are not as resource-intensive as traditional virtual machines. Developers often split an application across numerous containers, with distinct parts of the software in different containers. Each can be updated or deployed piecemeal, instead of having to update the whole application. This means less overall downtime, but it can also mean a huge array of containers that need to be managed, especially within a complex system.

That’s where Kubernetes comes in. It’s a platform for operating and managing containers, helping developers build, scale and monitor containerised services.

What is Kubernetes used for?

The use of containers in application development is becoming increasingly popular because of the scalability and elasticity they bring. Modern containerised applications are increasingly complex and may be deployed across different servers and environments. Operating and managing these containerised applications becomes more and more resource-intensive.

Kubernetes is used to streamline and automate the management of these containers, controlled through an open-source API. The platform is used to orchestrate clusters of containers, maintaining and scaling resources to achieve the desired state. Kubernetes automates tasks like load balancing, allocation of containers, scaling of resources, and the replication of containers. This lowers the demand on developers, who can concentrate on the software and applications.
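
For example, a developer typically declares the desired state in a manifest that Kubernetes then works to maintain. The sketch below is a minimal Deployment asking for three replicas of a web container; the names and the nginx image are illustrative only, not taken from any specific system.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend            # hypothetical name for illustration
    spec:
      replicas: 3                   # desired state: three identical pods
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: nginx:1.25       # example image
            ports:
            - containerPort: 80

Kubernetes continually compares the number of running pods against the three declared replicas and starts or removes containers to close any gap.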

The use of Kubernetes is widespread, and may include:

  • Automating web server provision, scaling up and down depending on demand.
  • Cloud web server hosting and management. Many of the main public cloud providers, such as AWS, Google Cloud and Azure, support Kubernetes.
  • Running web or mobile software and applications in any environment.
  • Deployment and development of containerised applications.
  • Containerised web servers as part of a data centre.
  • Machine learning development and deployment.
  • High performance computing.

Kubernetes and machine learning

Kubernetes is valuable to organisations researching and developing machine learning and artificial intelligence solutions. The open-source platform has a toolkit specifically for machine learning called Kubeflow. Portability, scaling, security and scheduling are core Kubernetes features that make the platform attractive to machine learning developers.

Machine learning often requires high levels of compute resource, so the scalability of containerised development is valuable. Developers can harness GPU acceleration within containers, powering experiments and machine learning training at scale.
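
As a rough sketch of GPU access, the pod below requests a single GPU through the extended resource advertised by the NVIDIA device plugin; the pod name and training image are hypothetical, and the nvidia.com/gpu resource is only available on clusters where that plugin is installed.

    apiVersion: v1
    kind: Pod
    metadata:
      name: ml-training             # hypothetical pod name
    spec:
      restartPolicy: Never          # run the training job once, then stop
      containers:
      - name: trainer
        image: registry.example.com/ml/trainer:latest   # placeholder image
        resources:
          limits:
            nvidia.com/gpu: 1       # assumes the NVIDIA device plugin is installed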

What are the main Kubernetes components?

Kubernetes orchestrates clusters of containers and is managed through developer input via an API connected to the control plane. Kubernetes consists of distinct components which work together to achieve the desired system state.

The basic architectural components of Kubernetes include:

  • A Kubernetes pod, which is the smallest unit. A pod is a group of one or more containers.
  • A Kubernetes node that contains everything needed to run an application container.
  • A Kubernetes cluster that groups the nodes and containers that make up the application.

What is a Kubernetes pod?

The smallest unit of deployment is a Kubernetes pod, which is a group of one or more containers. Pods are managed through the API or orchestrated automatically by the control plane. A deployment file outlines the pod configuration.
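
A minimal pod manifest might look like the sketch below; the pod name, labels and container image are purely illustrative.

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod               # hypothetical pod name
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25           # example container image
        ports:
        - containerPort: 80

Applying a file like this with kubectl apply -f creates the pod, although in practice pods are usually created indirectly through higher-level objects such as Deployments.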

What is a Kubernetes node?

Kubernetes nodes (also called workers or minions) are container hosts and contain everything needed to run an application container. Each node runs a kubelet, a node agent which reports the running state of the pods on the node to the control plane. The kubelet communicates with the cluster controller and can also start and stop containers within the pods as instructed by the control plane.

What is a Kubernetes cluster?

A Kubernetes cluster is the group of nodes and containers that make up the application. It is a term for the overall group of nodes running containerised software or applications, and it includes everything needed to run the application. Clusters are controlled by the control plane, where developers can configure cluster services through the API.

The main controlling part of the cluster is the control plane, which runs on a master node (separate from the worker nodes). The master node controls the cluster state and task assignments; the desired state will often define the application and the resources required to run it. Developers interact with the master node through the Kubernetes API within the control plane to configure workloads, cluster state and task assignments. A Kubernetes cluster can be deployed across virtual or physical machines.

Kubernetes clusters are made up of different components, including:

  • The API server, which acts as the interface to the control plane.
  • A scheduler, which watches for pods that have not yet been placed and assigns them to nodes with the required resources.
  • A controller manager, which monitors the cluster against the intended state and runs controllers for elements like nodes and endpoints.
  • The kubelet, which monitors and manages the containers within pods and ensures they’re healthy and functional.
  • kube-proxy, which maintains network connections and rules within nodes.
  • etcd, a key-value store which holds the cluster data.

Benefits of Kubernetes

Kubernetes manages the orchestration of containerised applications and software, bringing a range of benefits to the process. Kubernetes is one of the most popular container management platforms today, and for good reason. As an open-source platform with an invested community, Kubernetes continues to evolve and grow.

The main benefits of Kubernetes include:

  • Portability of containers over different environments and machines.
  • Scalability of resources and capacity.
  • A strong open-source community.
  • Automation of container control increases efficiency.
  • In-built security, including secure storage of sensitive information.

Portability of containers

Kubernetes makes it straightforward to manage containers across different environments and machines, providing a way to achieve a consistent state or service across different servers. This keeps application behaviour consistent and portable, so workloads can be moved between cloud environments and local servers.

Scalability of services

Kubernetes can make scalability straightforward, automating the allocation of extra resources when demand increases. Kubernetes actively monitors the current state of containers against the desired state, and will automatically scale to meet surges in demand.

This is particularly useful for web and mobile applications and container-based web servers. For example, applications can scale elastically in response to changes in web traffic. Complex applications can be deployed across multiple local or cloud servers, which makes scaling services more cost-efficient.
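
One way to express that elasticity is a HorizontalPodAutoscaler. The sketch below assumes a Deployment named web-frontend already exists and that a metrics source is available; it scales between two and ten replicas based on average CPU utilisation.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-frontend-hpa        # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-frontend          # assumed existing Deployment
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70  # add replicas when average CPU exceeds 70%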

A strong community

Containers are becoming a mainstay of application development and delivery, bringing flexibility and efficiency to the development process. As Kubernetes has been adopted by so many companies and developers, the platform has grown organically, backed by a community of experts. It is one of the most popular container orchestration platforms and has an active community that builds and improves capabilities and extensions.

Automation and efficiency

Kubernetes automatically manages clusters of containers. The platform maintains storage, monitors container health, automatically scales resources, and maintains networking. It removes much of the manual work involved in scaling and deploying containers, freeing up resources and time for the development team.

Automating the control of workload performance makes maintenance much more straightforward. Containerised applications can also mean less downtime, as specific nodes or parts of the overall service can be updated or amended without having to take down the whole application.
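
A rolling update strategy is one way this plays out in practice. The hypothetical Deployment below tells Kubernetes to replace pods gradually, keeping most replicas serving traffic throughout the update; all names and images are illustrative.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend            # hypothetical name
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1         # at most one pod down at any moment
          maxSurge: 1               # at most one extra pod during the rollout
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: nginx:1.26       # changing this tag triggers a rolling update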

In-built security

Kubernetes has an in-built way of storing sensitive data and information, in the form of a Kubernetes ‘Secret’. It is a secure storage environment that provides sensitive information to the system when required but isolates it by default.

Kubernetes Secrets can securely store information like passwords or SSH keys. Pods can access the information contained within Secrets when they need it to function, which is more secure than storing the sensitive information within the pod specification itself. There are different built-in types of Secret with varying usage constraints or required credentials.
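
A minimal sketch, assuming a hypothetical database password, might look like the following: a Secret holds the value, and a pod references it as an environment variable rather than embedding it in the image or pod specification.

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials          # hypothetical Secret name
    type: Opaque                    # generic built-in Secret type
    stringData:
      password: change-me           # placeholder value
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: db-client               # hypothetical pod name
    spec:
      containers:
      - name: app
        image: registry.example.com/db-client:latest   # placeholder image
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password         # injected at runtime, not stored in the pod spec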

Machine learning deployment for every organisation

Seldon moves machine learning from POC to production to scale, reducing time-to-value so models can get to work up to 85% quicker. In this rapidly changing environment, Seldon can give you the edge you need to supercharge your performance.

With Seldon Deploy, your business can efficiently manage and monitor machine learning, minimise risk, and understand how machine learning models impact decisions and business processes. This means you know your team has done its due diligence in creating a more equitable system while boosting performance.

Deploy machine learning in your organisation effectively and efficiently. Talk to our team about machine learning solutions today.
