Kubernetes clusters are a fundamental building block of many modern cloud environments. At a high level, each cluster is a set of machines that work together to keep workloads available.
Keep reading for a deeper dive into how Kubernetes clusters work and how to install and manage them.
What Is a Kubernetes Cluster?
A Kubernetes cluster is a set of nodes that hosts workloads. The cluster also hosts Kubernetes itself, meaning it runs the Kubernetes control plane software.
Each node is an independent physical or virtual machine. (A cluster could consist of a mix of physical and virtual machines, although it’s more common to use only VMs or only physical servers.) Some nodes in a cluster operate as masters (now more often called control plane nodes), meaning they host the Kubernetes control plane software that manages the other nodes and the workloads deployed on them. Other nodes are worker nodes, which host the applications that admins want to run via Kubernetes.
Why Are Kubernetes Clusters Important?
Clusters are central to the functionality that Kubernetes provides.
At its core, the main purpose of Kubernetes is to provide a reliable, mostly automated means of managing applications that are deployed across multiple servers at once. Thus, although it’s possible to run Kubernetes on just a single server (meaning you would have a single-node cluster), you can’t really take advantage of its full functionality unless you have a cluster that consists of multiple nodes.
Multi-node clusters allow Kubernetes to schedule workloads in such a way that resource consumption is balanced between servers. They also let Kubernetes move workloads to a different node in the event that one node starts to fail or becomes unavailable.
How Many Nodes Are in a Cluster?
The total number of nodes in a cluster can range from just one up to 5,000 (the current official node limit, although in practice it may become difficult to manage nodes reliably once you surpass about 500).
Single-node clusters are typically used only for experimental purposes. For example, if you are just getting started with Kubernetes and are using a lightweight distribution like Minikube, you’ll typically create a single-node cluster to start.
For a production environment, however, you’ll almost certainly want at least several nodes in your cluster, and possibly hundreds or thousands. The total number of nodes you should create for a cluster depends on how many workloads you need to run. Consider, too, how many resources those workloads will require, and how reliable each node is.
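As a rough illustration of that sizing math, you can estimate a minimum node count from aggregate resource requests. This is only a sketch: the workload figures, node sizes, headroom factor, and spare-node count below are all hypothetical values you would replace with your own.

```python
# Rough, hypothetical sizing sketch: estimate how many worker nodes a
# cluster needs based on aggregate workload resource requests.
import math

def estimate_node_count(total_cpu_request, total_mem_gib,
                        node_cpu, node_mem_gib,
                        headroom=0.7, spare_nodes=1):
    """Return a minimum node count, treating only `headroom` of each
    node as usable (the rest covers system overhead) and adding spare
    nodes so workloads can be rescheduled if a node fails."""
    by_cpu = math.ceil(total_cpu_request / (node_cpu * headroom))
    by_mem = math.ceil(total_mem_gib / (node_mem_gib * headroom))
    return max(by_cpu, by_mem) + spare_nodes

# Example: 40 vCPUs and 120 GiB of requests, on 8-vCPU / 32-GiB nodes.
print(estimate_node_count(40, 120, 8, 32))  # -> 9
```

The point of the headroom and spare-node terms is the reliability consideration mentioned above: a cluster sized exactly to its workloads has nowhere to move pods when a node fails.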
How to Set Up a Kubernetes Cluster
The exact process for setting up a Kubernetes cluster depends on which Kubernetes distribution you are using and where it is being deployed. In general, however, the steps include:
- Create your nodes: Start by setting up the physical servers or virtual machines that will serve as nodes.
  - Each node needs to be provisioned with an operating system, which in most cases should be some form of Linux.
  - Each node also requires a container runtime, which is the software that executes containers (and that, by extension, allows you to run pods).
- Deploy the Kubernetes control plane: On your master node or nodes, deploy the components of the Kubernetes control plane – the API server, the scheduler, the etcd key-value store, and so on.
- Join worker nodes to your cluster: Add worker nodes to your cluster by connecting them to your master node. Typically, you would do this using the kubeadm join command on whichever nodes you want to connect.
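On a kubeadm-based installation, the steps above map roughly onto commands like the following. This is a sketch only: the pod network CIDR is an assumption, and the address, token, and hash in the join command are placeholders that `kubeadm init` prints for you.

```shell
# On the machine that will become the control plane (master) node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for your user, as kubeadm's output instructs:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (CNI plugin) of your choice.

# Then, on each worker node, join the cluster using the token that
# `kubeadm init` printed (the values below are placeholders):
sudo kubeadm join <control-plane-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```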
The Easiest Way to Install a Kubernetes Cluster
If those steps sound intimidating, it’s because they are. Setting up a production-grade Kubernetes cluster is a lot of work, even for seasoned admins.
Fortunately, however, there are ways to simplify cluster setup. You can either use a lightweight distribution that automates most node and cluster setup tasks, or run Kubernetes using a managed service in the cloud that automatically creates the cluster for you.
Let’s take a look at both of these approaches.
Setting Up Lightweight Kubernetes Clusters
For example, with Minikube, you can create a cluster with this command:
minikube start
The command both creates and starts your cluster. By default, Minikube creates a single-node cluster; depending on which Minikube driver you use, the node runs either as a VM or as the same machine where you are running Minikube.
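If you want to experiment with more than one node locally, recent Minikube releases also let you request a node count at startup (the count here is arbitrary):

```shell
minikube start --nodes 2
```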
Setting Up Kubernetes Clusters in the Cloud
Most cloud-based Kubernetes services – such as Amazon Elastic Kubernetes Service, Azure Kubernetes Service, and Google Kubernetes Engine – can automatically set up clusters for you using cloud-based VM instances.
For example, to create a cluster in the Amazon cloud, run this command using Amazon’s Kubernetes CLI management tool, eksctl:
eksctl create cluster
By default, the command creates a two-node cluster. The nodes are EC2 instances running in whichever AWS cloud region you use by default.
You don’t have to use automated cluster setup in conjunction with cloud-based Kubernetes services. In most cases, you can create and manage nodes manually if you wish. However, taking advantage of automated cluster setup in the cloud is one of the easiest ways to get a cluster up and running quickly. The downside, of course, is that you have to pay for whichever cloud resources your nodes consume.
Multi-Cluster Kubernetes
In its early days, Kubernetes was designed with the expectation that each environment would include just one cluster.
Today, however, there is increasing interest in multi-cluster Kubernetes setups. In multi-cluster Kubernetes, a single control plane manages more than one cluster. In other words, you can operate distinct clusters of servers, with each cluster isolated from the others, while relying on only one set of control plane software to manage them all.
The advantage of multi-cluster environments is that they provide maximum isolation between workloads. They minimize the risk that a security or stability issue in one cluster could spill over and disrupt workloads hosted in another cluster.
In addition, multi-cluster setups can simplify administrative complexity for organizations that have distinct clusters of servers, and that want to centralize the management of them. For example, if you have multiple data centers, you could operate each data center as a distinct cluster while managing them all via one control plane. When done well, this setup would result in less administrative burden than having to deploy and manage a different control plane for each data center.
The downside of multi-cluster Kubernetes, however, is that it is considerably more difficult to set up and manage than single-cluster configurations. In particular, multi-cluster presents two main challenges:
- Establishing a consistent way to manage networking resources for distinct clusters. This is especially challenging if your clusters are in different physical sites, and therefore operating on totally different networks with different IP address ranges, routes, and so on.
- Finding a way to keep the control plane data synced across multiple clusters. This, too, is particularly challenging when you have clusters in different physical sites, because network latency makes it difficult for the control plane components that track each cluster to remain in perfect sync. Discrepancies of even fractions of a second can cause problems if they leave the control plane unsure of the actual state of the overall environment.
These challenges can be adequately addressed using tools like Virtual Kubelet. Still, they pile a considerable layer of complexity on top of the complexity that is already inherent to Kubernetes. For this reason, multi-cluster setups are not for the faint of heart.
Unless you have a specific need for multiple clusters, a simpler approach is to configure multiple namespaces and use those to isolate workloads. Namespaces are much easier to set up and manage in Kubernetes than are multiple clusters.
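As a sketch of the namespace approach (the namespace name and quota values here are hypothetical), a manifest like the following isolates a team's workloads and caps what they can consume, all within a single cluster:

```yaml
# Hypothetical sketch: isolating workloads with a namespace and a
# resource quota instead of a separate cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"       # cap aggregate CPU requests
    requests.memory: 20Gi    # cap aggregate memory requests
```

Applying this with `kubectl apply` gives you lighter-weight isolation than a separate cluster, with the caveat that namespaces still share the same control plane and the same nodes.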
Conclusion
There are many ways to set up and manage clusters in Kubernetes. If you’re new to Kubernetes and just want to experiment, consider spinning up a single-node cluster on your PC or laptop. For production workloads, however, you’ll want to plan your cluster architecture and total node count carefully, based on your workload requirements.