Modern software architecture relies on cloud native, containerized, distributed applications running on cloud, virtualized, bare metal, and even edge infrastructure. Containerized applications use
resources more efficiently, run in a broad variety of environments, and make it easy to scale up
and down dynamically. As applications scale and become more complex, automation and
orchestration become ever more important.
Kubernetes is the dominant container orchestration tool and continues to strengthen its hold in the
enterprise as companies deploy increasing numbers of clusters across a variety of environments.
Kubernetes operationalizes the management of containerized applications, bringing consistency
across applications and environments—once the cluster is set up. But setting up and maintaining
a Kubernetes cluster is itself a complicated proposition, especially at scale, and the challenges can
differ from one environment to another.
The Cloud Native Computing Foundation’s Kubernetes Cluster Lifecycle special interest group
(SIG) created Cluster API to solve the complex problems of Kubernetes cluster lifecycle
management across environments. Cluster API takes its cue from Kubernetes itself, providing
declarative management (also known as “desired-state management”) capabilities via a
management cluster that oversees the operation of worker clusters. Cluster API controllers manage
Kubernetes infrastructure as objects in the Kubernetes API.
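This declarative model can be sketched with a minimal Cluster API manifest. The names below are hypothetical, and the exact fields vary by infrastructure provider and API version; the point is that an entire workload cluster is described as an ordinary Kubernetes object that controllers reconcile toward:

```yaml
# A hypothetical Cluster object, applied to the management cluster.
# Cluster API controllers reconcile it by provisioning a matching
# workload cluster on the referenced infrastructure provider.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example-workload-cluster   # hypothetical name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: example-control-plane    # hypothetical name
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster               # any supported provider could appear here
    name: example-workload-cluster
```

Applying or editing this object is all an operator does directly; the controllers handle the provider-specific provisioning steps.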
As Kubernetes continues to dominate, organizations will have an increasing need to manage the
growing complexity of larger numbers of deployments, often spanning multiple infrastructure
environments. The following chapters outline the challenges of managing Kubernetes and how
Cluster API can help.
Chapter 1. Why Kubernetes Adoption Is Complex
Modern application design has moved from the creation of huge monoliths to a more flexible
architecture based on microservices running in containers. Containers are small runtime
environments that include the dependencies and configuration files the services need to run.
Containers are the building blocks of the cloud native approach, enabling scalable applications in
diverse environments, including public, private, and hybrid clouds, as well as bare metal and edge
locations.
Beyond the significant advantage of empowering application development teams to work in
parallel on different services without having to update the entirety of an application, the cloud
native model offers a number of advantages over monolithic architecture from an infrastructure
perspective. Containerized applications use resources more efficiently than virtual machines
(VMs), can run in a broader variety of environments, and can be scaled more easily. These
advantages have driven wide adoption of microservice-based architecture, containers, and the
predominant container orchestration platform: Kubernetes.
Kubernetes facilitates the management of these distributed applications, allowing you to scale
dynamically both horizontally and vertically as needed. Containers bring consistency of
management to different applications, simplifying operational and lifecycle tasks. By orchestrating
containers, Kubernetes can operationalize the management of applications across an entire
environment, controlling and balancing resource consumption, providing automatic failover, and
simplifying deployment.
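The behaviors just described follow from Kubernetes's desired-state model. As a sketch, the hypothetical Deployment below asks for three replicas of a containerized service; Kubernetes continuously reconciles toward that count, rescheduling pods if a node fails and using the declared resource requests to balance consumption:

```yaml
# Hypothetical Deployment: the names and image are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # hypothetical microservice name
spec:
  replicas: 3                     # desired state: Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # hypothetical image
        resources:
          requests:               # informs scheduling and resource balancing
            cpu: "250m"
            memory: "128Mi"
```

Scaling horizontally is then a one-field change (`replicas: 5`) rather than a manual deployment procedure.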
Although Kubernetes provides a foundation for resilient and flexible cloud native application
development, it introduces its own complexities to the organization. Running and managing
Kubernetes at scale is no easy task, and the difficulties are compounded by the inconsistencies
between different providers and environments.
Kubernetes Architecture
Kubernetes manages a cluster of physical or virtual servers, called worker nodes, each one of
which hosts containers organized into pods. A separate, smaller number of servers are reserved
as control plane nodes that make up the control plane for the cluster. To support multitenancy, a
Kubernetes cluster offers logical separation between workloads using namespaces—a mechanism
for separating resources based on ownership—to provide a virtual cluster for each team.
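As a sketch of this separation, each team could be given its own namespace; the team names here are hypothetical:

```yaml
# Hypothetical namespaces giving each team its own virtual cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments    # hypothetical team name
  labels:
    owner: payments
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-checkout    # hypothetical team name
  labels:
    owner: checkout
```

Resource quotas and access controls can then be scoped per namespace, so one team's workloads cannot crowd out another's.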
The control plane is the main access point that lets administrators and others manage the cluster.
The control plane also stores state and configuration data for the cluster, tells worker nodes when
to create and destroy containers, and routes traffic in the cluster.
The control plane consists mainly of the following components:
API Server
The access point through which the control plane, worker agents (kubelets), and
users communicate with the cluster
Controller manager
A service that manages the cluster through the API server by running controllers, which bring
the actual state of the cluster in line with the desired specifications
etcd
A distributed key-value store that contains cluster state and configuration