- When installing with kubeadm, the kubelet is managed as a systemd service. The service
configuration files are stored under /usr/lib/systemd/system/kubelet.service.d/
- Once running, the kubelet applies any static Pod manifest found in /etc/kubernetes/manifests.
Log file locations
Cluster installed with systemd
- journalctl -u kubelet
Cluster not installed with systemd
- /var/log/kube-*.log
Container logs: /var/log/containers
Pod logs: /var/log/pods
Control plane components (manifests in /etc/kubernetes/manifests)
1. Kube-apiserver
a. Its static Pod manifest holds the API server flags, including the Service IP range
(--service-cluster-ip-range)
2. Etcd
a. The version can be retrieved by querying the binary directly from inside the pod (etcd --version)
3. Kube-controller-manager
4. Kube-scheduler
To stop a control plane component temporarily, move its manifest out of this folder; the kubelet
removes the corresponding static Pod, and moving the manifest back recreates it. For example:
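A minimal sketch, assuming the default kubeadm layout (the scheduler is used here as the example component):
# Stop the scheduler by moving its manifest out of the watched folder
sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
# ...perform maintenance...
# Move it back; the kubelet recreates the static Pod
sudo mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/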
Worker node components
1. Kubelet (systemd service)
a. Accepts pod specs
b. Runs static Pods. Static Pods are managed directly by the kubelet daemon on a
specific node, without the API server observing them. Unlike Pods that are managed
by the control plane (for example, by a Deployment), the kubelet itself watches each
static Pod and restarts it if it fails. The folder where static Pod manifests are expected
is specified in the kubelet configuration (staticPodPath in the config file under
/var/lib/kubelet/). Static Pods running on a node appear in the API with the node
name as a suffix (see the checks after this list).
c. The kubelet client-side TLS certificate is stored in /var/lib/kubelet/pki,
together with the server-side certificate and key.
2. Kube-proxy (daemonset running in the kube-system namespace)
a. Implements Service traffic routing on every node through iptables (or IPVS) rules
b. Monitors Services and Endpoints, and exposes pods on ports allocated from the NodePort range when required
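A few quick checks on a node (paths assume a kubeadm install):
# Where the kubelet looks for static Pod manifests
grep staticPodPath /var/lib/kubelet/config.yaml
# Kubelet client/server TLS material
ls /var/lib/kubelet/pki/
# kube-proxy runs as a DaemonSet in kube-system
kubectl -n kube-system get daemonset kube-proxy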
Misc
- Init containers run to completion before any other pod container starts; the regular containers are then started in parallel
- Containers of the same pod can communicate through the loopback interface
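A minimal Pod sketch illustrating both points (names and images are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  initContainers:
  - name: init-setup              # runs to completion before the app containers start
    image: busybox
    command: ["sh", "-c", "echo init done"]
  containers:
  - name: web                     # nginx listens on port 80
    image: nginx
  - name: sidecar                 # shares the pod network namespace, can reach nginx via localhost:80
    image: busybox
    command: ["sh", "-c", "sleep 3600"]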
Cluster configuration
Install the control plane
Takeaways:
- Install kubeadm, kubectl and kubelet.
- Reset existing installations if needed
- Add other nodes after installing the network plugin
- Kubeadm, kubectl, and kubelet are necessary to initialize a control-plane node. They must be
installed at the same version, which can then be pinned with sudo apt-mark hold
<package_name>
- Kubeadm reads its configuration from the "kubeadm-config" ConfigMap in the kube-system namespace
- A basic kubeadm YAML configuration can be submitted to initialize a cluster (a sample is
sketched after this list): kubeadm init --config=<config-file> --upload-certs --node-name=cp
- kubeadm config print init-defaults can be used to print the configuration
kubeadm would use without a configuration file.
- Kubeadm can also be used without any configuration file.
o kubeadm init --pod-network-cidr=<cidr> --apiserver-advertise-address=<api-server-address>
o kubeadm init
▪ Will use:
• Pod network CIDR: not set (you must configure it via a network
plugin)
• API server bind port: 6443
• Default container runtime: containerd (if installed)
• Default image repository: registry.k8s.io
• Default Kubernetes version: latest stable installed on the system
o Kubeadm init automatically outputs the join command for a worker node
- The kubeconfig has to be copied to the .kube folder in the user's home directory. The original
file is stored at /etc/kubernetes/admin.conf (see the commands after this list).
- kubeadm reset --force should be run to wipe out a previous cluster installation.
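A minimal configuration-file sketch; the version, endpoint name, and pod CIDR below are assumptions to adjust to your environment:
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.29.0
controlPlaneEndpoint: "k8scp:6443"    # assumed load-balancer / hosts entry
networking:
  podSubnet: 10.244.0.0/16            # example pod CIDR, must match the CNI plugin
EOF
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs --node-name=cp
Afterwards, copy the admin kubeconfig as printed by kubeadm init:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config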
Join worker nodes
- Worker nodes need to be initialized after the CNI has been installed; otherwise they will appear
in NotReady status
- Other worker nodes can be added with tokens generated by the control plane:
kubeadm token create --print-join-command
- A name for the new node can be added to this command with --node-name.
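The generated command looks like the following sketch (endpoint, token, and hash are placeholders):
sudo kubeadm join k8scp:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash> --node-name=worker-1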
Join cp nodes
- This requires a stable controlPlaneEndpoint address (i.e. a load balancer).
- Generate a new certificate key:
kubeadm certs certificate-key
This is needed because cp certificates are tied to the nodes' hostnames.
- Upload the control-plane certificates to the kubeadm-certs Secret so that kubeadm can
automatically copy them to the new node:
kubeadm init phase upload-certs --upload-certs
- Generate a new token with kubeadm:
kubeadm token create --print-join-command
- Execute the join command with the --control-plane and --certificate-key flags,
passing the new certificate key after the corresponding flag.
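Putting it together, joining a new control plane node looks roughly like this (values are placeholders):
# On an existing control plane node
sudo kubeadm init phase upload-certs --upload-certs     # prints the certificate key
kubeadm token create --print-join-command               # prints the base join command
# On the new control plane node
sudo kubeadm join k8scp:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>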
ETCD backup
Takeaways:
- Check cluster status
- Take a snapshot
- Stop the kubelet
- Restore the snapshot
- Find the data directory of the ETCD configuration
grep data-dir /etc/kubernetes/manifests/etcd.yaml
- Check the database health (from inside the etcd pod: kubectl -n kube-system exec -it etcd-controlplane -- sh)
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key \
etcdctl member list -w table --endpoints=https://127.0.0.1:2379
- Save an ETCD snapshot
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key \
etcdctl snapshot save /var/lib/etcd/snapshot.db --endpoints=https://127.0.0.1:2379
- If any API servers are running in your cluster, you should not attempt to restore instances of
etcd. Instead, follow these steps to restore etcd:
o stop all API server instances (systemctl stop kubelet)
o restore state in all etcd instances
o restart all API server instances
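A restore sketch; the target data directory below is an assumption, and on newer etcd releases etcdutl replaces etcdctl for restores:
# Restore the snapshot into a fresh data directory
etcdctl snapshot restore /var/lib/etcd/snapshot.db --data-dir=/var/lib/etcd-restore
# Point the etcd static Pod at the restored directory (hostPath and --data-dir in
# /etc/kubernetes/manifests/etcd.yaml), then start the kubelet again
sudo systemctl start kubelet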