Switching from containerd to cri-o

For the last three days I tried to install Kubernetes 1.25.4 on a Debian 11 (Bullseye) box, without success. The problem was that the kubeadm init process always hung with the following message:

....
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1121 08:17:12.320743    8096 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.25.4 (linux/amd64) kubernetes/fdc7750" 'https://x.x.y.y:6443/healthz?timeout=10s'
I1121 08:17:12.321047    8096 round_trippers.go:508] HTTP Trace: Dial to tcp:x.x.y.y:6443 failed: dial tcp x.x.y.y:6443: connect: connection refused
....

Even though I tried several different tutorials and guides, I failed to solve this issue. (See also here)
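
If you run into the same symptom, watching the kubelet logs and checking the API server port is a quick way to confirm that the control plane never comes up. This is just a diagnostic sketch, not part of the original troubleshooting:

$ sudo journalctl -u kubelet -f        # watch the kubelet logs while kubeadm init is running
$ sudo ss -tlnp | grep 6443            # check whether the API server is listening on port 6443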

Using cri-o instead of containerd…

Kubernetes supports different container runtimes; containerd is only one of them. Maybe containerd and Debian 11 are just not the best of friends. I don’t know…

cri-o is an alternative lightweight container runtime for Kubernetes. After I switched from containerd to cri-o, everything worked like a charm. So here is my short guide on how to install cri-o on a fresh Debian 11 box.

Note: If you have already installed containerd you need to remove it first!
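
A minimal cleanup could look like the following, assuming containerd was installed from the standard Debian repositories (if it came from Docker’s repository, the package is called containerd.io instead):

$ sudo systemctl stop containerd
$ sudo apt-get remove --purge -y containerd
$ sudo rm -rf /etc/containerd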

Install cri-o on Debian 11

As usual for Kubernetes, first make sure that you have loaded the necessary kernel modules and set up the required sysctl/iptables settings:

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sudo sysctl --system
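
The modprobe calls only load the modules for the current boot. To load them automatically after a reboot you can additionally register them for systemd-modules-load (the file name k8s.conf is just a convention):

$ sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF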

Next you need to add the package repositories hosted on opensuse.org:

$ sudo -i
$ OS=Debian_11
$ VERSION=1.23

$ echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

Now you can install cri-o from the newly added repositories:

$ sudo apt update
$ sudo apt-get install -y cri-o cri-o-runc
$ sudo apt-mark hold cri-o cri-o-runc

# Start and enable CRI-O
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now

That’s it. To verify that your cri-o runtime is up and running, call:

$ sudo systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-11-21 15:14:44 UTC; 21min ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 2372 (crio)
      Tasks: 12
     Memory: 770.4M
        CPU: 28.268s
     CGroup: /system.slice/crio.service
             └─2372 /usr/bin/crio

Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.035184530Z" level=info msg="Created container 0878bbfc957e8a7fb069b83a9101c9386d0bee5ea14c10c81ea0379c14cd3240: kube>
Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.038925952Z" level=info msg="Starting container: 0878bbfc957e8a7fb069b83a9101c9386d0bee5ea14c10c81ea0379c14cd3240" id>
Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.049779604Z" level=info msg="Started container" PID=9385 containerID=0d50842810cb5a0632b137a16e1d29845f4dc3cb9e8e8fc3>
Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.063878077Z" level=info msg="Started container" PID=9383 containerID=0878bbfc957e8a7fb069b83a9101c9386d0bee5ea14c10c8>
Nov 21 15:22:56 master-1 crio[2372]: time="2022-11-21 15:22:56.803994649Z" level=info msg="Checking image status: registry.k8s.io/pause:3.8" id=2521c53d-4b83-4b95-8501-363d83ac149>
Nov 21 15:22:56 master-1 crio[2372]: time="2022-11-21 15:22:56.804517064Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4873874c08efc72e9729683a83ffbb7502ee7>
Nov 21 15:27:56 master-1 crio[2372]: time="2022-11-21 15:27:56.810307321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.8" id=4b38e75e-8e03-4007-8554-323eb6c404a>
Nov 21 15:27:56 master-1 crio[2372]: time="2022-11-21 15:27:56.810772581Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4873874c08efc72e9729683a83ffbb7502ee7>
Nov 21 15:32:56 master-1 crio[2372]: time="2022-11-21 15:32:56.814927245Z" level=info msg="Checking image status: registry.k8s.io/pause:3.8" id=ba81dd6e-491d-43ed-a114-5b69a980569>
Nov 21 15:32:56 master-1 crio[2372]: time="2022-11-21 15:32:56.815618456Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4873874c08efc72e9729683a83ffbb7502ee7
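
If you also have crictl installed (it is usually shipped in a separate cri-tools package), you can query the runtime directly over its CRI socket. This is an optional check, not part of the original guide:

$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info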

Now you can initialize your Kubernetes cluster:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 
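
If kubeadm still finds the old containerd socket next to the CRI-O one, it may refuse to auto-detect the container runtime. In that case you can point it at the CRI-O socket explicitly (a sketch, assuming the default socket path /var/run/crio/crio.sock):

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/crio/crio.sock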

kubectl get nodes error: You must be logged in to the server

Today I got the following error message when trying to run kubectl on my Kubernetes Cluster:

$ kubectl get pods
error: You must be logged in to the server (Unauthorized)

This issue can happen after the Kubernetes certificates have been renewed and is caused by the existing ~/.kube/config still containing the outdated keys and certificates.

Kubernetes renews the certificates automatically, so you need to update your local copy too. You can check the validity period of your Kubernetes API server certificate with:

$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Jan  8 21:13:17 2021 GMT
            Not After : Nov 13 14:46:01 2022 GMT
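
On recent kubeadm versions you can also get an overview of all cluster certificates and their expiration dates with:

$ sudo kubeadm certs check-expiration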

When running kubectl on the server itself, you can simply replace your ~/.kube/config file with the latest admin config from the server:

$ cp .kube/config .kube/config_old
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
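
Afterwards kubectl should be able to authenticate against the API server again:

$ kubectl get nodes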

Monitoring Your Kubernetes Cluster the Right Way

Monitoring a Kubernetes cluster does not seem so difficult when you look at the hundreds of blogs and tutorials. But there is a problem: the dynamic and rapid development of Kubernetes. You will find many blog posts describing a setup that may no longer work properly in your environment. This is not because the author provided a bad tutorial, but simply because the article may be older than a year. Many things have changed in Kubernetes, and the area of metrics and monitoring is affected particularly often.

For example, you will find many articles describing how to set up the cAdvisor service to get container metrics. But this technology has since become part of the kubelet, so an additional installation should no longer be necessary and, in the worst case, can even lead to incorrect metrics. The many Grafana dashboards for displaying metrics have also evolved; older dashboards are usually no longer suitable for a new Kubernetes environment.

Therefore, in this tutorial I would like to show how to set up monitoring correctly for the current Kubernetes version 1.19.3. Of course, this blog post will also become outdated after some time. So be warned 😉

Continue reading “Monitoring Your Kubernetes Cluster the Right Way”