Ceph Pacific running on Debian 11 (Bullseye)

In this tutorial I will explain how to set up a Ceph cluster on Debian 11. The Linux distribution is not as relevant as it sounds, but for the latest Ceph release, Pacific, I am also using the latest Debian release, Bullseye.

In contrast to my last tutorial on how to set up Ceph, I will focus a bit more on the network. Understanding and configuring the Ceph network options ensures optimal performance and reliability of the overall storage cluster. See also the latest configuration guide from Red Hat.
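As a small preview (the subnets here are only placeholders), the public network and the internal cluster network can be assigned with the ceph config command:

# example only - separate the public network from the internal cluster network
$ ceph config set global public_network 10.0.0.0/24
$ ceph config set global cluster_network 192.168.0.0/24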

Continue reading “Ceph Pacific running on Debian 11 (Bullseye)”

Kubernetes, Ceph and Static Volumes

Ceph is an open source distributed storage system that integrates perfectly with Kubernetes. With the Ceph CSI plugin you can connect a Ceph cluster to your Kubernetes cluster in a well-designed way. In one of my last posts I gave a short tutorial on how to set up a Ceph cluster on Debian. Also take a look at the Imixs-Cloud project.

Static Persistent Volumes

When we talk about Kubernetes and persistent volumes, you will often find examples working with a so-called storage class and dynamic persistent volumes. In this concept a persistent volume is provisioned automatically by the Kubernetes CSI adapter and you do not need to think much about how this works. But this kind of persistent volume is not durable, which means that if you delete the corresponding persistent volume claim (for example when removing your application), the volume and all the data your container wrote so far will be lost. To avoid this, you need a so-called static persistent volume. Such a persistent volume is marked with the reclaim policy 'Retain':

persistentVolumeReclaimPolicy: Retain

This means the volume will not be deleted when the POD is removed or updated.

To set up a static persistent volume with Ceph, two steps are necessary. First you need to create the RBD image on your Ceph cluster. This can be done from the Ceph web admin interface or from the command line tool:

# rbd create test-image --size=1024 --pool=kubernetes --image-feature layering
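To double-check the result, the image and its enabled features can be inspected afterwards (shown here for the pool and image name from the example above):

$ rbd info kubernetes/test-image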

Next you can define the corresponding Kubernetes PersistentVolume object referring to this RBD image:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-static-pv
spec:
  volumeMode: Filesystem
  storageClassName: ceph
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: csi-rbd-secret
      namespace: ceph-system
    volumeAttributes:
      clusterID: "<clusterID>"
      pool: "kubernetes"
      staticVolume: "true"
      # The imageFeatures must match the features of the created ceph image exactly!
      imageFeatures: "layering"
    volumeHandle: test-image

Replace <clusterID> with the ID of your Ceph cluster. Note: a storage class is also needed here to identify the Ceph nodes. Find more details here.
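The storage class itself can stay very simple. A minimal sketch, assuming the same secret, pool and cluster ID as in the persistent volume above, could look like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: "<clusterID>"
  pool: "kubernetes"
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-system
reclaimPolicy: Retain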

Resizing Static Persistent Volumes

So far everything works fine using Ceph for static persistent volumes. But it becomes a little bit tricky if you need to resize an image. Imagine you are running a database and the storage you need exceeds the size you planned in the beginning.

In this case you first need to resize the Ceph image. This can be done easily from the Ceph web admin interface or from the command line tool:

# rbd resize --pool=kubernetes --image=test-image --size=2048

But the problem is that after you delete and redeploy your POD in Kubernetes, it will still see the old disk size. This happens because the Ceph CSI plugin does not support automatic resizing of static volumes.

If you are using the fsType ext4 (as in my example), you can run the resize2fs command from within your POD to give your container the correct new size:

# resize2fs /dev/rbd[number]

You need to replace [number] with the number of the RBD device mounted within your POD. You can check the device with the command df -h.
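Put together, the sequence inside the POD looks roughly like this (assuming the image shows up as /dev/rbd0):

# find the mounted rbd device
$ df -h | grep rbd
# grow the ext4 filesystem to the new image size (the device name is an example)
$ resize2fs /dev/rbd0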

Note: The command will only work if the resize2fs tool is installed in your container (which is, for example, the case for the official PostgreSQL image). It is also important that your POD runs with the securityContext privileged=true:

          volumeMounts:
            - name: volume-to-resize
              mountPath: /var/lib/data
          securityContext:
            privileged: true

Using a Kubernetes Job

As an alternative to executing the resize2fs command manually, you can also run a simple Kubernetes job to resize your RBD images automatically.

# This job can be used to resize an ext4 filesystem
# aligned to the given size of the underlying RBD image.
apiVersion: batch/v1
kind: Job
metadata:
  name: ext4-resize2fs
spec:
  template:
    spec:
      containers:
        - name: debian
          image: debian
          command: ["/bin/sh"]
          args:
            - -c
            - >-
                echo '******** start resizing block device ********' &&
                echo ...find rbd mounts to be resized... &&
                df | grep /rbd &&
                DEVICE=`df | grep /rbd | awk '{print $1}'` &&
                echo ...resizing device $DEVICE ... &&
                resize2fs $DEVICE &&
                echo '******** resize block device completed ********'
          volumeMounts:
            - name: volume-to-resize
              mountPath: /tmp/mount2resize
          securityContext:
            privileged: true
      volumes:
        - name: volume-to-resize
          persistentVolumeClaim:
            claimName: test-pg-dbdata
      restartPolicy: Never
  backoffLimit: 1

Make sure that the PV and PVC objects exist before you run the job. Replace the claimName with the name of the PVC to be resized.
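For reference, a PersistentVolumeClaim bound to the static volume defined above might look like this (a sketch; adjust names and size to your setup):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pg-dbdata
spec:
  storageClassName: ceph
  volumeName: rbd-static-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

The job itself is then applied as usual: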

$ kubectl apply -f resize2fs.yaml

If you have any comments please post them here.

Monitoring Web Servers Should Never be Complex

If you run several web servers in your organisation, or even public web servers on the internet, you need some kind of monitoring. If your servers go down for some reason, this is no fun for your colleagues, your customers, or yourself. For that reason we use monitoring tools, and there are a lot of them available, providing all kinds of features and concepts. For example, you can monitor the behaviour of your applications, the hardware usage of your server nodes, or even the network traffic between servers. One prominent solution is the open source tool Nagios, which allows you to monitor hardware in every detail. In Kubernetes environments you may use the Prometheus/Grafana operator, which integrates with the concepts of Kubernetes and provides a lot of different exporter services to monitor a cluster in various ways. There is also a large market of monitoring solutions running in the cloud. The cloud solutions advertise that no complex installation is required, but personally I wonder if it is a good idea to send application and hardware metrics to a third-party service.

Continue reading “Monitoring Web Servers Should Never be Complex”

Running CockroachDB on Kubernetes

In my last blog post I explained how to run CockroachDB in a local dev environment with the help of docker-compose. Now I want to show how to set up a CockroachDB cluster in Kubernetes.

CockroachDB is a distributed SQL database with a built-in replication mechanism. This means that the data is replicated over several nodes in a database cluster, which increases scalability and resilience in case a single node fails. With its automated-repair feature the database also detects data inconsistencies and automatically fixes faulty data on disk. The project is open source and hosted on GitHub.

Supporting the PostgreSQL wire protocol, CockroachDB can be used out of the box for Java enterprise applications and microservices using the standard PostgreSQL JDBC driver.
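For illustration only (the service name, database name and SSL mode are assumptions based on a typical CockroachDB Kubernetes setup), the JDBC connection URL looks like a regular PostgreSQL URL, just pointing at the CockroachDB SQL port 26257:

jdbc:postgresql://cockroachdb-public:26257/mydb?sslmode=disable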

Note: CockroachDB does not support the transaction isolation level required for complex business logic. For that reason the Imixs-Workflow project does NOT recommend the use of CockroachDB. See also the discussion here.

Continue reading “Running CockroachDB on Kubernetes”

Monitoring Your Kubernetes Cluster the Right Way

Monitoring a Kubernetes cluster does not seem so difficult when you look at the hundreds of blogs and tutorials. But there is a problem: the dynamic and rapid development of Kubernetes. You will find many blog posts describing a setup that may no longer work properly in your environment. This is not because the author provided a bad tutorial, but simply because the article may be older than a year. Many things have changed in Kubernetes, and the area of metrics and monitoring is affected particularly often.

For example, you will find many articles describing how to set up the cAdvisor service to get container metrics. But this technology has become part of the kubelet in the meantime, so an additional installation should no longer be necessary and can even lead to incorrect metrics in the worst case. The many Grafana dashboards to display metrics have also evolved. Older dashboards are usually no longer suitable for a new Kubernetes environment.

Therefore, in this tutorial I would like to show how to set up monitoring correctly for the current Kubernetes version 1.19.3. And of course this blog post will also be outdated after some time. So be warned 😉

Continue reading “Monitoring Your Kubernetes Cluster the Right Way”

How-to Optimize Memory Consumption for Java Containers Running in Kubernetes

When I started migrating my application servers (Wildfly 20.0.1) into a self-managed Kubernetes cluster, I noticed unexpected memory behaviour. My Wildfly containers were consuming more memory than I expected. In this blog post I will explain why this may happen and how you can control and optimize your memory settings. I am using the official Wildfly 20.0.1 image, which is based on OpenJDK 11, but the rules explained here can of course also be adapted to any other Java application server.

Notice: Since Java 10 the memory management within a container has changed dramatically. Before Java 10, a JVM running in Docker looked at the memory of the host, which typically provides much more memory than is allocated to the single Docker container. Here we look only at Java version 10 and above! Read this blog post to learn more about the background.
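To make this concrete, here is a minimal sketch (not the complete configuration from the article; how JAVA_OPTS is picked up depends on the image): with Java 10+ the JVM reads the container memory limit, so a Kubernetes memory limit is typically combined with a relative heap setting such as -XX:MaxRAMPercentage.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildfly
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wildfly
  template:
    metadata:
      labels:
        app: wildfly
    spec:
      containers:
        - name: wildfly
          image: jboss/wildfly:20.0.1.Final
          # the container limit is what the JVM sees as its available memory
          resources:
            limits:
              memory: "1Gi"
          # illustrative only: allow the JVM to use at most 75% of the container limit as heap
          env:
            - name: JAVA_OPTS
              value: "-XX:MaxRAMPercentage=75.0"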

Continue reading “How-to Optimize Memory Consumption for Java Containers Running in Kubernetes”

Grafana – How to Build a Datatable From Different Queries

In this tutorial I will show how you can combine different data queries in one datatable. The scenario in which I came up against this requirement was a Kubernetes dashboard where I wanted to combine the CPU and memory usage of each node with the OS version and the Docker version. These metrics come from different sources: the CPU and memory from the corresponding node_cpu_ and node_memory_ metrics provided by the Node Exporter, and the OS version, for example, from the cadvisor_version_info metric. It is a little bit tricky to come to the following output:

Continue reading “Grafana – How to Build a Datatable From Different Queries”

Kustomize your Kubernetes Deployments

When you start working with Kubernetes, you may get to a point where you are shocked at how complex your YAML files have become. For a complex application consisting of different containers, your YAML files become very long, and it gets harder to change a single piece of configuration, like the name of your application, without breaking things. This is also known as YAML hell.

A lot has already been written about how to work around this. Bash programmers write their own scripts, and you may have already heard of the tool Helm Charts. I myself am not a very good Bash programmer, and I am also not a friend of Helm Charts, because they only make the topic worse. The good news is that there is already an official solution called Kustomize. This declarative approach was originally a separate project and has been part of Kubernetes since version 1.14. So there is no longer any reason to deal with endlessly long YAML files or Helm Charts if you just want to customize some details of your Kubernetes deployments. And you do not need to install any additional tools for this!
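As a first impression, a minimal kustomization.yaml simply lists the existing resource files and the common settings you want to apply (the file names and labels here are only examples):

# kustomization.yaml - a minimal example
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app: my-app

It is applied with kubectl apply -k . instead of kubectl apply -f, which is built into kubectl since version 1.14.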

Note: Because of the very rapid development of the open source project Kubernetes, even good tutorials can quickly become obsolete. So be very careful with deployment tutorials written before May 2019!

In the following section I will give a brief and simple introduction to how to use Kustomize. You can find more details on the Kubernetes page. A good introduction to Kustomize can also be found here.

Continue reading “Kustomize your Kubernetes Deployments”

Kubernetes – How to map Config Files

If you are familiar with Docker, then you may know that it is common practice for Docker containers to map local config files. For example, in a docker-compose.yaml file you can use the following kind of mapping:

services:
  concourse:
    image: concourse/concourse
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]

In this example I map the local directory ./keys/web into the directory /concourse-keys in my container. In this way my container can read config files or other kinds of file data.

Kubernetes – ConfigMap

Kubernetes also has such a concept. And, as expected for Kubernetes, it is much more powerful than in plain Docker. But who would expect the mapping of config files to be hidden behind a concept called ConfigMap?

A ConfigMap in Kubernetes is a very flexible object that can be used to provide a container with any kind of file data. Typically you store variables as key/value pairs in a ConfigMap, and you can provide these key/value pairs to a Kubernetes pod, for example, as environment variables. But not only property files can be set up with a ConfigMap; public/private keys or even binary data are possible as well. One way to use a ConfigMap is to publish entire directories into a pod, which I will explain in the following example.
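As a first impression (a minimal sketch with illustrative names, not the example from the full article), a ConfigMap mounted as a directory could look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  application.properties: |
    greeting=hello
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "cat /etc/config/application.properties && sleep 3600"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: example-config

Every key of the data section shows up as a file below /etc/config inside the container.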

Continue reading “Kubernetes – How to map Config Files”