Ceph Pacific running on Debian 11 (Bullseye)

In this tutorial I will explain how to set up a Ceph cluster on Debian 11. The Linux distribution is less relevant than it sounds, but for the latest Ceph release, Pacific, I am also using the latest Debian release, Bullseye.

In contrast to my last tutorial on how to set up Ceph, I will focus a little more on the network. Understanding and configuring the Ceph network options will ensure optimal performance and reliability of the overall storage cluster. See also the latest configuration guide from Red Hat.


Kubernetes, Ceph and Static Volumes

Ceph is an open source distributed storage system which integrates with Kubernetes in a perfect way. With the Ceph CSI plugin you can connect a Ceph cluster to your Kubernetes cluster in a well designed way. In one of my last posts I gave a short tutorial on how to set up a Ceph cluster on Debian. Also take a look at the Imixs-Cloud project.

Static Persistent Volumes

When we talk about Kubernetes and Persistent Volumes, you will often find examples working with a so-called storage class and Dynamic Persistent Volumes. In this concept a persistent volume is provisioned automatically by the Kubernetes CSI adapter and you do not need to think much about how this works. But this kind of persistent volume is not durable, which means that if you delete your POD together with its persistent volume claim, the persistent volume will also be removed and all the data your container wrote so far will be lost. To avoid this, you need a so-called Static Persistent Volume. Such a persistent volume is marked with the flag ‘Retain’:

persistentVolumeReclaimPolicy: Retain

This means the volume will not be deleted when the POD is removed or updated.

To set up a static persistent volume in Ceph, two steps are necessary. First you need to create the RBD image on your Ceph cluster. This can be done from the Ceph web admin interface or from the command line tool:

# rbd create test-image --size=1024 --pool=kubernetes --image-feature layering
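
You can verify that the image was created with the expected features using the rbd info command (assuming the pool and image name from above):

# rbd info test-image --pool=kubernetes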

Next you can define the corresponding Kubernetes PersistentVolume object referring to this RBD image:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-static-pv
spec:
  volumeMode: Filesystem
  storageClassName: ceph
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: csi-rbd-secret
      namespace: ceph-system
    volumeAttributes:
      clusterID: "<clusterID>"
      pool: "kubernetes"
      staticVolume: "true"
      # The imageFeatures must match the created ceph image exactly!
      imageFeatures: "layering"
    volumeHandle: test-image 

Replace <clusterID> with the ID of your Ceph cluster. Note: a storage class is also needed here to identify the Ceph nodes. Find more details here.
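
To mount the volume into a POD you also need a corresponding PersistentVolumeClaim. The following is a minimal sketch that binds directly to the volume defined above via volumeName; the claim name rbd-static-pvc is just an example:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-static-pvc
spec:
  # must match the storageClassName of the static PV
  storageClassName: ceph
  # bind directly to the static persistent volume defined above
  volumeName: rbd-static-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi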

Resizing Static Persistent Volumes

So far everything works fine using Ceph for static persistent volumes. But it becomes a little tricky if you need to resize an image. Imagine you are running a database and the storage you need exceeds the size you planned in the beginning.

In this case you first need to resize the Ceph image. This can be done easily from the Ceph web admin interface or from the command line tool:

# rbd resize --image test-image --pool=kubernetes --size 2048

But the problem is that after you delete and redeploy your POD in Kubernetes, it will still see the old disk size. This happens because the Ceph CSI plugin does not support automatic resizing of static volumes.

If you are using the fsType ext4 (as in my example), you can run the resize2fs command from within your POD to make the new size available to your container:

# resize2fs /dev/rbd[number]

You need to replace [number] with the number of the RBD device mounted within your POD. You can find the rbd number with the command df -h.
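
For example, the output inside the POD may look something like this (the device number and mount path are just illustrations):

$ df -h | grep rbd
/dev/rbd5       976M  2.6M  958M   1% /var/lib/data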

Note: The command will only work if the resize2fs tool is installed in your container (which is, for example, the case for the official PostgreSQL image). It is also important that your POD runs with the securityContext privileged=true :

....         
          volumeMounts:
            - name: volume-to-resize
              mountPath: /var/lib/data
          securityContext:
            privileged: true

Using a Kubernetes Job

As an alternative to executing the resize2fs command manually you can also start a simple Kubernetes job to resize your RBD images automatically.

---
###################################################
# This job can be used to resize an ext4 filesystem
# aligned to the given size of the underlying RBD image.
###################################################
apiVersion: batch/v1
kind: Job
metadata:
  name: ext4-resize2fs
spec:
  template:
    spec:
      containers:
        - name: debian
          image: debian

          command: ["/bin/sh"]
          args:
            - -c
            - >-
                echo '******** start resizing block device ********' &&
                echo ...find rbd mounts to be resized.... &&
                df | grep /rbd &&
                DEVICE=`df | grep /rbd | awk '{print $1}'` &&
                echo ...resizing device $DEVICE ... &&
                resize2fs $DEVICE &&
                echo '******** resize block device completed ********'

          volumeMounts:
            - name: volume-to-resize
              mountPath: /tmp/mount2resize
          securityContext:
            privileged: true
      volumes:
        - name: volume-to-resize
          persistentVolumeClaim:
            claimName: test-pg-dbdata
      restartPolicy: Never
  backoffLimit: 1

Make sure that the PV and PVC objects exist before you run the job, and replace the claimName with the name of the PVC to be resized.

$ kubectl apply -f resize2fs.yaml
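
You can check in the job output whether the resize succeeded, and remove the job afterwards:

$ kubectl logs job/ext4-resize2fs
$ kubectl delete job ext4-resize2fs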

If you have any comments please post them here.

Running Gitea on a Virtual Cloud Server

Gitea is an open source, self-hosted Git service with a powerful web UI. In the following short tutorial I will explain (and remind myself) how to set up Gitea on a single virtual cloud server running Docker. In one of my last posts I showed how to run Gitea in a Kubernetes cluster. But as I am also using Git to store my Kubernetes environment configuration, it is better to run the Git repository outside of the cluster.


NFS and Iptables

These days I installed an NFS server to back up my Kubernetes cluster. Even though I protected the NFS server via the exports file to allow only cluster members to access it, there was still a security risk: NFS comes together with the Remote Procedure Call daemon (RPC), and this daemon enables attackers to figure out information about your network. So it is a good idea to protect the RPC service, which is running on port 111, from abuse.

To test if your server has an open RPC port you can run telnet from a remote node:

$ telnet myserver.foo.com 111
Trying xxx.xxx.xxx.xxx...
Connected to myserver.foo.com.

If the connection succeeds, RPC is visible from the internet. You can also check the RPC ports locally on your server with:

$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

Iptables

If you run Kubernetes or Docker on a server, you usually have iptables already installed. You can test this by listing the existing firewall rules with the option -L:

$ iptables -L 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:9042
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:afs3-callback

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere

This is a typical example you will see on a server with the Docker daemon installed. Besides the three default chains ‘INPUT’, ‘FORWARD’ and ‘OUTPUT’ there are also some custom Docker chains describing the rules.

So the goal is to add new rules to protect the RPC daemon from abuse.

Backup your Original Ruleset

Before you start adding new rules, make a backup of your original ruleset:

$ iptables-save > iptables-backup

This file can help you if something goes wrong later…
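
If something does go wrong, you can restore the saved ruleset at any time:

$ iptables-restore < iptables-backup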

Adding an RPC Rule

If you want to use RPC in the internal network but prohibit it from the outside, you can implement the following iptables rules. In this example I explicitly name the cluster nodes which should be allowed to use RPC port 111. All other requests to the RPC port will be dropped.

Replace [SERVER-NODE-IP] with the IP address of your cluster node:

$ iptables -A INPUT -s [SERVER-NODE-IP] -p tcp --dport 111 -j ACCEPT
$ iptables -A INPUT -s [SERVER-NODE-IP] -p udp --dport 111 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 111 -j DROP
$ iptables -A INPUT -p udp --dport 111 -j DROP

These rules explicitly allow [SERVER-NODE-IP] to access the service; all other clients will be dropped. You can easily add additional nodes before the DROP rules.

You can verify that the new ruleset was added to your existing rules with:

$ iptables -L

You may write a small bash script with all the iptables commands, as shown below. This makes it more convenient to test your new ruleset.
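
A minimal sketch of such a script could look like this (the node IPs are placeholders you need to replace with your own):

#!/bin/bash
# protect-rpc.sh - allow RPC (port 111) only for the listed cluster nodes
NODES="10.0.0.2 10.0.0.3"

for NODE in $NODES; do
  iptables -A INPUT -s $NODE -p tcp --dport 111 -j ACCEPT
  iptables -A INPUT -s $NODE -p udp --dport 111 -j ACCEPT
done

# drop all other requests to the RPC port
iptables -A INPUT -p tcp --dport 111 -j DROP
iptables -A INPUT -p udp --dport 111 -j DROP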

Saving the Ruleset

Inserting new rules into the firewall carries some risk of its own. If you do something wrong you can lock yourself out of your server, for example by blocking the SSH port 22.

The good thing is that rules created with the iptables command are only stored in memory. If the system is restarted before the ruleset is saved, all rules are lost. So in the worst case you can reboot your server to reset your new rules.

Once you have tested your rules, you can persist the new ruleset:

$ iptables-save > iptables-newruleset

After a reboot your new rules would still be ignored. To tell Debian to use the new ruleset you have to store it in the file /etc/iptables/rules.v4.
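
On Debian this file is read during boot by the iptables-persistent package. If it is not yet installed, you can add it with:

$ sudo apt-get install iptables-persistent

Then store your ruleset: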

$ iptables-save > /etc/iptables/rules.v4

Finally you can restart your server. The new RPC rules will be applied during boot.

How to Set the Timezone and Locale for a Docker Image

When I deployed my Java applications on the latest Wildfly Docker image, I noticed missing language support for German umlauts and for the timezone CET. So the question was: how can I change these general settings in a Docker container?

Verify Timezone and Language

To verify the timezone and language supported by your running Docker container, execute the date and locale commands:

$ docker exec <CONTAINER-ID> date
Fri Oct 23 15:21:54 UTC 2020

$ docker exec <CONTAINER-ID> locale
LANG=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

Replace <CONTAINER-ID> with the ID of your running container.

In this example we can see that the container only supports the most basic setup. But there are ways to change these settings.

Changing Timezone and Locale by Environment Variables

In most cases you can adjust language and timezone with the standard Linux environment variables TZ, LANG, LANGUAGE and LC_ALL. See the following example:

docker run -e TZ="CET" \
   -e LANG="de_DE.UTF-8" \
   -e LANGUAGE="de_DE:de" \
   -e LC_ALL="de_DE.UTF-8" \
   -it jboss/wildfly

In this example I run the official Wildfly container with the timezone CET and the locale de_DE. (Note that LC_ALL overrides all other locale settings, so it should match the locale you want.) You can verify the settings again with the date and locale commands.
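
For example, the date command should now report local time instead of UTC; the exact output is just an illustration (in late October it will show CEST):

$ docker exec <CONTAINER-ID> date
Fri Oct 23 17:21:54 CEST 2020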

In most cases it will be sufficient to just set the timezone and the en_US UTF-8 support:

docker run -e TZ="CET" \
   -e LANG="en_US.UTF-8" \
   -it jboss/wildfly

Changing Timezone and Language by Dockerfile

Another way is to change the Docker image at build time:

FROM jboss/wildfly:20.0.1.Final

# ### Locale support de_DE and timezone CET ###
USER root
RUN localedef -i de_DE -f UTF-8 de_DE.UTF-8
RUN echo "LANG=\"de_DE.UTF-8\"" > /etc/locale.conf
RUN ln -s -f /usr/share/zoneinfo/CET /etc/localtime
USER jboss
ENV LANG de_DE.UTF-8
ENV LANGUAGE de_DE.UTF-8
ENV LC_ALL de_DE.UTF-8
### Locale Support END ###

CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]

In this example I run localedef and change the default language in /etc/locale.conf.

This will build a completely new image with the default locale ‘de_DE’ and the timezone CET.
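
You can build and run the new image as usual (the tag name wildfly-de is just an example):

$ docker build -t wildfly-de .
$ docker run -it wildfly-de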

Debian Upgrade from Stretch to Buster

Today I updated the Debian system on my working machine from Stretch to Buster. This is just a short reminder for myself. I followed the blog from nixcraft here.

1. Check version

First check your current version and note it, in case you need it for fixing problems later:

$ lsb_release -a
 No LSB modules are available.
 Distributor ID: Debian
 Description:    Debian GNU/Linux 9.12 (stretch)
 Release:        9.12
 Codename:       stretch
$ uname -mrs
 Linux 4.9.0-12-amd64 x86_64

2. Update installed packages

First bring your Stretch system up to date by updating all installed packages:

$ sudo apt update
$ sudo apt upgrade
$ sudo apt full-upgrade
$ sudo apt --purge autoremove

3. Update /etc/apt/sources.list file

Now you can update your Debian package source list from stretch to buster. First check your current entries:

$ cat /etc/apt/sources.list
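
On a typical Stretch system the output looks something like this (your mirror entries may differ):

deb http://deb.debian.org/debian stretch main
deb http://deb.debian.org/debian stretch-updates main
deb http://security.debian.org/debian-security stretch/updates main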

Now you need to replace the stretch references with buster. This can be done quickly with the sed command:

$ sudo cp -v /etc/apt/sources.list /root/
$ sudo cp -rv /etc/apt/sources.list.d/ /root/
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/*

You can verify your new list:

$ cat /etc/apt/sources.list 

Finally update your new package list:

$ sudo apt update

4. Minimal system upgrade

In the next step you perform a minimal system upgrade to avoid removing too many packages:

$ sudo apt upgrade

Answer the question about restarting services with ‘yes’.

5. Upgrading Debian 9 to Debian 10

Now you can run the real upgrade to Debian 10:

$ sudo apt full-upgrade

…this will take some time…

6. Reboot and Verification

Finally reboot your system…

$ sudo reboot

…and check the new version

$ uname -r
$ lsb_release -a

7. Clean up

Run the autoremove command to get rid of old packages:

$ sudo apt --purge autoremove

That’s it! Have fun fine-tuning your new Debian Buster!

An Alternative for the Postman App….

Until today I worked with the REST API testing tool Postman. But lately this tool was making me more and more suspicious. For example, you are now asked to sign in with a Google account. So it was time to look out for an alternative. And finally I found Insomnia. It is open source and hosted on GitHub.

https://insomnia.rest/

To install the tool on Linux/Debian run:

# Add to sources
echo "deb https://dl.bintray.com/getinsomnia/Insomnia /" \
| sudo tee -a /etc/apt/sources.list.d/insomnia.list

# Add public key used to verify code signature
wget --quiet -O - https://insomnia.rest/keys/debian-public.key.asc \
| sudo apt-key add -

# Refresh repository sources and install Insomnia
sudo apt-get update
sudo apt-get install insomnia

Maven on Debian failed with JDK 1.9 – ClassFormatError

Today I ran into a problem with Maven under Debian GNU/Linux 9 (Stretch). My Debian has OpenJDK 8 as well as OpenJDK 9 installed, and OpenJDK 9 is my default Java version. My Maven version is 3.3.9.

mvn -version
Apache Maven 3.3.9
Maven home: /usr/share/maven
Java version: 9-Debian, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-9-openjdk-amd64
Default locale: de_DE, platform encoding: UTF-8
OS name: "linux", version: "4.9.0-6-amd64", arch: "amd64", family: "unix"

The problem is that building one of my projects results in the following ClassFormatError:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project imixs-workflow-core: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test failed: There was an error in the forked process
[ERROR] java.lang.ClassFormatError: Absent Code attribute in method that is not native or abstract in class file javax/xml/bind/JAXBException
[ERROR] at java.base/java.lang.ClassLoader.defineClass1(Native Method)
[ERROR] at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1007)
[ERROR] at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
[ERROR] at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:801)
[ERROR] at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:699)
[ERROR] at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:622)
[ERROR] at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:580)
[ERROR] at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
[ERROR] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
[ERROR] at java.base/java.lang.Class.getDeclaredMethods0(Native Method)
[ERROR] at java.base/java.lang.Class.privateGetDeclaredMethods(Class.java:3139)
[ERROR] at java.base/java.lang.Class.getMethodsRecursive(Class.java:3280)
[ERROR] at java.base/java.lang.Class.getMethod0(Class.java:3266)
[ERROR] at java.base/java.lang.Class.getMethod(Class.java:2063)
[ERROR] at org.apache.maven.surefire.util.ReflectionUtils.tryGetMethod(ReflectionUtils.java:57)
[ERROR] at org.apache.maven.surefire.common.junit3.JUnit3TestChecker.isSuiteOnly(JUnit3TestChecker.java:64)
[ERROR] at org.apache.maven.surefire.common.junit3.JUnit3TestChecker.isValidJUnit3Test(JUnit3TestChecker.java:59)
[ERROR] at org.apache.maven.surefire.common.junit3.JUnit3TestChecker.accept(JUnit3TestChecker.java:54)
[ERROR] at org.apache.maven.surefire.common.junit4.JUnit4TestChecker.accept(JUnit4TestChecker.java:52)
[ERROR] at org.apache.maven.surefire.util.DefaultScanResult.applyFilter(DefaultScanResult.java:97)
[ERROR] at org.apache.maven.surefire.junit4.JUnit4Provider.scanClassPath(JUnit4Provider.java:206)
[ERROR] at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:103)
[ERROR] at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
[ERROR] at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
[ERROR] at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
[ERROR] -> [Help 1]

The problem is an incompatibility of Maven 3.3.9 with JDK 9. To solve this problem I switched back to OpenJDK 8 by setting the JAVA_HOME variable explicitly to JDK 8:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/

After this change mvn install works fine again in my projects.

To make the JAVA_HOME setting permanent, follow these steps.

1. Find your preferred Java version:

ls -la /usr/lib/jvm/

2. Edit the file /etc/profile and add the following line with the path to the JDK 8 installation you found before:

export JAVA_HOME="path to jdk8 that you found"
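
To apply the change to your current shell session without rebooting, you can source the profile:

$ source /etc/profile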

3. Verify your Maven installation:

mvn -version

Your Maven should now be set to JDK 8.

I hope this post will become obsolete in the near future.

Debian – Using a Bluetooth Speaker Box with Microphone

Today I tried to connect an “August MS425” speaker box to my Linux notebook running Debian Jessie. To get the Bluetooth connection working I installed the following additional packages:

sudo apt-get install pulseaudio-module-bluetooth ofono pavucontrol

After I established the Bluetooth connection, the speaker box was neither working nor shown in the list of audio devices. To solve this issue I found this discussion.

The missing part was the auto-connect A2DP option for the new device. So I had to edit the file “/etc/pulse/default.pa” and add the following line:

load-module module-switch-on-connect

After a reboot it worked fine. You can use pavucontrol to adjust your audio settings.
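
If you want to avoid a full reboot, restarting the PulseAudio daemon should also load the new module (I have not verified this, but it is the usual way):

pulseaudio -k
pulseaudio --start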