Running Gitea on a Virtual Cloud Server

Gitea is an open source, self-hosted Git service with a powerful web UI. In the following short tutorial I will explain (and remind myself) how to set up Gitea on a single virtual cloud server. In one of my last posts I showed how to run Gitea in a Kubernetes cluster. As I also use Git to store my Kubernetes environment configuration, it is better to run the Git repository outside of the cluster. In the following tutorial I will show how to set up a single cloud node running Gitea on Docker.
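
To give a first idea of where the tutorial is heading, here is a minimal sketch of starting Gitea with plain Docker; the port mappings and the data directory /var/lib/gitea are just example values, the full post covers the details:

$ docker run -d --name gitea \
    -p 3000:3000 -p 222:22 \
    -v /var/lib/gitea:/data \
    gitea/gitea:latest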

Continue reading “Running Gitea on a Virtual Cloud Server”

NFS and Iptables

These days I installed an NFS server to back up my Kubernetes cluster. Even though I protected the NFS server via the exports file to allow only cluster members to access it, there was still a new security risk: NFS comes together with the Remote Procedure Call daemon (RPC), and this daemon enables attackers to figure out information about your network. So it is a good idea to protect RPC, which is running on port 111, from abuse.

To test whether your server has an open RPC port, you can run telnet from a remote node:

$ telnet myserver.foo.com 111
Trying xxx.xxx.xxx.xxx...
Connected to myserver.foo.com.

This indicates that RPC is visible from the internet. You can also check the RPC ports directly on your server with:

$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

Iptables

If you run Kubernetes or Docker on a server, iptables is usually already installed. You can test this by listing the existing firewall rules with the option -L:

$ iptables -L 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:9042
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:afs3-callback

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere

This is a typical example you will see on a server with the Docker daemon installed. Besides the three default chains ‘INPUT’, ‘FORWARD’ and ‘OUTPUT’ there are also some custom Docker chains describing the rules.

So the goal is to add new rules to the INPUT chain that protect the RPC daemon from abuse.

Backup Your Original iptables

Before you start adding new rules, make a backup of your original rule set:

$ iptables-save > iptables-backup

This file can help you if something goes wrong later…

Adding an RPC Rule

If you want to use RPC in the internal network but prohibit it from the outside, you can implement the following iptables rules. In this example I explicitly name the cluster nodes which should be allowed to use RPC port 111. All other requests to the RPC port will be dropped.

Replace [SERVER-NODE-IP] with the IP address of your cluster node:

$ iptables -A INPUT -s [SERVER-NODE-IP] -p tcp --dport 111 -j ACCEPT
$ iptables -A INPUT -s [SERVER-NODE-IP] -p udp --dport 111 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 111 -j DROP
$ iptables -A INPUT -p udp --dport 111 -j DROP

These rules explicitly allow [SERVER-NODE-IP] to access the service; all other clients will be dropped. You can easily add additional nodes before the DROP rules.

You can verify if the new ruleset was added to your existing rules with:

$ iptables -L

You may write a small bash script with all the iptables commands, as sketched below. This makes it more convenient to test your new ruleset.
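
A minimal sketch of such a script, assuming two trusted cluster nodes (replace the example IPs in NODES with your own cluster members):

#!/bin/bash
# Sketch: allow RPC (port 111) only for trusted cluster nodes.
# Replace these example IPs with your own node addresses.
NODES="10.0.0.2 10.0.0.3"

for NODE in $NODES; do
    iptables -A INPUT -s "$NODE" -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -s "$NODE" -p udp --dport 111 -j ACCEPT
done

# Drop RPC requests from everyone else
iptables -A INPUT -p tcp --dport 111 -j DROP
iptables -A INPUT -p udp --dport 111 -j DROP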

Saving the Ruleset

Inserting new rules into the firewall carries some risk of its own. If you do something wrong you can lock yourself out of your server, for example by blocking the SSH port 22.

The good thing is that rules created with the iptables command are only stored in memory. If the system is restarted before the rule set is saved, all rules are lost. So in the worst case you can reboot your server to reset your new rules.

Once you have tested your rules you can persist the new ruleset:

$ iptables-save > iptables-newruleset

After a reboot this file alone will still be ignored. To tell Debian to load the new ruleset at boot, you have to store it in /etc/iptables/rules.v4:

$ iptables-save > /etc/iptables/rules.v4
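
On Debian this file is typically read during boot by the iptables-persistent package. If the package is not installed yet, you can add it with:

$ apt-get install iptables-persistent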

Finally you can restart your server. The new RPC rules will be applied during boot.

Migrating to Jakarta EE 9

In this blog post I will document the way we at Imixs-Workflow migrated from Java EE to Jakarta EE 9. The Java Enterprise Stack has always been known for providing a very reliable and stable platform for developers. We at Imixs started with Java EE in its early beginnings in the year 2003. At that time Java EE was not comparable to the platform we know today. For me the most impressive part of the journey with Java EE over the last 17 years was the fact that you could always trust the platform. Even when new concepts and features were introduced, your existing code worked. For a human-centric workflow engine like our open source project Imixs-Workflow, this is an important aspect. A workflow engine has to be sustainable. A long running business process may take years from its creation to its final state; an insurance process is one example of this kind of business process. In customer projects I personally started running Imixs-Workflow on Glassfish, switched to JBoss, migrated to Payara and run today on Wildfly. Upgrading the Java EE version and switching the server platform was never something special about which you had to write a lot. But with Jakarta EE 9 the situation changed dramatically.

Continue reading “Migrating to Jakarta EE 9”

Monitoring Your Kubernetes Cluster the Right Way

Monitoring a Kubernetes cluster does not seem so difficult when you look at the hundreds of blogs and tutorials. But there is a problem: the dynamic and rapid development of Kubernetes. You will find many blog posts describing a setup that may no longer work properly for your environment. This is not because the author has provided a bad tutorial, but simply because the article may be older than one year. Many things have changed in Kubernetes, and it is the area of metrics and monitoring that is affected most often.

For example, you will find many articles describing how to set up the cadvisor service to get container metrics. But this technology has become part of the kubelet in the meantime, so an additional installation should no longer be necessary and can even lead to incorrect metrics in the worst case. The many Grafana boards to display metrics have also evolved. Older boards are usually no longer suitable for a new Kubernetes environment.

Therefore in this tutorial I would like to show how to set up monitoring correctly for the current Kubernetes version 1.19.3. And of course this blog post will also be outdated after some time. So be warned 😉

Continue reading “Monitoring Your Kubernetes Cluster the Right Way”

Java Docker Container ignores Memory Limits in Kubernetes

After I deployed several Java Docker containers on my self-managed Kubernetes cluster, I recognized that the containers consume much more memory than defined in the Kubernetes resource limits.

        ....
        resources:
          requests:
            memory: "512Mi"
          limits:
            memory: "1Gi"
        ....

The containers run OpenJDK 11, so by default the JVM should respect the container memory limits and not overrun them. Running the same container with plain Docker on the same worker node, the memory limits were respected:

$ docker run -it --rm --name java-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final

$ docker stats
CONTAINER ID        NAME          CPU %     MEM USAGE / LIMIT     MEM %     NET I/O       BLOCK I/O    PIDS
515e549bc01f        java-test     0.14%     219MiB / 300MiB       73.00%    906B / 0B     0B / 0B      43

But when starting the same container with kubectl, the memory limits were ignored:

$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'" 

$ kubectl top pod java-wildfly-test
NAME                CPU(cores)   MEMORY(bytes)   
java-wildfly-test   1089m        441Mi 

After several days of research I finally found the root of this strange behaviour: in my environment kubelet and the Docker daemon used different cgroup drivers!

How to Verify cgroupDriver

To verify if kubelet and docker are using the same cgroupDriver you can use the following commands:

$ sudo cat /var/lib/kubelet/config.yaml | grep cgroupDriver
cgroupDriver: systemd

$ sudo docker info | grep -i cgroup
Cgroup Driver: systemd

In this example both use systemd, which is typical for Kubernetes since version 1.19.3.

But if, for example, the kubelet shows no cgroupDriver entry, you need to fix this.

How to Set cgroupDriver

To fix the cgroupDriver entry for kubelet, just edit the file

/var/lib/kubelet/config.yaml

and search for the entry

cgroupDriver: systemd

If it is not set, just add the entry to the config file, as sketched below.
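
A one-liner that appends the entry only if it is missing could look like this (a sketch; make a backup of the file first):

$ sudo sh -c 'grep -q "^cgroupDriver:" /var/lib/kubelet/config.yaml || \
    echo "cgroupDriver: systemd" >> /var/lib/kubelet/config.yaml'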

Finally you need to restart the kubelet:

$ systemctl daemon-reload
$ systemctl restart kubelet

The Metrics Server

To get correct metrics displayed with kubectl top, you need to install the open source project metrics-server. This service provides a scalable, efficient source of container resource metrics like CPU, memory, disk and network. These are also referred to as the “Core” metrics. The Kubernetes Metrics Server collects and aggregates these core metrics in your cluster and is used by other Kubernetes add-ons, such as the Horizontal Pod Autoscaler or the Kubernetes Dashboard.
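
The metrics-server is typically installed from the components.yaml manifest published with its GitHub releases; at the time of writing the latest version can be applied with:

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml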

How to Set Timezone and Locale for Docker Image

When I deployed my Java applications on the latest Wildfly Docker image I noticed missing language support for German umlauts and for the timezone CET. So the question was: how can I change these general settings in a Docker container?

Verify Timezone and Language

You can verify the timezone and language settings of your running Docker container by executing the date and locale commands:

$ docker exec <CONTAINER-ID> date
Fri Oct 23 15:21:54 UTC 2020

$ docker exec <CONTAINER-ID> locale
LANG=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

Replace <CONTAINER-ID> with the Docker ID of your running container.

In this example we can see that the container supports only the most basic setup. But there are ways to change these settings.

Changing Timezone and Locale by Environment Variables

In most cases you can adjust language and timezone with the standard Linux environment variables TZ, LANG, LANGUAGE and LC_ALL. See the following example:

docker run -e TZ="CET" \
   -e LANG="de_DE.UTF-8" \
   -e LANGUAGE="de_DE:de" \
   -e LC_ALL="en_US.UTF-8" \
   -it jboss/wildfly

In this example I run the official Wildfly container with the timezone CET and the locale de_DE. You can verify the settings again with the date and locale commands.

In most cases it will be sufficient to just set the timezone and en_US UTF-8 support:

docker run -e TZ="CET" \
   -e LANG="en_US.UTF-8" \
   -it jboss/wildfly

Changing Timezone and Language by Dockerfile

Another way is to change the Docker image at build time:

FROM jboss/wildfly:20.0.1.Final

# ### Locale support de_DE and timezone CET ###
USER root
RUN localedef -i de_DE -f UTF-8 de_DE.UTF-8
RUN echo "LANG=\"de_DE.UTF-8\"" > /etc/locale.conf
RUN ln -s -f /usr/share/zoneinfo/CET /etc/localtime
USER jboss
ENV LANG de_DE.UTF-8
ENV LANGUAGE de_DE.UTF-8
ENV LC_ALL de_DE.UTF-8
### Locale Support END ###

CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]

In this example I run localedef and change the language in /etc/locale.conf.

This builds a completely new image with the standard locale ‘de_DE’ and the timezone CET.
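
Building and testing the modified image could then look like this (the tag wildfly-de is just an example name):

$ docker build -t wildfly-de .
$ docker run -it --rm wildfly-de date

If the locale was generated correctly, the date command now reports the time in CET with German day and month names.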

How-to Optimize Memory Consumption for Java Containers Running in Kubernetes

When I started migrating my application servers (Wildfly 20.0.1) into a self-managed Kubernetes cluster, I noticed unexpected memory behaviour. My Wildfly containers were consuming more memory than I expected. In this blog I will explain why this may happen and how you can control and optimize your memory settings. I am using the official Wildfly 20.0.1 image, which is based on OpenJDK 11. But the rules explained here can of course also be adapted for any other Java application server.

Notice: since Java 10 the memory management of a container changed dramatically. Before Java 10, a JVM running in Docker looked at the memory settings of the host, which typically provided much more memory than defined for the single Docker container. Here we look only at Java version 10 and above! Read this blog to learn more about the background.
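
A quick way to see which maximum heap the container-aware JVM actually chooses is to print the final JVM flags inside a memory-constrained container; a sketch, assuming the public openjdk:11 image:

$ docker run --rm -m 512M openjdk:11 \
    java -XX:MaxRAMPercentage=75.0 -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize

With a 512M container limit and MaxRAMPercentage=75.0, the reported MaxHeapSize should be roughly 384M.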

Continue reading “How-to Optimize Memory Consumption for Java Containers Running in Kubernetes”

Quantum Theory and Microservices

I just read an interesting book about quantum theory by Hans-Peter Dürr. In this book he criticizes classical physics for its constant attempt to find the smallest component of physics – the atom – in the hope of answering the last question. But it is quantum theory that shows that this smallest building block does not exist at all, that everything is connected to everything, and that there is ultimately only the ONE. I myself find this theory very difficult to understand, but it reminded me of something we can also observe in modern software architecture – the microservice architecture.

The idea of the microservice architecture is to split complex systems into smaller building blocks – the services. This usually works very well in the beginning, up to the point where the individual services have to be connected to each other to meet certain requirements. At this point, the concepts of choreography and orchestration come into play. These concepts are well documented within the microservice architecture by the SAGA Pattern. I have published some blogs and articles on this topic myself. So I don’t think this architecture is a bad idea.

But it is interesting to note that this approach is very similar to the model of classical physics criticized by Hans-Peter Dürr. We build various tiny services and feel very superior in a project, as we can isolate and release a single function in the shortest possible time. But then comes the moment when we have to implement interactions. Our service must cooperate with all the other tiny services. And suddenly things are no longer so simple and isolated. We notice that everything is related and that we can only be successful with openness and cooperation. But often the corresponding structures are missing in large software projects. Then we try to insist on the functionality of our beautiful tiny isolated services. We are not ready to see the world out there as it really is. And sometimes software projects fail at this point.

Isn’t it surprising that in the end we always keep falling back on the same realization?

Grafana – How to Build a Datatable From Different Queries

In this tutorial I will show how you can combine different data queries in one datatable. The scenario where I came across this requirement was a Kubernetes dashboard on which I wanted to combine the CPU and memory usage of each node with the OsVersion and the Docker version. These metrics come from different sources: the CPU and memory from the corresponding node_cpu_ and node_memory_ metrics provided by the Node Exporter, and the OsVersion, for example, from the cadvisor_version_info metric. It’s a little bit tricky to come to the following output:

Continue reading “Grafana – How to Build a Datatable From Different Queries”