JsonForms – Custom Component for SelectItems

JsonForms is a powerful and very flexible framework to build complex forms just from a JSON Schema. A form can contain various controls, and controls can be separated into categories and layout sections. This makes JsonForms a first-choice framework if you have to deal with dynamic forms in a JavaScript application.

JsonForms already provides different so-called Renderer Sets to handle complex input widgets. Furthermore, you can also easily create your own widget if you have special needs not supported by JsonForms out of the box.


Why should I Comment my Code?

Today I want to write about why comments are so important in software development and how you can improve them. For me, the most disappointing thing about good source code is when it is insufficiently commented. And this happens with closed code in the same way as with open source code. In the worst case, and this is usually the most common one, source code is not commented at all. Especially in open source projects, this shows disdain for the community reading your code. But even in internal software projects, I consider missing comments to be bad behaviour towards your colleagues, which should be avoided.

Of course, we all know the funny answers in this context. “Good code doesn’t need any comments” or “Take a look into the source code if you don’t understand something.” Is it laziness or ignorance not to write comments? No, I believe that many developers simply do not know how to write good comments. For this reason, I would like to explain a very simple rule for how to really improve your source code through comments.

Don’t Explain to Me the What and How!

The biggest mistake you can make when writing comments is assuming you have to explain what you are doing. In this case, you write comments that explain the obvious.

/**
 * Builder Class to build Events
 */
public class EventBuilder { 
 ....
}

As long as the name of a class or method is not absolutely meaningless, the name should explain the usage of that class or method. So please do not explain that a builder class builds an object or that a getter method returns an object. Writing such a comment is a waste of time. By the way, the same applies to everyone who has to read this kind of useless comment.

A second example of bad comments is explaining how the code works.

/**
 * Build the event and return it.
 */
public T build() {
   ....
   return event;
}

Of course, you shouldn’t assume that the users reading your code are complete idiots. You can assume that other developers can code just as well as you do. And explaining software patterns in the comment of a method or class is pointless.

So now we have seen what bad comments are. But what does a good comment look like?

Start with Why!

There is a great book by Simon Sinek with the title “Start with Why: How Great Leaders Inspire Everyone to Take Action”. The core message of this book is that you should always start by explaining the ‘Why’. The ‘What’ and ‘How’ usually explain the details later. For software development, the ‘What’ and ‘How’ are indeed in the code and need no comments in many cases.

So starting with the ‘Why’ can make source code comments much more useful – for yourself and for others. Starting with ‘Why’ makes you think about what you’re actually doing. So going back to my previous examples a comment may look like this:

/**
 * When using Events in the client, the EventBuilder may help 
 * to construct them in the correct way more easily. 
 */
public class EventBuilder { 
 ....
}

So instead of describing what the class is, answer the question why you invented it. The same is true for methods. As explained before, if I want to understand the ‘How’, then I can read the source code in detail. But why does the class or method exist? This is what a comment is for:

/**
 * To display the summary of an invoice in the client, 
 * this method calculates the summary including all taxes.
 */
public float summarizeInvoice() {
   ....
   return summary;
}

I don’t want to go into more detail. It’s obvious that the better you explain why you have written a class or method, the easier it is for others, and for yourself, to understand your code. So when you start writing your next piece of code, think about why you’re doing this and comment that into your class or method header. You’ll see – your colleagues and your community will love it and will much prefer using your code.

Switching from containerd to cri-o

For the last 3 days I tried to install Kubernetes 1.25.4 on a Debian 11 (Bullseye) box without success. The problem was that the kubeadm init process always hung with the message:

....
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1121 08:17:12.320743    8096 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.25.4 (linux/amd64) kubernetes/fdc7750" 'https://x.x.y.y:6443/healthz?timeout=10s'
I1121 08:17:12.321047    8096 round_trippers.go:508] HTTP Trace: Dial to tcp:x.x.y.y:6443 failed: dial tcp x.x.y.y:6443: connect: connection refused
....

Even though I tried several different tutorials and guides, I failed to solve this issue.

Using cri-o instead of containerd…..

Kubernetes supports different Container Runtimes, and containerd is only one of them. Maybe containerd and Debian 11 are not the best friends. I don’t know…

cri-o is an alternative lightweight Container Runtime for Kubernetes. After I switched from containerd to cri-o, everything worked like a charm. So here is my short guide on how to install cri-o on a fresh Debian 11 box.

Note: If you have already installed containerd you need to remove it first!
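The following is only a rough sketch of how such a cleanup could look on Debian. The exact package name (containerd or containerd.io) depends on which repository it was installed from, so adjust it to your setup:

# Stop and disable the containerd service
$ sudo systemctl disable --now containerd

# Remove the package (may be named 'containerd' or 'containerd.io',
# depending on the repository it was installed from)
$ sudo apt-get remove --purge containerd

# Optionally remove the old configuration
$ sudo rm -rf /etc/containerd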

Install cri-o on Debian 11

As usual for Kubernetes, first make sure that you have enabled the necessary kernel modules and set up the iptables rules:

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sudo sysctl --system

Next you need to add the repositories maintained by opensuse.org:

$ sudo -i
$ OS=Debian_11
$ VERSION=1.23

$ echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

Now you can start installing cri-o from the new repository:

$ sudo apt update
$ sudo apt-get install -y cri-o cri-o-runc
$ sudo apt-mark hold cri-o cri-o-runc

# Start and enable CRI-O
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now

That’s it. To verify that your cri-o runtime is up and running, call:

$ sudo systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-11-21 15:14:44 UTC; 21min ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 2372 (crio)
      Tasks: 12
     Memory: 770.4M
        CPU: 28.268s
     CGroup: /system.slice/crio.service
             └─2372 /usr/bin/crio

Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.035184530Z" level=info msg="Created container 0878bbfc957e8a7fb069b83a9101c9386d0bee5ea14c10c81ea0379c14cd3240: kube>
Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.038925952Z" level=info msg="Starting container: 0878bbfc957e8a7fb069b83a9101c9386d0bee5ea14c10c81ea0379c14cd3240" id>
Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.049779604Z" level=info msg="Started container" PID=9385 containerID=0d50842810cb5a0632b137a16e1d29845f4dc3cb9e8e8fc3>
Nov 21 15:18:12 master-1 crio[2372]: time="2022-11-21 15:18:12.063878077Z" level=info msg="Started container" PID=9383 containerID=0878bbfc957e8a7fb069b83a9101c9386d0bee5ea14c10c8>
Nov 21 15:22:56 master-1 crio[2372]: time="2022-11-21 15:22:56.803994649Z" level=info msg="Checking image status: registry.k8s.io/pause:3.8" id=2521c53d-4b83-4b95-8501-363d83ac149>
Nov 21 15:22:56 master-1 crio[2372]: time="2022-11-21 15:22:56.804517064Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4873874c08efc72e9729683a83ffbb7502ee7>
Nov 21 15:27:56 master-1 crio[2372]: time="2022-11-21 15:27:56.810307321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.8" id=4b38e75e-8e03-4007-8554-323eb6c404a>
Nov 21 15:27:56 master-1 crio[2372]: time="2022-11-21 15:27:56.810772581Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4873874c08efc72e9729683a83ffbb7502ee7>
Nov 21 15:32:56 master-1 crio[2372]: time="2022-11-21 15:32:56.814927245Z" level=info msg="Checking image status: registry.k8s.io/pause:3.8" id=ba81dd6e-491d-43ed-a114-5b69a980569>
Nov 21 15:32:56 master-1 crio[2372]: time="2022-11-21 15:32:56.815618456Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4873874c08efc72e9729683a83ffbb7502ee7

Now you can initialize your Kubernetes cluster:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 

Cassandra – Upgrade from Version 3.11 to 4.0

In my last blog post ‘Setup a Public Cassandra Cluster with Docker‘ I described how to set up a Cassandra cluster with Docker in a public network. The important part of that blog post was how to secure the inter-node and client-node communication in such a scenario. In this blog post I will just cover some details about migrating from version 3.11 to version 4.0.

General Upgrade from 3.x to 4.0

In general it is quite simple to upgrade a Cassandra node from version 3.x to 4.0, because version 4.0 can handle the table files from version 3. At a minimum you need to change your Docker run command to point to a 4.0 image:

docker run --name cassandra -d \
        -e CASSANDRA_BROADCAST_ADDRESS=<YOUR-PUBLIC-IP> \
        -e CASSANDRA_SEEDS=<COMMA SEPARATED IP LIST OF EXISTING NODES> \
        -p 7000:7000 \
        -p 9042:9042 \
        -v ~/cassandra.yaml:/etc/cassandra/cassandra.yaml\
        -v ~/cqlshrc:/root/.cassandra/cqlshrc\
        -v ~/security:/security\
        -v /var/lib/cassandra:/var/lib/cassandra\
        --restart always\
        cassandra:4.0.6

The cassandra.yaml File

Before you can start the new Cassandra node, you need to update the cassandra.yaml file.

First, I recommend starting a local Cassandra Docker container and copying the original cassandra.yaml file from the running container. This is necessary because a lot of parameters and settings have changed from version 3.x to 4.0.
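One possible way to get hold of the original 4.0 configuration file is to start a throwaway container and copy the file out of it. This is just a sketch, assuming the cassandra:4.0.6 image used above:

# start a temporary container just to extract the default config
docker run --name cassandra-tmp -d cassandra:4.0.6

# copy the original cassandra.yaml into your home directory
docker cp cassandra-tmp:/etc/cassandra/cassandra.yaml ~/cassandra.yaml

# remove the temporary container again
docker rm -f cassandra-tmp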

Now you can tweak the cassandra.yaml file. In parallel you can check your current cluster configuration from a running node with docker:

docker exec -it cassandra cat /etc/cassandra/cassandra.yaml

First take care of the following parameters, which should be set to your previous configuration values:

  • cluster_name
  • num_tokens
  • authenticator
  • seed_provider
  • listen_address (usually commented out)
  • broadcast_address
  • broadcast_rpc_address

If you use the server_encryption_options as explained in my last post, you need to take care of the following sections:

...
server_encryption_options:
    #internode_encryption: none
    internode_encryption: all
    enable_legacy_ssl_storage_port: true
    keystore: /security/cassandra.keystore
    keystore_password: mypassword
    truststore: /security/cassandra.truststore
    truststore_password: mypassword

# enable or disable client/server encryption.
client_encryption_options:
    enabled: true
    optional: false
    keystore: /security/cassandra.keystore
    keystore_password: mypassword
    require_client_auth: false
....
# enable password authentication!
authenticator: PasswordAuthenticator
...

The important change is the new parameter ‘enable_legacy_ssl_storage_port‘, which needs to be set to ‘true’ during the migration.

Expose Port 7000

Since version 4.0, port 7001 is deprecated. This port was used in older versions for the encrypted inter-node communication. Now port 7000 handles both encrypted and unencrypted communication. So it is sufficient to expose only port 7000 for inter-node communication.

But as long as your cluster contains nodes running version 3.11, you need to set the new parameter ‘enable_legacy_ssl_storage_port‘ to ‘true’. This parameter tells your 4.0 node to still use port 7001 when connecting to older nodes.

    # When set to true, encrypted and unencrypted connections are allowed on the storage_port
    # This should _only be true_ while in unencrypted or transitional operation
    # optional defaults to true if internode_encryption is none
    # optional: true
    # If enabled, will open up an encrypted listening socket on ssl_storage_port. Should only be used
    # during upgrade to 4.0; otherwise, set to false.
    enable_legacy_ssl_storage_port: true

Note: The parameter ‘enable_legacy_ssl_storage_port‘ is only needed as long as your cluster still has nodes running version 3.x, which is typically only the case during the migration phase. Once all nodes are upgraded, you can set it back to ‘false’.

Once you have completed the settings, you can start the node again in version 4.0.6.

Java – DataStax Driver

If you have a Java client using the DataStax Java Driver to connect to your Cassandra cluster, make sure that you use the latest driver version:

<!-- DataStax Java Driver -->
<dependency>
	<groupId>com.datastax.cassandra</groupId>
	<artifactId>cassandra-driver-core</artifactId>
	<!-- for cassandra 4.0 use 3.11.3 or later -->
	<version>3.11.3</version>
	<scope>compile</scope>
</dependency>

Firewall

If you are running a firewall as explained in my last post, you need to take care of the new port settings. Port 7001 should no longer be needed.
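Just as an illustration, and assuming ufw as the firewall (which may not be the tool you are actually using), the adjustment could look like this:

# inter-node communication (encrypted and unencrypted since 4.0)
sudo ufw allow 7000/tcp

# CQL client connections
sudo ufw allow 9042/tcp

# the legacy encrypted storage port is no longer needed
sudo ufw delete allow 7001/tcp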

Setup a Public Cassandra Cluster with Docker

UPDATE: I have updated this original post to the latest version 4.0 of Cassandra.

In one of my last blogs I explained how you can set up a Cassandra cluster in a docker-swarm. The advantage of a container environment like docker-swarm or Kubernetes is that you can run Cassandra with its default settings and without additional security setup. This is because the cluster nodes running within a container environment can connect securely to each other via the Kubernetes or docker-swarm virtual network and need not publish any ports to the outside world. This kind of setup for a Cassandra cluster can be fine for many cases. But what if you want to set up a Cassandra cluster in a more open network? For example in a public cloud, so you can access the cluster from different services or from your client? In this case it is necessary to secure your Cassandra cluster.


How To Install a Fast & Small Eclipse IDE

Eclipse 2022-09 has just been released, and this is a good moment to talk about the IDE and its installation.

I always have the problem that my Eclipse IDE is polluted with countless plugins that I do not use and that make working with Eclipse rather tedious. So in this blog I will shortly give my recommendation on how to install a fast & small Eclipse IDE.


Are You Crazy Still Using JSF!

JSF stands for JavaServer Faces, a web technology that is underestimated by many people. I wonder why that is. And are you actually crazy if you still use JSF? I don’t think so, and I will share some of my thoughts. (To get an impression of the technology, see the Getting Started blog from Rudy De Busscher.)

Jakarta EE and Java Server Faces 4.0

The Specification

First of all, JSF is a specification, which is an important advantage over all the other technologies that JSF is usually compared to. The specification is an open process that has been accompanied by many developers for years to define a generally solid standard for web applications. This specification process recently moved to the Eclipse Foundation, which sets up rules that follow very high quality standards. This is one of the biggest advantages, as it guarantees that your web application is built on a solid core. Of course, other web frameworks also have large communities, but often these are represented by a single company that does not always take the developers into account. Angular from Google is just one example.

Server Based Framework

But back to JSF. Why is JSF sometimes ridiculed as an outdated technology? I personally started with JSF in its early beginning in 2001. So it’s obviously an old technology. And I am an old developer too ;-). In the meantime, many other web frameworks have evolved. Many of them are based on JavaScript and follow the single-page application (SPA) paradigm, running in the browser. The idea is to move the application logic into the browser to make applications faster. This idea came up at a time when not everyone was satisfied with JSF and JavaScript took off. Java Server Faces – as the name implies – is, in contrast, a server-based framework. This means the application logic is executed on the server. And this is where we have the big difference. At that time it was a valid argument to reduce the load on the servers. And of course, this should still be a desirable goal today. But the people who use this as an argument against JSF are often the same ones who rave about serverless functions. Therefore, I don’t think we should consider a server-based framework a stupid idea today.

Self-Contained Microservice

In fact, a server-based web framework offers some advantages. In this way, application logic and business logic can be easily coupled. This is achieved within Jakarta EE mainly through CDI, EJB and JPA. These technologies are the backend for JSF components. Jakarta EE provides a very clear understanding of the separation of layers, and JSF fits seamlessly into this concept. The server-side implementation also hides application details from the client. In a JavaScript SPA, in contrast, large parts of the application logic usually sit unprotected in the browser, which can be a security risk in some cases.

So rendering the application logic on the server side makes it easier for developers who are responsible for both the backend and the frontend part. And this leads to the fact that applications can be developed faster in many cases. From the point of view of microservice architecture, a JSF application corresponds to the principle of Self-Contained Services. This is a pattern which is commonly used in microservice architectures.

Simplicity

Since JSF is often referred to as complex and clumsy, I recently wondered if this is really true. I migrated one of my Vue.js SPAs to JSF 4.0 and compared the complexity of the resulting code. So finally, I would like to show in an example the simplicity of JSF. (Note: I do not compare JavaScript code with Java here.)

In JavaScript Frameworks you typically bind your HTML tags to some code by additional tagging.

<!-- Vue.js -->
<input type="text" 
       name="userwebsite" 
       v-model="userwebsite" 
       placeholder="enter your website">
Your Website is: <span>{{userwebsite}}</span>

The SPA framework resolves the tag (v-model) and places the correct value from your model written in JavaScript. It also binds the input field to track changes of your model.

The JSF counterpart does not look much different:

<!-- JSF -->
<h:inputText value="#{userController.website}" 
             pt:placeholder="enter your website" />
Your Website is: <h:outputText value="#{userController.website}" />

Here you also bind your model values to input fields or output text. But the code is executed in Java on the server side, and even for SPAs the backend code is often written in Java anyway.
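For completeness, a backing bean for the markup above could look like the following sketch. Only the bean name (‘userController’) and the ‘website’ property are taken from the EL expressions; the rest is my own assumption based on plain Jakarta Faces and CDI:

import java.io.Serializable;

import jakarta.faces.view.ViewScoped;
import jakarta.inject.Named;

/**
 * Hypothetical backing bean for the example above. The EL expression
 * #{userController.website} resolves to the 'website' property of this bean.
 */
@Named
@ViewScoped
public class UserController implements Serializable {

    private String website;

    public String getWebsite() {
        return website;
    }

    public void setWebsite(String website) {
        this.website = website;
    }
}

With such a bean in place, the input field and the output text are wired to the same server-side property, and no additional JavaScript is needed.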

Ajax is used by JavaScript web frameworks out of the box, so you usually do not need to care about it. In the example above, the span tag is automatically updated when the value of the input changes.

In JSF you also use Ajax, but you have more control over how it behaves. You can simply enclose a part of your page with an f:ajax tag to get the same behaviour:

<!-- JSF -->
<f:ajax>
   <h:inputText value="#{userController.website}" 
                pt:placeholder="enter your website" />
   Your Website is: <h:outputText value="#{userController.website}" />
</f:ajax>

Another example is linking. In a JavaScript framework you use again a kind of tagging to initiate a rendering life-cycle when the user clicks on an element:

<!-- Vue.js -->
<li class="nav-item" v-on:click="showSection('search')">
   <label>Search</label>
</li>

The showSection method implements some business logic in your JavaScript code and is responsible for handling data and changing the layout.

JSF is not much different:

<!-- JSF -->
<li class="nav-item">
   <h:link outcome="search" >
     <label>Search</label>
</li>

The h:link tag loads a new page named ‘search.xhtml’ from the backend containing the new layout. All the model binding is handled behind the scenes in the backend. For the user there is no difference in behaviour.

So as you can see from the markup, there is not much difference, and writing a JSF frontend is neither more nor less complex than writing one in a JavaScript-based web framework.

Conclusion

My personal conclusion is that JSF gives me a clearer and more consistent concept for writing my code within a framework. Backend logic and application design are combined in one technology resulting in a pattern also known as self-contained microservice.

To me, this is a valid concept even for modern web applications. And computing application logic on the server side is not crazy at all.

With JSF version 4.0 you will find a modern and well-designed web technology embedded into the latest Jakarta EE 10 framework.

Fix Eclipse

Eclipse is an IDE providing a lot of tools to develop software in various languages. Installing Eclipse is just a matter of downloading the latest version and unzipping it somewhere in your home folder.

But Eclipse comes with some non-optimal default settings. So it’s better to fix these settings before you start coding.

Change the Memory Settings

First open the eclipse.ini file located in your install folder. You will find the following memory settings:

-Xms256m
-Xmx2048m

These settings are typically too low if you have 8 GB or more of memory available in your notebook. So double both values first:

-Xms512m
-Xmx4g

Change the Workspace Settings

After you have launched Eclipse, you should change some other default settings in the preferences dialog. Open Window -> Preferences and change the following defaults:

General -> Workspace: change the ‘Text file encoding’ to UTF-8

General -> Workspace: change ‘New Text file line delimiter’ to Unix