Why should I Comment my Code?

Today I want to write about why comments are so important in software development and how you can improve them. For me, the most disappointing thing about otherwise good source code is when it is insufficiently commented. And this happens with closed code in the same way as with open source code. In the worst case, which is usually the most common one, source code is not commented at all. Especially in open source projects this shows a certain disdain towards the community that has to read your code. But even in internal software projects I consider missing comments bad behaviour towards your colleagues, and it should be avoided.

Of course, we all know the funny answers in this context: “Good code doesn’t need any comments” or “Take a look into the source code if you don’t understand something.” Is it laziness or ignorance not to write comments? No, I believe that many developers simply do not know how to write good comments. For this reason, I would like to explain a very simple rule that can really improve your source code through comments.

Don’t Explain to Me the What and How!

The biggest mistake you can make when writing comments is assuming you have to explain what you are doing. In this case, you write comments that explain the obvious.

/**
 * Builder Class to build Events
 */
public class EventBuilder { 
 ....
}

As long as the name of a class or method is not completely meaningless, the name should already explain its usage. So please do not explain that a builder class builds an object or that a getter method returns an object. Writing such a comment is a waste of time. By the way, the same waste of time applies to everyone who has to read this kind of useless comment.
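A hypothetical example of such a redundant getter comment, just for illustration:

public class Invoice {

    private float total;

    /**
     * Returns the total.
     */
    public float getTotal() {
        return total;
    }
}

The comment only repeats what the method signature already says and adds no information at all.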

A second example of bad comments is explaining how the code works.

/**
 * Build the event and return it.
 */
public T build() {
   ....
   return event;
}

Of course, you shouldn’t assume that the developers reading your code are complete idiots. You can assume that they can code just as well as you do. And explaining software patterns in the comment of a method or class is pointless.

So now we have seen what bad comments are. But what does a good comment look like?

Start with Why!

There is a great book by Simon Sinek with the title “Start with Why: How Great Leaders Inspire Everyone to Take Action“. The core message of this book is that you should always start by explaining the ‘Why’. The ‘What’ and the ‘How’ usually follow later in more detail. For software development, the ‘What’ and the ‘How’ are indeed in the code and in many cases need no comments.

So starting with the ‘Why’ can make source code comments much more useful – for yourself and for others. Starting with ‘Why’ makes you think about what you’re actually doing. Going back to my previous examples, a comment may look like this:

/**
 * When using Events in the client, the EventBuilder may help 
 * to construct them in the correct way more easily. 
 */
public class EventBuilder { 
 ....
}
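Just to make this ‘why’ a bit more tangible, a hypothetical implementation behind such a comment might look like the following sketch (the nested Event record and the builder methods are invented for illustration):

import java.time.Instant;

/**
 * When using Events in the client, the EventBuilder may help
 * to construct them in the correct way more easily.
 */
public class EventBuilder {

    // minimal event type, only needed for this sketch
    public record Event(String type, Instant timestamp) {
    }

    private String type;
    private Instant timestamp = Instant.now();

    public EventBuilder type(String type) {
        this.type = type;
        return this;
    }

    public EventBuilder timestamp(Instant timestamp) {
        this.timestamp = timestamp;
        return this;
    }

    public Event build() {
        // the builder guarantees that an event always has a type
        if (type == null) {
            throw new IllegalStateException("an event always needs a type");
        }
        return new Event(type, timestamp);
    }
}

A client can then simply call new EventBuilder().type("login").build() without knowing the construction rules in detail – which is exactly the ‘why’ the comment describes.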

So instead of describing what the class is, answer the question why you invented it. The same is true for methods. As explained before, if I want to understand the ‘How’, I can read the source code in detail. But why does the class or method exist? This is what a comment is for:

/**
 * To display the summary of an invoice in the client, 
 * this method calculates the summary including all taxes.
 */
public float summarizeInvoice() {
   ....
   return summary;
}

I don’t want to go into more details. It’s obvious that the better you explain why you have written a class or method, the easier it is for others – and for your future self – to understand your code. So when you start writing your next piece of code, think about why you are doing it and put that into the comment of your class or method header. You’ll see – your colleagues and your community will love it and will much prefer using your code.

Setup a Public Cassandra Cluster with Docker

UPDATE: I have updated this original post to the latest version 4.0 of Cassandra.

In one of my last blog posts I explained how you can set up a Cassandra cluster in a docker-swarm. The advantage of a container environment like docker-swarm or Kubernetes is that you can run Cassandra with its default settings and without an additional security setup. This is because the cluster nodes running within a container environment can connect securely to each other via the Kubernetes or docker-swarm virtual network and do not need to publish any ports to the outside world. Such a setup for a Cassandra cluster can be fine for many cases. But what if you want to set up a Cassandra cluster in a more open network? For example in a public cloud, so you can access the cluster from different services or from your client? In this case it is necessary to secure your Cassandra cluster.

Continue reading “Setup a Public Cassandra Cluster with Docker”

Are You Crazy Still Using JSF!

JSF stands for JavaServer Faces, a web technology that is underestimated by many people. I wonder why that is? And are you actually crazy if you still use JSF? I don’t think so, and I will share some of my thoughts. (To get an impression of the technology, see the Getting Started blog from Rudy De Busscher.)

Jakarta EE and Java Server Faces 4.0

The Specification

First of all, JSF is a specification, which is an important advantage over all the other technologies that JSF is usually compared to. The specification is an open process, accompanied by many developers over the years, that defines a generally solid standard for web applications. This specification process now takes place at the Eclipse Foundation, which sets up rules that follow very high quality standards. This is one of the biggest advantages, as it guarantees that your web application is built on a solid core. Of course, other web frameworks also have large communities, but often these are driven by a single company that does not always take the developers into account. Angular from Google is just one example.

Server Based Framework

But back to JSF. Why is JSF sometimes ridiculed as an outdated technology? I personally started with JSF in its early days in 2001. So it is obviously an old technology. And I am an old developer too ;-). In the meantime, many other web frameworks have evolved. Many of them are based on JavaScript and follow the single page application (SPA) paradigm, running in the browser. The idea is to move the application logic into the browser to make applications faster. This idea came up at a time when not everyone was satisfied with JSF and JavaScript took off. JavaServer Faces – as the name implies – is in contrast a server-based framework. This means the application logic is executed on the server. And this is where we have the big difference. At that time it was a valid argument to reduce the load on the servers. And of course, this should still be a desirable goal today. But the people who use this as an argument against JSF are often the same ones who rave about serverless functions. Therefore, I don’t think we should consider a server-based framework a stupid idea today.

Self-Contained Microservice

In fact, a server-based web framework offers some advantages. Application logic and business logic can be coupled easily. This is achieved within Jakarta EE mainly through CDI, EJB and JPA. These technologies are the backend for JSF components. Jakarta EE provides a very clear understanding of the separation of layers, and JSF fits seamlessly into this concept. The server-side implementation also hides application details from the client. In a JavaScript SPA, by contrast, large parts of the application logic are usually exposed in the browser, which can be a security risk in some cases.

So rendering the application logic on the server side makes things easier for developers who are responsible for both the backend and the frontend part. And this means that applications can often be developed faster. From the point of view of microservice architecture, a JSF application corresponds to the principle of a Self-Contained Service – a pattern which is commonly used in microservice architectures.

Simplicity

Since JSF is often referred to as complex and clumsy, I recently wondered whether this is really true. I migrated one of my Vue.js SPAs to JSF 4.0 and compared the complexity of the resulting code. So finally, I would like to show in an example the simplicity of JSF. (Note: I am not comparing JavaScript code with Java here.)

In JavaScript Frameworks you typically bind your HTML tags to some code by additional tagging.

<!-- Vue.js -->
<input type="text" 
       name="userwebsite" 
       v-model="userwebsite" 
       placeholder="enter your website">
Your Website is: <span>{{userwebsite}}</span>

The SPA framework resolves the tag (v-model) and places the correct value from your model written in JavaScript. It also binds the input field to track changes of your model.

The JSF counterpart does not look much different:

<!-- JSF -->
<h:inputText value="#{userController.website}" 
             pt:placeholder="enter your website" />
Your Website is: <h:outputText value="#{userController.website}" />

Here you also bind your model values to input fields or output text. But the code is executed in Java on the server side, which is often the same place where your backend code is written in Java anyway – even for SPAs.
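For completeness, a minimal backing bean for this binding might look like the following sketch (the class name and scope are assumptions derived from the EL expression #{userController.website} used above):

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Named;

/**
 * Backing bean for the examples above. The EL expression
 * #{userController.website} is resolved against this bean.
 */
@Named("userController")
@RequestScoped
public class UserController {

    private String website;

    public String getWebsite() {
        return website;
    }

    public void setWebsite(String website) {
        this.website = website;
    }
}

In Jakarta EE 10 this is a plain CDI bean, which also illustrates the coupling of application and business logic on the server side mentioned above.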

Ajax is used by JavaScript web frameworks out of the box, so you usually do not need to care about it. In the example above, the span tag is automatically updated when the value of the input changes.

In JSF you also use Ajax, but you have more control over how it behaves. You can simply enclose a part of your page with an f:ajax tag to get the same behaviour:

<!-- JSF -->
<f:ajax>
   <h:inputText value="#{userController.website}" 
                pt:placeholder="enter your website" />
   Your Website is: <h:outputText value="#{userController.website}" />
</f:ajax>

Another example is linking. In a JavaScript framework you again use a kind of tagging to initiate a rendering life-cycle when the user clicks on an element:

<!-- Vue.js -->
<li class="nav-item" v-on:click="showSection('search')">
   <label>Search</label>
</li>

The showSection method implements some business logic in your JavaScript code and is responsible for handling data and changing the layout.

JSF is not much different:

<!-- JSF -->
<li class="nav-item">
   <h:link outcome="search" >
     <label>Search</label>
   </h:link>
</li>

The h:link tag loads a new page from the backend named ‘search.xhtml’ containing the new layout. All the model binding is handled behind the scenes in the backend. For the user there is no difference in behaviour.

So as you can see from the markup, there is not much difference, and writing a JSF frontend is no more and no less complex than writing one in a JavaScript-based web framework.

Conclusion

My personal conclusion is that JSF gives me a clearer and more consistent concept for writing my code within one framework. Backend logic and application design are combined in one technology, resulting in a pattern also known as a self-contained microservice.

To me, this is a valid concept even for modern web applications. And computing application logic on the server side is not crazy at all.

With JSF Version 4.0 you will find a modern and well designed Web technology embedded into the latest Jakarta EE 10 framework.

OnlyOffice and WOPI

This blog post includes some tips and tricks on how to use the WOPI protocol in combination with OnlyOffice.

OnlyOffice is a powerful collaborative web-based document editor. OnlyOffice supports most of the common office document formats for editing spreadsheets, documents, or presentations.

WOPI is a standard web protocol for integrating web-based document editors into a web application. It was initially developed by Microsoft and is adopted by different web editors like OnlyOffice, Collabora and, of course, Microsoft Office. OnlyOffice supports the WOPI standard since version 6.4. Information about using the WOPI API with OnlyOffice can be found here.

Continue reading “OnlyOffice and WOPI”

Wildfly – Elytron – LDAP SecurityDomains for Active Directory

In one of my last blog posts I explained how to set up a security domain in Wildfly Elytron – the new security module. In this blog post I explain how to set up an LDAP security domain for Active Directory:

The ldap-realm

As explained in my last blog, you have to define a security-domain and a security-realm in two separate sections. The following example shows the LDAP configuration to resolve users and roles from an Active Directory. I have removed the non-relevant parts:

<subsystem xmlns="urn:wildfly:elytron:14.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto">
    .....
    <security-domains>
        .....
        <!-- My LDAP domain -->
        <security-domain name="mydomain" default-realm="cached-ldap" permission-mapper="default-permission-mapper">
            <realm name="cached-ldap"/>
        </security-domain>
    </security-domains>
    <security-realms>
        .....
        <!-- my LDAP realm -->
        <ldap-realm name="ldap-realm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="sAMAccountName" use-recursive-search="false" search-base-dn="CN=users,DC=intern,DC=foo,DC=de">
                <attribute-mapping>
                    <attribute from="CN" to="Roles" filter="(member={1})" filter-base-dn="CN=users,DC=intern,DC=foo,DC=de"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <caching-realm name="cached-ldap" realm="ldap-realm"/>
    </security-realms>

    <!-- LDAP Dir Contexts -->
    <dir-contexts>
        <dir-context name="ldap-connection" url="ldap://my-ldap:389" principal="CN=bind_user,CN=users,DC=intern,DC=foo,DC=de">
            <credential-reference clear-text="YOUR-PASSWORD"/>
        </dir-context>
    </dir-contexts>
    .....

You have to adapt the dir-context and the base-dn in the example above to your LDAP settings. The setup uses the sAMAccountName as the user ID and resolves the roles via the ‘member’ attribute of the group entries.

The cached-ldap

The important part is the ‘cached-ldap‘ security realm. In older versions of Wildfly the LDAP realm used a cache by default. In the new version you need to define the cache yourself, and this is what the cached-ldap realm is for. Your security domain points to the cached-ldap realm, and the cached-ldap realm points to the ldap-realm. If you don’t use it, you will see a lot of LDAP requests against your directory.

You can also add attributes to configure the caching, for example:

<caching-realm name="cached-ldap" realm="ldap-realm" maximum-age="300000" />

Find details here.

Logging

For debugging it is helpful to change the log level for org.wildfly.security. For this you simply add the following logger to the logging subsystem:

<logger category="org.wildfly.security">
    <level name="DEBUG"/>
</logger>

Also set the log level of your console handler from “INFO” to “DEBUG”. This setting will give you more insight into what is happening in the background.

From a server bash you can ‘tail’ the security messages like this:

$ tail -f /opt/jboss/wildfly/standalone/log/server.log  | grep "security"

Role Mapping

In some cases it may be necessary to map LDAP group names to specific role names within your application. For this you can use mappers. See the following example, which maps the LDAP group name ‘imixs_users’ to the application role ‘org.imixs.ACCESSLEVEL.AUTHORACCESS’:

<security-domains>
    ....
    <security-domain name="imixsrealm" default-realm="cached-ldap" permission-mapper="default-permission-mapper">
        <realm name="cached-ldap" role-mapper="imixs-user-rolemapper"/>
    </security-domain>
</security-domains>
.....
<mappers>
    .....
    <regex-role-mapper name="imixs-user-rolemapper" pattern="imixs_user" replacement="org.imixs.ACCESSLEVEL.AUTHORACCESS" keep-non-mapped="true"/>
</mappers>
....

The mapper is referenced in the security-domain. I am using a regex role mapper to replace the role name. You will find more background about this role mapping here.
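To illustrate how such a mapped role shows up on the application side, here is a minimal sketch of an EJB protected with the mapped role name (the class and method names are made up for illustration; depending on your Wildfly version the annotations come from the javax.* or jakarta.* namespace):

import jakarta.annotation.security.RolesAllowed;
import jakarta.ejb.Stateless;

/**
 * Hypothetical service that is only accessible for users
 * who were granted the mapped application role.
 */
@Stateless
public class ReportService {

    @RolesAllowed("org.imixs.ACCESSLEVEL.AUTHORACCESS")
    public String createReport() {
        return "report created";
    }
}

With the regex role mapper in place, a user who is a member of the LDAP group ‘imixs_users’ can call this method, even though the application only knows the mapped role name.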

Jakarta EE8, EE9, EE9.1. …. What???

Jakarta EE is the new Java enterprise platform, as you’ve probably heard. There is a lot of news about this application development framework and also about the rapid development of the platform. Version 9.1 was released in May last year and version 10 is already in a review process. But what does this mean for my own Java project? I was also a bit confused about the different versions, hence my attempt to clarify things.

Continue reading “Jakarta EE8, EE9, EE9.1. …. What???”

Wildfly Running on Docker with Custom Setup

WildFly is a modular & lightweight application server based on the Jakarta EE standard.

You can start a Wildfly server with the help of Docker quite easily. To start a blank Wildfly server in its standard configuration run:

$ docker run -it quay.io/wildfly/wildfly

Bundle Your Application

If you want to bundle your own application you just need a simple Dockerfile like this:

FROM quay.io/wildfly/wildfly:26.0.0.Final
COPY ./target/*.war /opt/jboss/wildfly/standalone/deployments/

This will copy your web application or microservice from a Maven project into the Wildfly deployment directory. The application will be deployed automatically during startup.

Wildfly & Microprofile

Wildfly comes with different configuration profiles (e.g. cluster mode, full version, microprofile). These configuration files are located in the server directory /opt/jboss/wildfly/standalone/configuration.

To start your Wildfly container with one of these configurations – for example with the Eclipse MicroProfile configuration – you just need to add a parameter to the startup command, specifying the configuration profile:

FROM quay.io/wildfly/wildfly:26.0.0.Final
COPY ./target/*.war /opt/jboss/wildfly/standalone/deployments/
# Run with microprofiles
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c","standalone-microprofile.xml"]

This Dockerfile will now activate the standalone-microprofile.xml configuration during the next startup.

Custom Configurations

Of course you can also customize your configuration. For this purpose just copy one of the standalone-*.xml files from a running container to your host:

$ docker cp [CONTAINER_ID]:/opt/jboss/wildfly/standalone/configuration/standalone.xml ./my-standalone.xml

Replace [CONTAINER_ID] with the ID of your running wildfly container.

Next you can edit the my-standalone.xml file and add your project-specific configurations. Finally, add your custom file to the container image via the COPY command in your Dockerfile:

FROM quay.io/wildfly/wildfly:26.0.0.Final
COPY ./my-standalone.xml /opt/jboss/wildfly/standalone/configuration/
COPY ./target/*.war /opt/jboss/wildfly/standalone/deployments/
# Run with custom configuration
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c","my-standalone.xml"]

That’s it!

Is Spring Boot Still State of the Art?

In the following blog post I want to take a closer look at the question of whether the application framework Spring Boot is still relevant for modern Java-based application development. I will take a critical look at its architectural concept and compare it against the Jakarta EE platform. I am aware of how provocative the question is and that it also attracts incomprehension. Comparing both frameworks, I am less concerned with the development concept and more with the question of runtime environments.

Both Spring Boot and Jakarta EE are strong and well-designed concepts for developing modern microservices. When I talk about Jakarta EE and microservices, I always also talk about Eclipse MicroProfile, which is today the de-facto standard extension for Jakarta EE. For developing a microservice, the concepts of Spring Boot and Jakarta EE are very similar. The reason is that a lot of the technology in today’s Jakarta EE was inspired by Spring and Spring Boot. The concepts of “Convention over Configuration“, CDI and the intensive usage of annotations were first introduced by Spring. And this is proof of the innovative power of Spring and Spring Boot. But I believe that Jakarta EE is today the better choice when looking for a microservice framework. Why do I come to this conclusion?

Continue reading “Is Spring Boot Still State of the Art?”

Ceph Pacific running on Debian 11 (Bullseye)

In this tutorial I will explain how to set up a Ceph cluster on Debian 11. The Linux distribution is not as relevant as it sounds, but for the latest Ceph release Pacific I am also using the latest Debian release Bullseye.

In contrast to my last tutorial on how to set up Ceph, I will focus a little bit more on the network. Understanding and configuring the Ceph network options will ensure optimal performance and reliability of the overall storage cluster. See also the latest configuration guide from Red Hat.

Continue reading “Ceph Pacific running on Debian 11 (Bullseye)”

Monitoring Web Servers Should Never be Complex

If you run several web servers in your organisation, or even public web servers on the internet, you need some kind of monitoring. If your servers go down for some reason, this is no fun for your colleagues, your customers, or yourself. For that reason we use monitoring tools. And there are a lot of monitoring tools available, providing all kinds of features and concepts. For example, you can monitor the behaviour of your applications, the hardware usage of your server nodes, or even the network traffic between servers. One prominent solution is the open source tool Nagios, which allows you to monitor hardware in every detail. In Kubernetes environments you may use the Prometheus/Grafana operator, which integrates into the concept of Kubernetes and provides a lot of different exporter services to monitor a cluster in various ways. And there is also a large market of monitoring solutions running in the cloud. The cloud solutions advertise that no complex installation is required. But personally I wonder if it is a good idea to send application and hardware metrics to a third-party service.

Continue reading “Monitoring Web Servers Should Never be Complex”