Since Microsoft has announced that access to Outlook IMAP mailboxes with Basic authentication will soon no longer be possible, it is time to update many ‘older’ Java implementations. The following code example shows how to access outlook.office365.com with OAuth2:
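A minimal sketch of such an OAuth2-based IMAP login with the Jakarta Mail API could look like this (assuming the jakarta.mail namespace and an access token obtained beforehand, e.g. via the Microsoft identity platform; the mailbox address and the token source are placeholders):

import java.util.Properties;

import jakarta.mail.Folder;
import jakarta.mail.Session;
import jakarta.mail.Store;

public class OutlookImapExample {

    public static void main(String[] args) throws Exception {
        String user = "mailbox@example.com";                       // placeholder mailbox
        String accessToken = System.getenv("OAUTH_ACCESS_TOKEN");  // token obtained beforehand

        Properties props = new Properties();
        props.put("mail.imap.ssl.enable", "true");
        // use the XOAUTH2 mechanism so the token is sent instead of a password
        props.put("mail.imap.auth.mechanisms", "XOAUTH2");

        Session session = Session.getInstance(props);
        Store store = session.getStore("imap");
        // the access token is passed in place of the password
        store.connect("outlook.office365.com", 993, user, accessToken);

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        System.out.println("Messages in INBOX: " + inbox.getMessageCount());
        inbox.close(false);
        store.close();
    }
}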
Continue reading “How to access outlook.office365.com IMAP form Java with OAUTH2”

Setup a Public Cassandra Cluster with Docker
UPDATE: I updated this original post to the latest version 4.0 of Cassandra.
In one of my last blogs I explained how you can set up a Cassandra cluster in a Docker Swarm. The advantage of a container environment like Docker Swarm or Kubernetes is that you can run Cassandra with its default settings and without additional security setup. This is because the cluster nodes running within a container environment can connect securely to each other via the Kubernetes or Docker Swarm virtual network and need not publish any ports to the outside world. This kind of setup for a Cassandra cluster can be fine for many cases. But what if you want to set up a Cassandra cluster in a more open network? For example in a public cloud, so you can access the cluster from different services or clients? In this case it is necessary to secure your Cassandra cluster.
Continue reading “Setup a Public Cassandra Cluster with Docker”

Build Your Own Modelling Tool with Eclipse GLSP
Eclipse GLSP is a new graphical language server platform allowing you to build powerful and highly adoptable modelling tools. Like many modern modelling frameworks it is based on Node.js and runs in a web browser. But unlike many other modelling tools, Eclipse GLSP takes a much broader approach. It enforces a strict separation between the graphical modelling and the underlying model logic. With this concept Eclipse GLSP can not only be integrated into different tooling platforms like Eclipse Theia, Microsoft VS Code or the Eclipse desktop IDE, it also allows any kind of extension and integration within such platforms. On the project homepage you can find a lot of examples and videos demonstrating the rich possibilities.

Jakarta EE8, EE9, EE9.1. …. What???
Jakarta EE is the new Java Enterprise platform, as you’ve probably heard. There is a lot of news about this application development framework and also about the rapid development of the platform. Version 9.1 was released in May last year and version 10 is already in a review process. But what does this mean for my own Java project? I was also a bit confused about the different versions, hence my attempt to clarify things.

Is Spring Boot Still State of the Art?
In the following blog post I want to take a closer look at the question of whether the application framework Spring Boot is still relevant for modern Java-based application development. I will take a critical look at its architectural concept and compare it with the Jakarta EE framework. I am aware of how provocative this question is and that it may also provoke some incomprehension. Comparing both frameworks, I am less concerned with the development concept and more with the question of runtime environments.

Both – Spring Boot and Jakarta EE – are strong and well-designed concepts for developing modern microservices. When I talk about Jakarta EE and microservices, I always also talk about Eclipse MicroProfile, which is today the de-facto standard extension for Jakarta EE. When developing a microservice, the concepts of Spring Boot and Jakarta EE are very similar. The reason is that a lot of the technology in today’s Jakarta EE was inspired by Spring and Spring Boot. Concepts like “Convention over Configuration“, CDI or the intensive usage of annotations were first introduced by Spring. And this is proof of the innovative power of Spring and Spring Boot. But I believe that Jakarta EE is today the better choice when looking for a microservice framework. Why do I come to this conclusion?
Continue reading “Is Spring Boot Still State of the Art?”

Migrating to Jakarta EE 9
In this blog post I will document the way we at Imixs-Workflow migrated from Java EE to Jakarta EE 9. The Java Enterprise stack has always been known for providing a very reliable and stable platform for developers. We at Imixs started with Java EE in its early beginnings, in the year 2003. At that time Java EE was not comparable to the platform we know today. For me the most impressive part of the journey with Java EE over the last 17 years was the fact that you can always trust the platform. Even if new concepts and features were introduced, your existing code worked. For a human-centric workflow engine like our open source project Imixs-Workflow, this is an important aspect. A workflow engine has to be sustainable. A long-running business process may take years from its creation to its final state. An insurance process is one example of this kind of business process. In my own customer projects I started running Imixs-Workflow on Glassfish, switched to JBoss, migrated to Payara and run today on Wildfly. Upgrading the Java EE version and switching the server platform was never something special about which you had to write a lot. But with Jakarta EE 9 the situation changed dramatically.
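The main reason is the switch of the package namespace from javax.* to jakarta.*. A minimal sketch of what this means for existing code (the class name is purely illustrative):

// Before (Java EE 8 / Jakarta EE 8):
//   import javax.ejb.Stateless;
// After (Jakarta EE 9+): same API, new package namespace
import jakarta.ejb.Stateless;

@Stateless
public class MyWorkflowService {
    // the business logic stays untouched - only imports and deployment
    // descriptors move from the javax.* to the jakarta.* namespace
}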
Continue reading “Migrating to Jakarta EE 9”

VisualVM & Wildfly running in Docker
In the Imixs-Workflow project we mostly use the Wildfly server to run the Imixs-Workflow engine. If you want to profile your workflow instance in detail, you can use the VisualVM profiling tool. How to use this tool when running Wildfly in a container is the topic of this blog post. You can download VisualVM from GitHub.
When running Wildfly in a container you need to use the remote profiling capabilities of VisualVM to analyse your services. Therefore your Wildfly server running in a Docker container should publish port 9990, which is also the port of the Wildfly web interface. Using the Imixs Wildfly Docker image you can simply launch your server with the option “DEBUG=true”.
Next you need to download the Wildfly version running in your container to your local workstation, as you need some libraries that are only contained in the corresponding Wildfly version. Go to the Wildfly download page to get the version you are running in your container.
Let’s assume you have extracted the Wildfly server package into the following directory
$ /opt/wildfly-18.0.0.Final
then you can start VisualVM with the following options:
$ ./visualvm -cp:a /opt/wildfly-18.0.0.Final/bin/client/jboss-cli-client.jar -J-Dmodule.path=/opt/wildfly-18.0.0.Final/modules
Take note of the correct server path.
Now you can connect to your Wildfly server with a new JMX connection, which you can open from the ‘File’ menu in VisualVM.

To connect, use the following URL:
service:jmx:remote+http://0.0.0.0:9990
Note that you may need an admin user account on your Wildfly server. If you are unsure, open your Wildfly web console first from a web browser:
http://0.0.0.0:9990
ManagedScheduledExecutorService vs EJB Timer
Over the past years I have always used the EJB Timer Service to implement scheduled tasks in my Java Enterprise applications. Since Java EE 7 the ManagedScheduledExecutorService offers a new pattern to implement a scheduler service. The ManagedScheduledExecutorService extends the Java SE ScheduledExecutorService and provides methods for submitting delayed or periodic tasks for execution.
Implementing a ManagedScheduledExecutorService is quite simple. See the following example:
import java.util.concurrent.TimeUnit;

import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import jakarta.ejb.LocalBean;
import jakarta.ejb.Singleton;
import jakarta.ejb.Startup;
import jakarta.enterprise.concurrent.ManagedScheduledExecutorService;
import jakarta.inject.Inject;

@Startup
@Singleton
@LocalBean
public class MyScheduler {

    // container-managed scheduler provided by the application server
    @Resource
    ManagedScheduledExecutorService scheduler;

    @Inject
    MyService myService;

    @PostConstruct
    public void init() {
        // schedule the run() method every 500 milliseconds
        this.scheduler.scheduleAtFixedRate(this::run, 500, 500,
                TimeUnit.MILLISECONDS);
    }

    public void run() {
        myService.processSomething();
    }
}
Compared to an EJB Timer, this pattern seems quite simple to use. But the ManagedScheduledExecutorService is more of a lightweight scheduling framework and does not support features like transaction support or full lifecycle operations (create, read, cancel timers) which are supported by EJB Timers. In addition, EJB Timers can be persisted and so survive a server crash and restart. And in fact I personally ran into a problem with execution exceptions during a redeployment scenario in Wildfly a few days ago. So is an EJB Timer an outdated technology just because it’s an EJB?
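For comparison, the EJB TimerService offers exactly these lifecycle operations programmatically. A minimal sketch, assuming the jakarta.* namespace (class and method names are illustrative):

import java.io.Serializable;

import jakarta.annotation.Resource;
import jakarta.ejb.Singleton;
import jakarta.ejb.Timeout;
import jakarta.ejb.Timer;
import jakarta.ejb.TimerConfig;
import jakarta.ejb.TimerService;

@Singleton
public class MyTimerController {

    @Resource
    TimerService timerService;

    // create a persistent timer firing once after one minute
    public void createReminder(Serializable info) {
        timerService.createSingleActionTimer(60000, new TimerConfig(info, true));
    }

    // read and cancel all existing timers of this bean
    public void cancelReminders() {
        for (Timer timer : timerService.getTimers()) {
            timer.cancel();
        }
    }

    // callback invoked by the container when a programmatic timer fires
    @Timeout
    public void onTimeout(Timer timer) {
        // business logic goes here
    }
}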
The Advantage and Restrictions of EJB Timers
In the early days of my Java EE career I learned that EJB timers are persisted and managed by the EJB container at the application server level. This ensures that the timer is executed correctly without conflicts in scenarios with multiple threads. This means that even in a clustered environment, a persistent EJB timer runs only in one cluster member, which might not necessarily be the same cluster member it was created in. Since we are today mostly talking about horizontally scalable applications spread across multiple servers, this seems to be a restriction. And this was also my first thought when I switched from the EJB Timer to the ManagedScheduledExecutorService.
But on the other hand, it is the common expectation for a timer at a specific point in time to fire only on one of the nodes in order to avoid duplication. For example, you probably do not want to send out meeting notices twice from different nodes. So the fact that a persistent EJB Timer runs only in one instance, even in a large cluster environment, can be an important feature and not a restriction.
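Declaratively, such a persistent timer can be defined with the @Schedule annotation. A minimal sketch for the meeting-notice example (assuming the jakarta.* namespace; class and method names are illustrative):

import jakarta.ejb.Schedule;
import jakarta.ejb.Singleton;

@Singleton
public class MeetingNotificationTimer {

    // persistent=true (the default) means the timer fires on only one
    // cluster member and survives a server restart
    @Schedule(hour = "8", minute = "0", persistent = true)
    public void sendMeetingNotices() {
        // send out the notifications here
    }
}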
Non-Persistent EJB Timers
Since the EJB 3.1 specification there is a variant of non-persistent EJB Timers. Non-persistent timers have similar semantics and behaviour as the original persistent timers, but without the overhead of a data store. This means they have a different life cycle and are easier to use than persistent timers. Non-persistent timers are active only while the application server is active and are not maintained across application server crashes, shutdowns and restarts. But in contrast to the ManagedScheduledExecutorService, the non-persistent EJB Timer is transactional during creation and cancellation, which can be important for many scenarios. If a timer is created within a transaction and that transaction is later rolled back, the creation of the timer is rolled back as well. Similar rules apply to the cancellation of a timer.
This is an example of how an EJB Timer can be implemented:
import jakarta.ejb.EJB;
import jakarta.ejb.Schedule;
import jakarta.ejb.Singleton;

@Singleton
public class MyTimerService {

    @EJB
    MyService myService;

    // automatic non-persistent timer firing every second
    @Schedule(second = "*/1", minute = "*", hour = "*", persistent = false)
    public void doWork() {
        myService.processSomething();
    }
}
In a clustered environment a programmatically created non-persistent timer runs only in the cluster member it was created in, and an automatic non-persistent timer runs in each cluster member that contains the EJB. So this means the non-persistent EJB Timer scales horizontally within a clustered environment – e.g. a Kubernetes cluster. More details about the EJB Timer variants can be found here.
Conclusion
So we have seen how the ManagedScheduledExecutorService and EJB Timers can be used to implement scheduled tasks in Jakarta EE. In my personal opinion you should use EJB Timers if you are running on a Jakarta EE stack. The EJB Timer provides you with more features and is just as scalable as the more lightweight ManagedScheduledExecutorService. But this is just my personal opinion. Choose the technology that best fits your application.
Microprofile OpenAPI and Swagger UI
With the Eclipse MicroProfile framework you can develop microservices quite easily. One of the built-in functionalities is the support for the OpenAPI standard. This means your REST services will automatically be exposed in the OpenAPI format. For example, on a Payara Micro server the REST service resource /api/training/ may look like this:
openapi: 3.0.0
info:
  title: Deployed Resources
  version: 1.0.0
servers:
- url: http://localhost:8080
paths:
  /api/training:
    post:
      operationId: getSomeData
      requestBody:
        content:
          application/xml:
            schema:
              $ref: '#/components/schemas/XMLConfig'
      responses:
        default:
          description: Default Response.
          content:
            application/xml:
              schema:
                type: object
components:
  schemas:
    XMLConfig:
      ....
You can request the OpenAPI resource from the server running your REST service:
http://localhost:8080/openapi
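For illustration, a JAX-RS resource producing a document like the one above could look roughly like the following sketch (the class name and the /api application path are assumptions; XMLConfig stands for the JAXB data class referenced in the schema; the jakarta.* namespace is assumed, use javax.* on older MicroProfile stacks):

import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

// the /api prefix comes from the JAX-RS Application's @ApplicationPath("api")
@Path("/training")
public class TrainingResource {

    // the OpenAPI document is generated automatically from the JAX-RS
    // annotations; the operationId is derived from the method name
    @POST
    @Consumes(MediaType.APPLICATION_XML)
    @Produces(MediaType.APPLICATION_XML)
    public Response getSomeData(XMLConfig config) {
        // process the posted configuration and return a response
        return Response.ok().build();
    }
}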
Swagger UI
The Swagger UI is a web interface which can be used to interact with your REST API via the OpenAPI standard. This is a nice feature, which is for example a built-in functionality of OpenLiberty. But also on other MicroProfile servers like Wildfly or Payara you can add the Swagger UI easily. Just add the following Maven dependency to your microservice:
....
<dependency>
    <groupId>org.microprofile-ext.openapi-ext</groupId>
    <artifactId>openapi-ui</artifactId>
    <version>1.1.3</version>
</dependency>
...
This will automatically activate the Swagger web UI. To access the UI from your web browser, just open the resource /openapi-ui/:
http://localhost:8080/api/openapi-ui/

Docker
If you are running your service in Docker, which is likely to be the case in most projects, you will need to overwrite the Docker-internal host name. Within Eclipse MicroProfile this is very easy via the Config API. You simply need to set the following environment variable:
version: "3.3"
services:
my-service:
image: ....
environment:
MP_OPENAPI_SERVERS: "http://localhost:8080"
....
You can also add multiple server instances by separating them with commas. You will find a complete list of configurable items for OpenAPI in the MicroProfile OpenAPI Specification.
OpenLiberty – Performance
In the course of our open source project Imixs-Office-Workflow, I have now examined OpenLiberty in more detail. And I came to the conclusion that OpenLiberty has very impressive performance.
Docker
I run OpenLiberty in Docker in the version ‘20.0.0.3-full-java8-openj9-ubi’. Our application is a full-featured workflow management suite with a web interface and also a REST API. So for OpenLiberty we use the following feature set:
...
<featureManager>
    <feature>javaee-8.0</feature>
    <feature>microProfile-2.2</feature>
    <feature>javaMail-1.6</feature>
</featureManager>
...
As recommended by OpenLiberty I use the following Dockerfile layout:
FROM openliberty/open-liberty:20.0.0.3-full-java8-openj9-ubi
# Copy postgres JDBC driver
COPY ./postgresql-9.4.1212.jar /opt/ol/wlp/lib
# Add config
COPY --chown=1001:0 ./server.xml /config/server.xml
# Activate Debug Mode...
# COPY --chown=1001:0 ./jvm.options /config/
# Copy sample application
COPY ./imixs-office-workflow*.war /config/dropins/
RUN configure.sh
The important part here is the RUN command at the end of the Dockerfile. This script adds the requested XML snippets and grows the image to be fit for purpose. This makes the Docker build process a little bit slower, but the startup of the image is very fast.
I measured a startup time of roughly 12 seconds. This is very fast for the size and complexity of this application, and it is a little bit faster than the startup of Wildfly with roughly 15 seconds. Only in the case of a hot redeploy of the application does Wildfly seem to be a little bit faster (6 seconds) compared to OpenLiberty (8 seconds).
                       | Open Liberty | Wildfly
Docker Startup Time    | 12 sec       | 15 sec
Application Hot Deploy | 8 sec        | 6 sec
Debug Mode
Note: activating the debug port makes OpenLiberty’s performance very poor, so do not forget to deactivate debugging in production mode! The debug mode can be activated by providing a jvm.options file like this:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777
I have commented on this in the Dockerfile example above.