EJB => CDI Migration

In this blog post I will try to explain how to replace Jakarta EE EJBs with CDI beans. In one of the future releases of Jakarta EE (possibly version 12) the EJB concepts will be fully replaced by CDI technology. The reason is simply that EJBs have become more and more outdated, as the technology is based on older concepts that are no longer recommended today. Another goal of the replacement is to make developers’ lives easier by not providing two very similar technologies in parallel. The Imixs-Workflow project is fully based on Jakarta EE and we also use EJBs in some of its core components. So this will also be a kind of travel guide of my own journey from EJB to CDI.

The Basics

So first question: Why will EJBs be removed? The first and most obvious answer is: it does not make sense for the Jakarta EE project to support two similar technologies in parallel. CDI is the newer technology and already provides a lot of the concepts known from EJBs. So in a Jakarta EE project you can often choose to implement a service either as an EJB or as a CDI bean without any difference in the result.

One of the more hidden reasons is that EJBs were invented at a time when the Java VM did not yet offer the performance and functionality that it does today. At that time, it was simply not efficiently possible to use a bean instance in a multi-threaded situation without running into the problem that the VM’s garbage collector could no longer keep up cleaning old objects. This was the reason for the EJB container and its pooling mechanism. That means in EJB a client always gets an EJB instance exclusively and can use it in a thread-safe way. If all EJBs from the pool are in use, a new client request has to wait until one of the pool’s EJB instances is free again. This was and is a very robust and thread-safe mechanism and makes the developer’s life very easy. In a CDI container we don’t have this kind of pooling, and so the first visible result is the different code layout of CDI implementations.

An EJB implementation typically looks like this:

package com.example;

import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;

@Stateless
public class StatelessBeanInEJB {

  @PersistenceContext
  private EntityManager entityManager;

  // The @TransactionAttribute(TransactionAttributeType.REQUIRED)
  // annotation is optional; this is the default already.
  public void transactionalMethod() {
   // ...
  }


  @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
  public void independentTransactionalMethod() {
   // ...
  }


}

Now this is how the same bean looks in CDI, with the help of Jakarta Transactions 2.0:

package com.example;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import jakarta.transaction.Transactional;
import jakarta.transaction.Transactional.TxType;

@ApplicationScoped
public class StatelessBeanInCDI {

  @PersistenceContext
  private EntityManager entityManager;

  @Transactional // The annotation value TxType.REQUIRED is optional; this is the default already.
  public void transactionalMethod() {
   // ...
  }

  @Transactional(TxType.REQUIRES_NEW)
  public void independentTransactionalMethod() {
   // ...
  }


}

The CDI bean is marked with @ApplicationScoped and is no longer pooled. In addition, CDI instances are unsynchronized, while EJB instances are synchronized.

Synchronized vs Unsynchronized – What does this mean?

I’ll explain the important difference between synchronized EJB instances and unsynchronized CDI instances:

EJB (@Stateless) – synchronized:

  • With EJBs, each bean instance from the pool is only used by one thread at a time
  • The container automatically ensures this thread safety
  • If several threads want to access the bean at the same time, they have to fetch a free instance from the pool or wait

This makes implementation easier because you don’t have to worry about thread safety.
However, it can lead to performance degradation under high load because threads have to wait.

CDI (@ApplicationScoped) – unsynchronized:

  • A CDI Bean instance can be used by multiple threads in parallel
  • There is no automatic synchronization by the container
  • The developer is responsible for thread safety

This allows for better performance under high load, as no threads have to wait.
However, this requires a more careful implementation to avoid race conditions.

Here is an example:

import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class UnsynchronizedCounter {
  private int count = 0; // shared state

  // NOT thread-safe!
  public void increment() {
    count++; // can lead to a race condition
  }

  // Thread-safe version
  public synchronized void incrementThreadSafe() {
    count++;
  }
}

So with CDI, we have to pay attention to thread safety ourselves if the bean has shared state. Possible solutions are:

  • Use synchronized methods/blocks
  • Use thread-safe data structures (e.g. AtomicInteger)
  • Work stateless
  • Use a narrower scope like @RequestScoped

The EJB version would automatically be thread-safe, but less performant under high load.
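To illustrate the AtomicInteger approach from the list above, here is a minimal sketch of the counter example without any synchronized methods (the class name AtomicCounter is just an illustration):

import java.util.concurrent.atomic.AtomicInteger;

import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class AtomicCounter {
  // AtomicInteger encapsulates the synchronization itself,
  // so no synchronized keyword is needed
  private final AtomicInteger count = new AtomicInteger(0);

  public void increment() {
    count.incrementAndGet(); // atomic and thread-safe
  }

  public int getCount() {
    return count.get();
  }
}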

Using instance variables in stateless EJBs was always a very bad practice, but it was possible. So if you have clean implementations of EJBs without instance variables, at first glance it should be easy to transfer your EJB into a CDI bean by just replacing the annotation @Stateless with @ApplicationScoped.

But now let’s take a deeper look into the details….

… will be continued ….

How to Mock EJBs within Jakarta EE?

If you use Mockito as a test framework to test your Jakarta EE EJB classes, this can be super easy or a horror. At least if you have some older code, as was the situation when I ran into a strange issue with NullPointerExceptions.

The point is that Mockito has changed massively between version 4 and 5. And you’ll notice this when you just copy & paste a lot of test code from older projects. So first make sure that your Maven dependencies are up to date and that you use at least Mockito version 5.2.0:

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.13.1</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-core</artifactId>
  <version>5.8.0</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-junit-jupiter</artifactId>
  <version>5.8.0</version> 
  <scope>test</scope>
</dependency>

As you can see, I use not only mockito-core but also the new mockito-junit-jupiter framework, which we can use for testing more complex Java beans like EJBs.

If you test a simple POJO class, your test code will still look like this:

package org.imixs.workflow.bpmn;

import java.util.List;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.openbpmn.bpmn.BPMNModel;

public class TestBPMNModelBasic {

  OpenBPMNModelManager openBPMNModelManager = null;
  BPMNModel model = null; // the test model would be loaded in setup() (omitted here)

  @Before
  public void setup() {
    openBPMNModelManager = new OpenBPMNModelManager();
  }

  @Test
  public void testStartTasks() {
    List<Object> startTasks = openBPMNModelManager.findStartTasks(model, "Simple");
    Assert.assertNotNull(startTasks);
  }
}

This is a fictional test example from our Imixs-Workflow project. What you can see here is that I use a simple POJO class (OpenBPMNModelManager) that I create with its constructor in the setup method. And this all works fine!

But if you try the same with an EJB, you may well fail early when creating the EJB mocks. However, Mockito supports you here with the new mockito-junit-jupiter framework in version 5.x.

Take a look at the following example testing a Jakarta EE EJB:

package org.imixs.workflow.engine;

import org.imixs.workflow.ItemCollection;
import org.imixs.workflow.bpmn.OpenBPMNModelManager;
import org.imixs.workflow.exceptions.ModelException;
import org.junit.Assert;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
import org.mockito.junit.jupiter.MockitoExtension;

import static org.mockito.Mockito.when;

@ExtendWith(MockitoExtension.class)
public class TestModelServiceNew {

  @Mock
  private DocumentService documentService;

  @InjectMocks
  ModelService modelServiceMock;

  @BeforeEach
  public void setUp() {
    MockitoAnnotations.openMocks(this);
    when(documentService.load(Mockito.anyString())).thenReturn(new ItemCollection());
  }

  @Test
  public void testGetDataObject() throws ModelException {
   OpenBPMNModelManager openBPMNModelManager = modelServiceMock.getOpenBPMNModelManager();
   Assert.assertNotNull(openBPMNModelManager); 
  }
}

I am using the annotation @ExtendWith(MockitoExtension.class) here to prepare my test class for testing more complex EJB code. With the annotation @InjectMocks I create the EJB service under test, and with the annotation @Mock I inject the additional dependency classes used by my service.

This all looks fine and it works perfectly!

But there is one detail in my second example which can easily be overlooked! The @Test annotation of my test method is now imported from the JUnit Jupiter framework (JUnit 5) and no longer from the old JUnit 4 framework!

...
import org.junit.jupiter.api.Test; // Important!
...

And this is the important change. If you overlook this new import you will run into NullPointerExceptions.

The reason for this issue is that Mockito doesn’t automatically create the modelServiceMock object when you still use import org.junit.Test. The annotation @InjectMocks only works in conjunction with JUnit 5 and Mockito Jupiter. So if you combine the old JUnit 4 annotations with JUnit 5 annotations like @BeforeEach or the JUnit 5 @Test annotation, you will have a mix between JUnit 4 and JUnit 5, which can lead to problems.

Also note that in this example the annotation @Before has changed to @BeforeEach. Mockito depends on this new annotation too and will not execute the when(...) stubbing if the setup method is not annotated with @BeforeEach!
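For quick reference, these are the old JUnit 4 imports and their JUnit 5 (Jupiter) counterparts that must not be mixed:

// JUnit 4              ->  JUnit 5 (Jupiter)
// org.junit.Test       ->  org.junit.jupiter.api.Test
// org.junit.Before     ->  org.junit.jupiter.api.BeforeEach
// org.junit.After      ->  org.junit.jupiter.api.AfterEach
// org.junit.Assert     ->  org.junit.jupiter.api.Assertions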

Sonatype – 401 Content access is protected by token

Today I ran into a Maven problem during the deployment of my snapshot releases to https://oss.sonatype.org. The upload was canceled with a message like this one:

[ERROR] Failed to execute goal org.sonatype.plugins:nexus-staging-maven-plugin:1.6.13:deploy (injected-nexus-deploy) on project imixs-workflow-index-solr: Failed to deploy artifacts: Could not transfer artifact org.imixs.workflow:imixs-workflow:pom:6.0.7-20240619.183701-1 from/to ossrh (https://oss.sonatype.org/content/repositories/snapshots): authentication failed for https://oss.sonatype.org/content/repositories/snapshots/org/imixs/workflow/imixs-workflow/6.0.7-SNAPSHOT/imixs-workflow-6.0.7-20240619.183701-1.pom, status: 401 Content access is protected by token -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :imixs-workflow-index-solr

This may happen if you have overlooked the fact that Sonatype has introduced a new token-based authentication method.

Update your maven settings.xml file

What you need to do first is to remove your hard-coded userid/password from your Maven settings.xml file (located in your home directory under .m2/).

Your server config for ossrh should look like this:

<settings>
   <servers>
    <server>
     <id>ossrh</id>
     <username>token-username</username>
     <password>token-password</password>
    </server>
   </servers>
</settings>

For this you need to generate a token first. If you still use your plaintext userid/password, this will no longer work. To generate a new token:

  1. Login to https://oss.sonatype.org with your normal user account
  2. Select under your login name the menu option “Profile”
  3. Click on the ‘Profile’ tab
  4. Generate a new access token

This will show you the token that replaces your old userid/password in your settings.xml file.

That’s it. Now your deployment should work again.

How to Change the Color of Your BASH Prompt

On Linux servers you sometimes have to switch to the superuser (su). The superuser has privileged rights, and things can go wrong if you are not aware whether you are currently working as a ‘normal’ user or as the superuser. To make this situation more obvious in a Linux shell, you can add colors to your BASH prompt.

You simply have to edit the file ~/.bashrc on Debian systems. For a normal user add this code block:

# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi

if [ "$color_prompt" = yes ]; then
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u\[\033[01;34m\]@\[\033[01;36m\]\h\[\033[01;33m\]\w\[\033[01;35m\]\$ \[\033[00m\]'
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt

And for the root user (/root/.bashrc) change the color settings like this:

# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
force_color_prompt=yes

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi

if [ "$color_prompt" = yes ]; then
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;31m\]\u\[\033[01;34m\]@\[\033[01;36m\]\h\[\033[01;33m\]\w\[\033[01;35m\]\$ \[\033[00m\]'

else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
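The \[\033[01;31m\] sequences are ANSI escape codes: 01 stands for bold, 31 for red and 32 for green, and \033[00m resets the color. You can test the colors directly in a terminal:

$ echo -e "\033[01;32mThis is how the normal user prompt will look\033[00m"
$ echo -e "\033[01;31mThis is how the root prompt will look\033[00m"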

That’s it. Now you have a red marker if you are logged in as a superuser and a green marker if you are working as a normal user.

How to Run LLMs in a Docker Container

LLM stands for Large Language Model, a large-scale AI model that has been trained with an extensive amount of text and code. Besides the well-known and widespread ChatGPT, there are many powerful open source alternatives available today. The advantage of an open source LLM is that you can use such a model in your own application within your own environment. There is no dependency on an external service provider that can raise prices, shut down services, or remove models.

But the question that inevitably arises is: Where to start? At least that’s the question I asked myself. After some research I found out that it isn’t as difficult as it sounds to run a local LLM.

First of all there is a place called Hugging Face providing a kind of marketplace for all kinds of AI models. After you have registered yourself on the page you can search and download all kinds of different models. Of course each model is different and addresses different needs and requirements. But the good news is that there is a kind of common open standard to run an LLM called llama.cpp. llama.cpp allows you to run an LLM with minimal setup and state-of-the-art performance on a wide variety of hardware, locally and in the cloud. And of course there is also a Python binding available. This makes it easy to test an LLM in a Docker container.


My Git – Cheat Sheet

This is just a short collection of Git commands and tricks which I personally did not always remember.

Create a new Tag

To create and push a new tag:

1.) List current tags

$ git tag

2.) Create a new Tag

$ git tag -a <TAG-VERSION> -m "next release" 

3.) Push tag

By default, the git push command doesn’t transfer tags to remote servers. You will have to explicitly push tags to a shared server after you have created them.

$ git push origin <TAG-VERSION>

Create a Branch

To list all existing branches:

$ git branch

to create a new local branch

$ git branch <branch>

and check it out with

$ git checkout <branch>

to push the branch to the remote repo

$ git push origin <branch>

Merge a Branch

Merge another branch into the current (e.g. into the master branch)

List all the branches in your local Git repository using the git branch command:

$ git branch

The output shows all branches and marks the current branch with an *.

Ensure you are on the branch you want to merge into. To switch to the master branch:

$ git checkout master

Now you can start merging. Since merging is a type of commit, it also requires a commit message.

$ git merge -m "Your merge commit message" [source_branch]

Check the result in your current file tree.

Finally push your changes:

$ git push origin

How to resolve Merge Conflicts

Sometimes you may forget to pull before you start working on something. Later you cannot push your commit directly if a colleague has worked on some other artifacts in the meantime. In this case you need to commit your local changes first, pull the changes from your colleague with the --rebase option, and then you can push your own changes back into the remote repo:

# Stage all local changes 
git add .
git commit -am "commit message"
# Pull all changes from colleague and rebase your last commit on top of the upstream 
git pull origin --rebase
# Push all together back into the remote repo
git push
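If the rebase stops because of a real conflict within a file, resolve the conflict markers in your editor and continue the rebase:

# Show which files are in conflict
git status
# ... edit the files and resolve the conflict markers ...
# Mark the conflicts as resolved and continue
git add .
git rebase --continue
# Or abort the rebase and return to the previous state
git rebase --abort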


Install Open JDK 11 on Debian 12 (Bookworm)

In Debian 12 the default JDK is Java 17. In case you need Java 11 instead, you can follow this blog post from Linux Shout.

Here is the short version:

1) Edit your sources.list

Edit the file /etc/apt/sources.list and add the unstable packages at the end of the file

deb http://deb.debian.org/debian unstable main non-free contrib

2) Next Update your apt preferences

Edit the file /etc/apt/preferences and add the following entry:

Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 50

This will make our Debian 12 system only choose the stable packages while updating instead of unstable ones.

3) Install JDK 11

Now you can install JDK 11 and switch the Java version using the following commands:

$ sudo apt update
$ sudo apt install openjdk-11-jdk
$ sudo update-alternatives --config java

The last command allows you to switch between JDK 17 and JDK 11.

How to Use Flameshot in Debian 12 (bookworm)

Flameshot is a nice screen capture tool allowing you to mark a screenshot with lines and text and save the screenshot or copy it into the clipboard.

I have used this tool for years. But on Debian 12 it seems not to work; at least it does not open on my installation.

The trick is to start the program from a terminal window with the option gui:

$ flameshot gui

How to Handle JSF Exceptions in Jakarta EE 10

Exception handling is a tedious but necessary job during the development of modern web applications. And it’s the same for Jakarta EE 10. But if you migrate an existing application to the new Jakarta EE 10, things have changed a little bit, so it can happen that your old errorHandler no longer works. At least this was the case when I migrated Imixs-Office-Workflow to Jakarta EE 10. So in this short tutorial I will briefly explain how to handle JSF exceptions.

First of all you need an ExceptionHandler extending the Jakarta EE 10 ExceptionHandlerWrapper class. The implementation can look like this:


import java.util.Iterator;
import java.util.Objects;

import jakarta.faces.FacesException;
import jakarta.faces.application.NavigationHandler;
import jakarta.faces.context.ExceptionHandler;
import jakarta.faces.context.ExceptionHandlerWrapper;
import jakarta.faces.context.FacesContext;
import jakarta.faces.context.Flash;
import jakarta.faces.event.ExceptionQueuedEvent;
import jakarta.faces.event.ExceptionQueuedEventContext;

public class MyExceptionHandler extends ExceptionHandlerWrapper {

  public MyExceptionHandler(ExceptionHandler wrapped) {
    super(wrapped);
  }

  @Override
  public void handle() throws FacesException {
    Iterator<ExceptionQueuedEvent> iterator = getUnhandledExceptionQueuedEvents().iterator();

    while (iterator.hasNext()) {
      ExceptionQueuedEvent event = iterator.next();
      ExceptionQueuedEventContext context = (ExceptionQueuedEventContext) event.getSource();

      Throwable throwable = context.getException();

      throwable = findCauseUsingPlainJava(throwable);

      FacesContext fc = FacesContext.getCurrentInstance();

      try {
        Flash flash = fc.getExternalContext().getFlash();
        flash.put("message", throwable.getMessage());
        flash.put("type", throwable.getClass().getSimpleName());
        flash.put("exception", throwable.getClass().getName());

        NavigationHandler navigationHandler = fc.getApplication().getNavigationHandler();

        navigationHandler.handleNavigation(fc, null, "/errorhandler.xhtml?faces-redirect=true");

        fc.renderResponse();
      } finally {
        iterator.remove();
      }
    }

    // Let the parent handle the rest
    getWrapped().handle();
  }

  /**
   * Helper method to find the exception root cause.
   * 
   * See: https://www.baeldung.com/java-exception-root-cause
   */
  public static Throwable findCauseUsingPlainJava(Throwable throwable) {
    Objects.requireNonNull(throwable);
    Throwable rootCause = throwable;
    while (rootCause.getCause() != null && rootCause.getCause() != rootCause) {
      System.out.println("cause: " + rootCause.getCause().getMessage());
      rootCause = rootCause.getCause();
    }
    return rootCause;
  }

}

This wrapper overrides the default ExceptionHandlerWrapper. In the method handle() (which is the important one) we search for the root cause of the exception and put some meta information into the JSF flash scope. The flash is a memory that can be used by the JSF page we redirect to: ‘errorhandler.xhtml’.

Next you need to create a custom ExceptionHandlerFactory. This class simply registers our new ExceptionHandler:


import jakarta.faces.context.ExceptionHandler;
import jakarta.faces.context.ExceptionHandlerFactory;

public class MyExceptionHandlerFactory extends ExceptionHandlerFactory {
    public MyExceptionHandlerFactory(ExceptionHandlerFactory wrapped) {
        super(wrapped);
    }

    @Override
    public ExceptionHandler getExceptionHandler() {
        ExceptionHandler parentHandler = getWrapped().getExceptionHandler();
        return new MyExceptionHandler(parentHandler);
    }

}

The new factory needs to be registered in the faces-config.xml file:

 .... 
    <factory>
      <exception-handler-factory>
        org.foo.MyExceptionHandlerFactory
      </exception-handler-factory>
    </factory>
 ....

And finally we can create an errorhandler.xhtml page that displays a user-friendly error message. We can access the flash memory here to display the meta data collected in our ExceptionHandler.

<ui:composition xmlns="http://www.w3.org/1999/xhtml"
	xmlns:c="http://xmlns.jcp.org/jsp/jstl/core"
	xmlns:f="http://xmlns.jcp.org/jsf/core"
	xmlns:h="http://xmlns.jcp.org/jsf/html"
	xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
	template="/layout/template.xhtml">
	<!-- 
		Display a error message depending on the cause of a exception
	 -->
	<ui:define name="content">
	  <h:panelGroup styleClass="" layout="block">
				<p><h4>#{flash.keep.type}: #{flash.keep.message}</h4>
		<br />
		<strong>Exception:</strong>#{flash.keep.exception}
		<br />
		<strong>Error Code:</strong>
		<br />
		<strong>URI:</strong>#{flash.keep.uri}
		</p>
		<h:outputText value="#{session.lastAccessedTime}">
			<f:convertDateTime pattern="#{message.dateTimePatternLong}" timeZone="#{message.timeZone}"
							type="date" />
		</h:outputText>
	<h:form>
	  <h:commandButton action="home" value="Close"
		immediate="true" />			
	</h:form>
  </h:panelGroup>
  </ui:define>

</ui:composition>

That’s it. You can extend and customize this to your own needs.

Find And Replace in ODF Documents

With the ODF Toolkit you get a lightweight Java library to create, search, and manipulate office documents in the Open Document Format. The following tutorial will show some examples of how to find and replace parts of text and spreadsheet documents.

Maven

You can add the ODF Toolkit to your Java project with the following Maven dependency:

	<dependency>
		<groupId>org.odftoolkit</groupId>
		<artifactId>odfdom-java</artifactId>
		<version>0.12.0-SNAPSHOT</version>
	</dependency>

Note: Since version 0.12.0 new methods were added which I will explain in the following examples.

Text Documents

To find and replace parts of an ODF text document you can use the class TextNavigation. This class allows you to search with regular expressions in a text document and navigate through the content.

The following example shows how to find all text selections containing the names ‘John’ or ‘Marry’ and replace each selection with ‘User’:

OdfTextDocument odt = (OdfTextDocument) OdfDocument.loadDocument(inputStream);
TextNavigation textNav;

textNav = new TextNavigation("John|Marry", odt);
while (textNav.hasNext()) {
	TextSelection selection = textNav.next();
	logger.info("Found " + selection.getText() + 
                    " at Position=" + selection.getIndex());
	selection.replaceWith("User");
}

It is also possible to change the style of a selection while iterating through a document. See the following example:

    OdfStyle styleBold = new OdfStyle(contentDOM);
    styleBold.setProperty(StyleTextPropertiesElement.FontWeight, "bold");
    styleBold.setStyleFamilyAttribute("text");
    // bold all occurrences of "Open Document Format"
    TextNavigation search = new TextNavigation("Open Document Format", doc);
    while (search.hasNext()) {
       TextSelection selection = search.next();
       selection.applyStyle(styleBold);
    }

SpreadSheet Documents

Finding and manipulating cells in a spreadsheet document is also very easy. In the case of a .ods document you can find a cell by its coordinates:

InputStream inputStream = getClass().getResourceAsStream("/test-document.ods");
OdfSpreadsheetDocument ods = (OdfSpreadsheetDocument) OdfDocument.loadDocument(inputStream);

OdfTable tbl = ods.getTableByName("Table1");
OdfTableCell cell = tbl.getCellByPosition("B3");
// set a new value
cell.setDoubleValue(100.0);
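Note that all changes are made in memory only. To persist the result you have to save the document. A minimal sketch, assuming the result should be written to a new file out.ods:

// write the modified spreadsheet back to disk
ods.save(new java.io.File("out.ods"));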

There are many more methods in the ODF Toolkit. Try it out and join the community.