ManagedScheduledExecutorService vs EJB Timer

Over the past years I have always used the EJB Timer Service to implement scheduled tasks in my Java Enterprise applications. Since Java EE 7 the ManagedScheduledExecutorService is a new pattern to implement a scheduler service. The ManagedScheduledExecutorService extends the Java SE ScheduledExecutorService and provides methods for submitting delayed or periodic tasks for execution.

Using a ManagedScheduledExecutorService is quite simple. See the following example:

@Startup
@Singleton
@LocalBean
public class MyScheduler {
  
    @Resource
    ManagedScheduledExecutorService scheduler;    

    @Inject
    MyService myService; 

    @PostConstruct
    public void init() {
        this.scheduler.scheduleAtFixedRate(this::run, 500, 500,
          TimeUnit.MILLISECONDS);
    }    

    public void run() {
        myService.processSomething();
    }
}

Compared to an EJB Timer, this pattern seems quite simple to use. But the ManagedScheduledExecutorService is more of a lightweight scheduling framework: it does not support features like transactions or the full lifecycle operations (create, read, cancel timers) that EJB Timers provide. In addition, EJB Timers can be persisted and so survive a server crash and restart. And in fact I personally ran into a problem with execution exceptions during a redeployment scenario in Wildfly a few days ago. One way to reduce the impact of such redeployments is to cancel the scheduled task when the application shuts down, as shown in the sketch below. So is an EJB Timer an outdated technology just because it is an EJB?
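
A minimal sketch of such a cleanup, assuming you keep the ScheduledFuture returned by scheduleAtFixedRate (imports omitted as in the example above; this is not part of the original example):

@Startup
@Singleton
@LocalBean
public class MyScheduler {

    @Resource
    ManagedScheduledExecutorService scheduler;

    @Inject
    MyService myService;

    private ScheduledFuture<?> scheduledTask;

    @PostConstruct
    public void init() {
        // keep a reference to the scheduled task...
        scheduledTask = this.scheduler.scheduleAtFixedRate(this::run, 500, 500,
          TimeUnit.MILLISECONDS);
    }

    @PreDestroy
    public void shutdown() {
        // ...so it can be cancelled before the application is stopped or redeployed
        if (scheduledTask != null) {
            scheduledTask.cancel(false);
        }
    }

    public void run() {
        myService.processSomething();
    }
}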

The Advantages and Restrictions of EJB Timers

At the very beginning of my Java EE career I learned that EJB timers are persisted and managed by the EJB container on the application server level. This ensures that the timer is executed correctly without conflicts in scenarios with multiple threads. It means that even in a clustered environment, a persistent EJB timer runs only in one cluster member, which is not necessarily the same cluster member it was created in. Since today we are mostly talking about horizontally scalable applications spread across multiple servers, this seems to be a restriction. And this was also my first thought when I switched from the EJB Timer to the ManagedScheduledExecutorService.

But on the other hand, it is the common expectation that a timer for a specific point in time fires only on one of the nodes in order to avoid duplication. For example, you probably do not want to send out meeting notices twice from different nodes. So the fact that a persistent EJB Timer runs only on one instance, even in a large clustered environment, can be an important feature rather than a restriction.

Non-Persistent EJB Timers

Since the EJB 3.1 specification there is a variant of non-persistent EJB Timers. Non-persistent timers have similar semantics and behaviour as the original persistent timers, but without the overhead of a data store. This means they have a different life cycle and are easier to use than persistent timers. Non-persistent timers are active only while the application server is active and are not maintained across application server crashes, shutdowns and restarts. But in contrast to the ManagedScheduledExecutorService, the non-persistent EJB Timer is transactional during creation and cancellation, which can be important in many scenarios. If a timer is created within a transaction and that transaction is later rolled back, the creation of the timer is rolled back as well. Similar rules apply to the cancellation of a timer.

This is an example of how an EJB Timer can be implemented:

@Singleton
public class MyTimerService {
    @EJB
    MyService myService;
  
    @Schedule(second="*/1", minute="*", hour="*", persistent=false)
    public void doWork(){
        myService.processSomething();
    }
}
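
Besides the declarative @Schedule annotation, timers can also be created and cancelled programmatically via the TimerService. The following sketch (not from the original post; the bean and method names are just assumptions) illustrates the lifecycle operations mentioned above. Since creation and cancellation are transactional, a rollback of the surrounding transaction also rolls back these operations:

@Stateless
public class MyTimerManager {

    @Resource
    TimerService timerService;

    // creates a non-persistent interval timer within the current transaction
    public void startTimer() {
        TimerConfig config = new TimerConfig("my-timer-info", false);
        timerService.createIntervalTimer(5000, 5000, config);
    }

    // cancels all timers of this bean - also transactional
    public void stopTimers() {
        for (Timer timer : timerService.getTimers()) {
            timer.cancel();
        }
    }

    @Timeout
    public void onTimeout(Timer timer) {
        // business logic executed on each timeout
    }
}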

In a clustered environment a programmatically created non-persistent timer runs only in the cluster member it was created in, while automatic non-persistent timers run in each cluster member that contains the EJB. So the non-persistent EJB Timer scales horizontally within a clustered environment – e.g. a Kubernetes cluster. More details about the EJB Timer variants can be found here.

Conclusion

So we have seen how the ManagedScheduledExecutorService and EJB Timers can be used to implement scheduled tasks in Jakarta EE. In my personal opinion you should use EJB Timers if you are running on a Jakarta EE stack. The EJB Timer provides more features and scales just as well as the more lightweight ManagedScheduledExecutorService. This is just my personal opinion. Choose the technology that best fits your app.

Kubernetes – Force to Delete volumeattachments

In Kubernetes it may happen that you cannot get rid of a volumeattachment stuck in deletion status. This situation is strange and should not happen, but it does. In this case you can resolve this undesired status as follows:

First list all existing volumeattachments:

$ kubectl get volumeattachment

Copy the name of the volumeattachment causing the problem. Next you can try to delete it with:

$ kubectl delete volumeattachment csi-3a184154fxxxxxxxxxxxxxxxxxxxxx

Check if the volumeattachment was deleted successfully.

If not, you can force the deletion by editing the volumeattachment resource itself – run:

$ kubectl edit volumeattachment csi-3a184154fxxxxxxxxxxxxxxxxxxxxx

Search for a section starting with

  finalizers:
  - external-attacher/.....

Delete these lines so that no 'finalizers' section remains. Then save and close the editor (in vi this is the command ':wq!').

Now your volumeattachment should be deleted.
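
Alternatively, the finalizers can also be removed non-interactively with a single patch command (a sketch, not from the original post – replace the name with your own volumeattachment):

$ kubectl patch volumeattachment csi-3a184154fxxxxxxxxxxxxxxxxxxxxx -p '{"metadata":{"finalizers":null}}' --type=merge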

If anybody knows a better solution let me know 😉

Microprofile OpenAPI and Swagger UI

With the Eclipse MicroProfile framework you can develop microservices quite easily. One of the built-in functionalities is the support for the OpenAPI standard. This means your REST services are automatically exposed in an OpenAPI format. For example, on a Payara Micro server the OpenAPI description of a REST resource /api/training/ may look like this:

openapi: 3.0.0
info:
  title: Deployed Resources
  version: 1.0.0
servers:
- url: http://localhost:8080
paths:
  /api/training:
    post:
      operationId: getSomeData
      requestBody:
        content:
          application/xml:
            schema:
              $ref: '#/components/schemas/XMLConfig'
      responses:
        default:
          description: Default Response.
          content:
            application/xml:
              schema:
                type: object
components:
  schemas:
    XMLConfig:
     ....

You can request the OpenAPI resource from the server running your REST service:

http://localhost:8080/openapi
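
For illustration, an OpenAPI document like the one above could be produced by a plain JAX-RS resource similar to the following sketch (the class names, the @ApplicationPath and the XMLConfig type are assumptions for illustration only; imports are omitted as in the other examples):

@ApplicationPath("/api")
public class RestApplication extends Application {
}

@Path("/training")
public class TrainingResource {

    @POST
    @Consumes(MediaType.APPLICATION_XML)
    @Produces(MediaType.APPLICATION_XML)
    public Response getSomeData(XMLConfig xmlConfig) {
        // process the posted XMLConfig and return some XML result
        return Response.ok().build();
    }
}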

Swagger UI

The Swagger UI is a web interface which can be used to interact with a REST API that provides the OpenAPI standard. This is a nice feature, which is, for example, a built-in functionality of OpenLiberty. But also on other MicroProfile servers like Wildfly or Payara you can add the Swagger UI easily. Just add the following Maven dependency to your microservice:

   ....
   <dependency>
      <groupId>org.microprofile-ext.openapi-ext</groupId>
      <artifactId>openapi-ui</artifactId>
      <version>1.1.3</version>
   </dependency>
   ...

This will automatically activate the Swagger web UI. To access the UI from your web browser, just open the resource /openapi-ui/:

http://localhost:8080/api/openapi-ui/

Docker

If you are running your service in Docker, which is likely to be the case in most projects, you will need to overwrite the Docker-internal host name. Within Eclipse MicroProfile, this is very easy via the Config API. You simply need to set the following environment variable:

version: "3.3"
services:
 
  my-service:
    image: ....
    environment:
      MP_OPENAPI_SERVERS: "http://localhost:8080"
    ....

You can also add multiple server instances by separating them with commas. You will find a complete list of configurable items for OpenAPI in the MicroProfile OpenAPI Specification.
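
For example, two server entries could be configured like this (the hostnames are just placeholders):

    environment:
      MP_OPENAPI_SERVERS: "http://localhost:8080,https://my-public-host.example.com"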

Ceph Octopus running on Debian Buster

In my previous blog post I explained how to run the Ceph storage system on Debian 9. In the meantime the new version 15 (Octopus) was released. This version not only runs on Debian 10 (Buster), it also provides a completely new install process. In the previous release of Ceph you had to run the command line tool 'ceph-deploy'. This tool was not easy to manage and it took a lot of work to get Ceph running.

With the new Octopus release there is a new admin tool called cephadm. This tool is based on Docker, which means there is no need to install additional tools or libraries on your host. The only thing you need is a server running Docker.

Continue reading “Ceph Octopus running on Debian Buster”

Debian Upgrade from Stretch to Buster

Today I updated the Debian system on my working machine from Stretch to Buster. This is just a short reminder for myself. I followed the blog from nixCraft here.

1. Check version

First check your current version and note it in case you need it for fixing problems later.

$ lsb_release -a
 No LSB modules are available.
 Distributor ID: Debian
 Description:    Debian GNU/Linux 9.12 (stretch)
 Release:        9.12
 Codename:       stretch
$ uname -mrs
 Linux 4.9.0-12-amd64 x86_64

2. Update installed packages

First bring your Stretch installation up to date by updating all your installed packages:

$ sudo apt update
$ sudo apt upgrade
$ sudo apt full-upgrade
$ sudo apt --purge autoremove

3. Update /etc/apt/sources.list file

Now you can update your Debian package source list from Stretch to Buster. First check your current entries:

$ cat /etc/apt/sources.list

Now you need to replace the stretch references with buster. This can be done quickly with the sed command:

$ sudo cp -v /etc/apt/sources.list /root/
$ sudo cp -rv /etc/apt/sources.list.d/ /root/
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/*

You can verify your new list:

$ cat /etc/apt/sources.list 

Finally update your new package list:

$ sudo apt update

4. Minimal system upgrade

In the next step you perform a minimal system upgrade to avoid removing too many packages.

$ sudo apt upgrade

Answer the question about restarting services with 'yes'.

5. Upgrading Debian 9 to Debian 10

Now you can run the real upgrade to Debian 10.

$ sudo apt full-upgrade

…this will take some time…

6. Reboot and Verification

Finally reboot your system…

$ sudo reboot

…and check the new version

$ uname -r
$ lsb_release -a

7. Clean up

Run the autoremove command to get rid of old packages:

$ sudo apt --purge autoremove

That's it! Have fun fine-tuning your new Debian Buster!

Build Your Own Kubernetes Cluster

Kubernetes is definitely the most widely used solution when it comes to container platforms. Everyone knows it by now, and it is used successfully not only in large projects. But I think it is also true that many projects do not run Kubernetes themselves but use one of the major platform operators. Why is that? It is not always a good idea to give up control – and certainly not when it comes to your own data.

Continue reading “Build Your Own Kubernetes Cluster”

Traefik v2.2 and Kubernetes Ingress

Since version 2, Traefik supports Kubernetes Ingress and acts as a Kubernetes Ingress controller. This is an alternative to the Traefik-specific IngressRoute objects. With v2.2 you can now use plain Kubernetes Ingress objects together with annotations. Of course you can still use IngressRoute objects if you need them for specific requirements.

I tested this feature with Kubernetes 1.17.3. In this blog post I want to point out the important parts of the configuration. Please note that I provide a detailed setup for Traefik running within a self-managed Kubernetes cluster in my open source project Imixs-Cloud.

Continue reading “Traefik v2.2 and Kubernetes Ingress”

OpenLiberty – Performance

In the course of our open source project Imixs-Office-Workflow, I have now examined OpenLiberty in more detail. And I came to the conclusion that OpenLiberty delivers very impressive performance.

Docker

I run OpenLiberty in Docker with the image version '20.0.0.3-full-java8-openj9-ubi'. Our application is a full-featured workflow management suite with a web interface and a REST API. So for OpenLiberty we use the following feature set:

...
	<featureManager>
		<feature>javaee-8.0</feature>
		<feature>microProfile-2.2</feature>
		<feature>javaMail-1.6</feature>
	</featureManager>
...

As recommended by OpenLiberty I use the following Dockerfile layout:

FROM openliberty/open-liberty:20.0.0.3-full-java8-openj9-ubi
# Copy postgres JDBC driver
COPY ./postgresql-9.4.1212.jar /opt/ol/wlp/lib
# Add config
COPY --chown=1001:0 ./server.xml /config/server.xml

# Activate Debug Mode...
# COPY --chown=1001:0 ./jvm.options /config/

# Copy sample application
COPY ./imixs-office-workflow*.war /config/dropins/

RUN configure.sh

The important part here is the RUN command at the end of the Dockerfile. The configure.sh script adds the requested XML snippets and grows the image to be fit for purpose. This makes the Docker build process a little slower, but the startup of the image is very fast.

I measured a startup time of roughly 12 seconds. This is very fast for the size and complexity of this application, and it is a little faster than the startup of Wildfly with roughly 15 seconds. Only in the case of a hot redeploy of the application does Wildfly seem to be a little faster (6 seconds) compared to OpenLiberty (8 seconds).

                         OpenLiberty   Wildfly
Docker Startup Time      12 sec        15 sec
Application Hot Deploy   8 sec         6 sec

Debug Mode

Note: activating the debug port makes OpenLiberty's performance very poor. So do not forget to deactivate debugging in production mode! The debug mode can be activated by providing a jvm.options file like this:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777

I have commented this out in the Dockerfile example above.

OpenLiberty and Hot-Deployment

The open source application server OpenLiberty from IBM is very suitable for running microservices and web applications in production. But also for development, the server offers very good support for auto-deployment and hot deployment.

By default you can simply drop a new .war file into the folder /config/dropins/ and OpenLiberty will immediately deploy your application. You can configure the behavior of the dropins folder in detail in the server.xml file.

For example, if you add the following tag into your server.xml file:

...
 <applicationManager autoExpand="true" />
....

then your application will be automatically expanded into a new folder at

${server.config.dir}/apps/expanded/APP_NAME/

Now when you deploy your application you will have a file layout like this:

./server.xml
./dropins/myapplication.war
./apps/expanded/myapplication.war/my-page.jsf
./apps/expanded/myapplication.war/WEB-INF/classes/com/foo/SomeAppClass.class

If you use autoExpand="true", the "active" set of files will be the files under the apps/expanded/ folder, which you can then hot-update. This approach is useful if you want to deploy a single .war file and then make tweaks to it after you deploy it, such as in dev mode.

javax.faces.PROJECT_STAGE

Note that hot deployment of JSF files only works if your PROJECT_STAGE parameter is set to 'Development'. So if it is not yet activated, add the following to your web.xml file:

<context-param>
	<param-name>javax.faces.PROJECT_STAGE</param-name>
	<param-value>Development</param-value>
</context-param>

For production it is recommended to set the parameter to ‘Production’. In this mode JSF files will be cached by OpenLiberty internally.

Alternatively you can set the parameter 'javax.faces.FACELETS_REFRESH_PERIOD' to 1, which will also force OpenLiberty to scan for changed JSF files and class files:

<context-param>
	<param-name>javax.faces.FACELETS_REFRESH_PERIOD</param-name>
	<param-value>1</param-value>
</context-param>

Manik Hot-Deploy Plugin

With the Eclipse hot-deploy plugin 'Manik' you can easily enable auto-deployment and hot deployment for OpenLiberty.

If you use the option 'Explode Artifacts' you can deploy the .war as a folder directly into the /config/dropins/ folder of your OpenLiberty installation. The hot-deployment directory is then the .war/ sub-directory after the first deployment. You can disable the 'autoExpand' feature of OpenLiberty in this case. See also the discussion here.