Running Payara on Docker in Debug Mode

The Payara project provides a well-maintained Docker image on Docker Hub. Since version 5.192 you can easily create a Docker image which runs Payara in debug mode. You just need to add the environment variable “PAYARA_ARGS”:

FROM payara/server-full:5.192
...
ENV PAYARA_ARGS --debug

COPY my-example.war $DEPLOY_DIR

Or you can also set the environment variable in your docker-compose.yml file:

version: "3.6"
services:
....  
  my-server:
    image: payara/server-full:5.192
    environment:
      PAYARA_ARGS: "--debug"
    ports:
      - "8080:8080"
      - "4848:4848"
      - "8181:8181"
      - "9009:9009"
....

After that, Payara starts in debug mode and listens on port 9009.

Cassandra and Docker-Swarm

Running an Apache Cassandra cluster with Docker Swarm is quite easy using the official Docker image. Docker Swarm allows you to set up several Docker worker nodes running on different hardware or virtual servers. Take a look at my example docker-compose.yml file:

version: "3.2"

networks:
  cluster_net:
    external:
      name: cassandra-net  
  
services:  

  ################################################################
  # The Cassandra cluster
  #   - cassandra-node1
  ################################################################        
  cassandra-001:
    image: cassandra:3.11
    environment:
      CASSANDRA_BROADCAST_ADDRESS: "cassandra-001"
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
      placement:
        constraints:
          - node.hostname == node-001
    volumes:
        - /mnt/cassandra:/var/lib/cassandra 
    networks:
      - cluster_net

  ################################################################
  # The Cassandra cluster
  #   - cassandra-node2
  ################################################################        
  cassandra-002:
    image: cassandra:3.11
    environment:
      CASSANDRA_BROADCAST_ADDRESS: "cassandra-002"
      CASSANDRA_SEEDS: "cassandra-001"
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
      placement:
        constraints:
          - node.hostname == node-002
    volumes:
        - /mnt/cassandra:/var/lib/cassandra 
    networks:
      - cluster_net

I am running each Cassandra service on a specific host within my Docker Swarm. We cannot use the built-in scaling feature of Docker Swarm because we need to define a separate data volume for each service. See the section ‘volumes’.

The other important part is the two environment variables ‘CASSANDRA_BROADCAST_ADDRESS’ and ‘CASSANDRA_SEEDS’.

‘CASSANDRA_BROADCAST_ADDRESS’ defines a container name for each Cassandra node within the Cassandra cluster. This name matches the service name. As both services run in the same network ‘cluster_net’, the two Cassandra nodes find each other via the service name.

The second environment variable ‘CASSANDRA_SEEDS’ defines the seed node and needs to be set for the second service only. This is necessary even though a Cassandra cluster is ‘master-less’.
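To illustrate the service-name discovery, a client container attached to the same ‘cluster_net’ network can reach the cluster simply by using the service name as contact point. The following is only a minimal sketch using the DataStax Java driver 3.x; the client class and the query are just examples:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class ClusterCheck {
    public static void main(String[] args) {
        // the swarm service name acts as a host name inside the 'cluster_net' network
        Cluster cluster = Cluster.builder()
                .addContactPoint("cassandra-001")
                .build();
        Session session = cluster.connect();
        // simple sanity check against a system table
        ResultSet rs = session.execute("SELECT release_version FROM system.local");
        System.out.println("Cassandra version: " + rs.one().getString("release_version"));
        cluster.close();
    }
}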

That’s it!

Mailspring – an Alternative for Thunderbird

The new open source e-mail client Mailspring is possibly an alternative to Thunderbird. I am running Debian Linux, and there are not many different mail clients available. Mailspring seems to become more and more interesting.

How to Install Mailspring on Linux Debian

To install Mailspring on Debian, first download the latest .deb package from the download page. To install the client run:

sudo dpkg -i mailspring-*-amd64.deb 

Apache Cassandra and Java EE

In this blog post I will show you how we use Apache Cassandra in our open source project Imixs-Archive. Imixs-Archive is a service which we use in Imixs-Workflow to push business data into a Cassandra cluster. The service provides a REST API based on JAX-RS and uses the DataStax driver to write the data into the Cassandra cluster.

The problem is that to open a connection you first need to set up a Cluster object and connect to your keyspace to get a Session object. This is time-consuming and slows down the REST service call if you do it during each request. But within Java EE you can solve this problem easily.
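One possible approach (a simplified sketch, not necessarily the exact Imixs-Archive implementation) is to open the connection once in a @Startup singleton EJB and reuse the Session in the JAX-RS resources; the contact point and keyspace names are just placeholders:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

@Startup
@Singleton
public class ClusterService {

    private Cluster cluster;
    private Session session;

    @PostConstruct
    void init() {
        // build the Cluster and connect to the keyspace only once at startup
        cluster = Cluster.builder()
                .addContactPoint("cassandra-001")   // placeholder contact point
                .build();
        session = cluster.connect("my_keyspace");   // placeholder keyspace
    }

    public Session getSession() {
        // reused by the JAX-RS resources, no connection setup per request
        return session;
    }

    @PreDestroy
    void close() {
        cluster.close();
    }
}

A JAX-RS resource can then simply inject this ClusterService via @EJB and call getSession() without any connection overhead per request.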


Microprofile CustomConfigSource with Database

With the Microprofile-Config API there is a new and easy way to deal with configuration properties in an application. The API allows you to access config and property values from different sources like:

  • System.getProperties() (ordinal=400)
  • System.getenv() (ordinal=300)
  • all META-INF/microprofile-config.properties files (ordinal=100)
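A property defined in any of these sources can simply be injected into a CDI bean. The following is only a minimal sketch; the bean and property names are just examples:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class MyService {

    // resolved from the config source with the highest ordinal
    @Inject
    @ConfigProperty(name = "my.example.property", defaultValue = "some-default")
    String myProperty;
}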

You can find a good introduction to the Microprofile Config API here. And of course you can also implement your own config source. But most of the examples are based on reading custom config values from an existing file, like in the example here. In this blog post I will show how you can implement a Microprofile ConfigSource based on values read from a database.
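The basic idea is to implement the ConfigSource interface and read the key/value pairs via JDBC. The following is a simplified sketch, not the final implementation from the full post; the JNDI datasource name and the table layout are just assumptions:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

import javax.naming.InitialContext;
import javax.sql.DataSource;

import org.eclipse.microprofile.config.spi.ConfigSource;

public class DatabaseConfigSource implements ConfigSource {

    @Override
    public String getName() {
        return "DatabaseConfigSource";
    }

    @Override
    public int getOrdinal() {
        // higher than the default property file sources
        return 500;
    }

    @Override
    public Map<String, String> getProperties() {
        Map<String, String> properties = new HashMap<>();
        try {
            // look up a container managed datasource (name is just an example)
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/configuration");
            try (Connection con = ds.getConnection();
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name, value FROM CONFIGURATION")) {
                while (rs.next()) {
                    properties.put(rs.getString("name"), rs.getString("value"));
                }
            }
        } catch (Exception e) {
            throw new IllegalStateException("failed to read configuration from database", e);
        }
        return properties;
    }

    @Override
    public String getValue(String key) {
        return getProperties().get(key);
    }
}

To activate the custom config source, its fully qualified class name has to be registered in the file META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource.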


How to configure Security in Open Liberty Application Server?

I started to run our Imixs-Workflow engine on the Open Liberty application server. One important thing in Imixs-Workflow is the authentication against the workflow engine. In Open Liberty, security can be configured in the server.xml file. But it took me some time to figure out the correct configuration of the role mapping in combination with the @RunAs annotation which we use in our service EJBs.
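To illustrate the Java side, a service EJB can declare a run-as role as in the following generic sketch; the role name ‘MANAGER’ is just an example and has to be mapped to a user or group in the server.xml:

import javax.annotation.security.DeclareRoles;
import javax.annotation.security.RunAs;
import javax.ejb.Stateless;

@DeclareRoles({ "MANAGER" })
@RunAs("MANAGER")
@Stateless
public class MyService {

    public void process() {
        // calls from this EJB to other beans are propagated
        // with the security role 'MANAGER'
    }
}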


Microprofile Metrics API: How to Create a Metric Programmatically

The Microprofile Metrics API is a great way to extend a microservice with custom metrics. There are a lot of examples of how you can add different metrics using annotations.

@Counted(name = "my_counter", absolute = true,
         tags = { "category=ABC" })
public void countMeA() {
...
}

@Counted(name = "my_counter", absolute = true,
         tags = { "category=DEF" })
public void countMeB() {
...
}

In this example I use tags to specify variants of the same metric name. With MP Metrics 2.0, tags are used in combination with the metric’s name to create the identity of the metric. So using different tags with the same metric name will result in an individual metric output for each identity.

But in case you want to create a custom metric with custom tags, you cannot use annotations. This is typically the case if the tags are computed from application data at runtime. The following example shows how you can create a metric with tags programmatically:

...
@Inject
@RegistryType(type = MetricRegistry.Type.APPLICATION)
MetricRegistry metricRegistry;
...
// define the metadata of the metric
Metadata m = Metadata.builder()
        .withName("my_metric")
        .withDescription("my description")
        .withType(MetricType.COUNTER)
        .build();
// tags computed at runtime
Tag[] tags = { new Tag("type", "ABC") };
Counter counter = metricRegistry.counter(m, tags);
counter.inc();
...

Eclipse – UTF-8 Encoding

When developing a web application, encoding is an important issue. Usually UTF-8 encoding is the best choice if you need to implement a multi-lingual web application.

However, in Eclipse there is a default setting of ISO-8859-1 encoding for so-called ‘Java Properties Files’.

If you try to build a Maven project where UTF-8 is typically used, the text in resource bundles will be broken. To change this setting you need to explicitly set the default encoding to ‘UTF-8’ and press the ‘Update’ button to the right of this input field.

I recommend setting the default encoding for Java Properties Files to UTF-8.