Cassandra and Docker-Swarm

Running an Apache Cassandra cluster with Docker Swarm is quite easy using the official Docker image. Docker Swarm allows you to set up several Docker worker nodes running on different hardware or virtual servers. Take a look at my example docker-compose.yml file:

version: "3.2"

networks:
  cluster_net:
    external:
      name: cassandra-net  
  
services:  

  ################################################################
  # The Cassandra cluster
  #   - cassandra-001
  ################################################################
  cassandra-001:
    image: cassandra:3.11
    environment:
      CASSANDRA_BROADCAST_ADDRESS: "cassandra-001"
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
      placement:
        constraints:
          - node.hostname == node-001
    volumes:
        - /mnt/cassandra:/var/lib/cassandra 
    networks:
      - cluster_net

  ################################################################
  # The Cassandra cluster
  #   - cassandra-002
  ################################################################
  cassandra-002:
    image: cassandra:3.11
    environment:
      CASSANDRA_BROADCAST_ADDRESS: "cassandra-002"
      CASSANDRA_SEEDS: "cassandra-001"
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
      placement:
        constraints:
          - node.hostname == node-002
    volumes:
        - /mnt/cassandra:/var/lib/cassandra 
    networks:
      - cluster_net

I am running each Cassandra service on a specific host within my Docker Swarm. We cannot use the built-in scaling feature of Docker Swarm because we need to define a separate data volume for each service. See the ‘volumes’ section.

The other important part is the two environment variables ‘CASSANDRA_BROADCAST_ADDRESS’ and ‘CASSANDRA_SEEDS’.

‘CASSANDRA_BROADCAST_ADDRESS’ defines a container name for each Cassandra node within the cluster. This name matches the service name. As both services run in the same network ‘cluster_net’, the two Cassandra nodes can find each other via their service names.

The second environment variable, ‘CASSANDRA_SEEDS’, defines the seed node and only needs to be set for the second service. This is necessary even though a Cassandra cluster is ‘master-less’.
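
To bring the cluster up, first create the overlay network that the compose file declares as ‘external’, and then deploy the stack. A minimal sketch, assuming your swarm is already initialized and you name the stack ‘cassandra’:

docker network create --driver overlay cassandra-net
docker stack deploy -c docker-compose.yml cassandra

Once both services are running, you can verify the ring by executing ‘nodetool status’ inside one of the Cassandra containers.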

That’s it!

Mailspring – an Alternative for Thunderbird

The new open source e-mail client Mailspring is possibly an alternative to Thunderbird. I am running Linux Debian, and there are not that many different mail clients available. Mailspring seems to become more and more interesting.

How to Install Mailspring on Linux Debian

To install Mailspring on Linux Debian, first download the latest .deb package from the Download page. To install the client, run:

sudo dpkg -i mailspring-*-amd64.deb 
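
If dpkg complains about missing dependencies, they can usually be resolved afterwards with apt:

sudo apt-get install -f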

Apache Cassandra and Java EE

In this blog post I will show you how we use Apache Cassandra in our open source project Imixs-Archive. Imixs-Archive is a service which we use in Imixs-Workflow to push business data into a Cassandra cluster. The service provides a REST API based on JAX-RS and uses the DataStax driver to write the data into the Cassandra cluster.

The problem is that for a connection you first need to set up a Cluster object and connect to your keyspace to get a Session object. This is time-consuming and slows down the REST service call if you do it during the request. But within Java EE you can solve this problem easily.
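
For example, a singleton EJB can build the Cluster and Session once at application startup and share the Session across all requests. The following is a minimal sketch based on the DataStax driver 3.x API; the contact point and keyspace names are placeholders, not our real Imixs-Archive configuration:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

@Singleton
@Startup
public class ClusterService {

    private Cluster cluster;
    private Session session;

    @PostConstruct
    void init() {
        // executed once at application startup, so no REST
        // request has to pay the connection cost
        cluster = Cluster.builder()
                .addContactPoint("cassandra-001") // placeholder host
                .build();
        session = cluster.connect("my_keyspace"); // placeholder keyspace
    }

    public Session getSession() {
        return session;
    }

    @PreDestroy
    void tearDown() {
        session.close();
        cluster.close();
    }
}

A JAX-RS resource can then inject this ClusterService via @EJB and reuse the shared Session in every request.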


Microprofile CustomConfigSource with Database

With the new Microprofile-Config API there is a new and easy way to deal with configuration properties in an application. The Microprofile-Config API allows you to access config and property values from different sources like:

  • System.getProperties() (ordinal=400)
  • System.getenv() (ordinal=300)
  • all META-INF/microprofile-config.properties files

You can find a good introduction to the Microprofile Config API here. And of course you can also implement your own config source. But most of the examples are based on reading custom config values from an existing file, like in the example here. In this blog post I will show how you can implement a Microprofile ConfigSource based on values read from a database.
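
The core of such a solution is an implementation of the interface org.eclipse.microprofile.config.spi.ConfigSource. The following is only a rough sketch; loadPropertiesFromDatabase() is a hypothetical helper that you would back with JDBC or JPA:

import java.util.HashMap;
import java.util.Map;

import org.eclipse.microprofile.config.spi.ConfigSource;

public class DatabaseConfigSource implements ConfigSource {

    @Override
    public String getName() {
        return "DatabaseConfigSource";
    }

    @Override
    public int getOrdinal() {
        // higher than the default file-based sources (100)
        return 850;
    }

    @Override
    public Map<String, String> getProperties() {
        return loadPropertiesFromDatabase();
    }

    @Override
    public String getValue(String key) {
        return getProperties().get(key);
    }

    private Map<String, String> loadPropertiesFromDatabase() {
        Map<String, String> result = new HashMap<>();
        // read key/value rows from a config table here...
        return result;
    }
}

To activate the custom config source, register the fully qualified class name in the file META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource.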


How to configure Security in Open Liberty Application Server?

I started to run our Imixs-Workflow engine on the Open Liberty application server. One important thing in Imixs-Workflow is the authentication against the workflow engine. In Open Liberty, security can be configured in the server.xml file. But it took me some time to figure out the correct configuration of the role mapping in combination with the @RunAs annotation which we use in our service EJBs.
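
To sketch the direction: Open Liberty provides a basicRegistry for users and groups, and the role mapping is done in an application-bnd element. The following server.xml fragment is only an illustration with placeholder user, group, and role names, not our real Imixs configuration:

<featureManager>
    <feature>appSecurity-2.0</feature>
</featureManager>

<basicRegistry id="basic" realm="BasicRealm">
    <user name="admin" password="adminpassword" />
    <group name="MyManagerGroup">
        <member name="admin" />
    </group>
</basicRegistry>

<application location="myapp.war">
    <application-bnd>
        <security-role name="manager-role">
            <group name="MyManagerGroup" />
            <!-- identity used for EJBs annotated with @RunAs -->
            <run-as userid="admin" password="adminpassword" />
        </security-role>
    </application-bnd>
</application>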


Microprofile – Metric API: How to Create a Metric Programmatically

The Microprofile Metric API is a great way to extend a microservice with custom metrics. There are a lot of examples of how you can add different metrics by annotations.

@Counted(name = "my_coutner", absolute = true, 
tags={"category=ABC"})
public void countMeA() {
...
}

@Counted(name = "my_coutner", absolute = true,
tags={"category=DEF"})
public void countMeB() {
...
}

In this example I use tags to specify variants of the same metric name. With MP Metrics 2.0, tags are used in combination with the metric’s name to create the identity of the metric. So using different tags for the same metric name will result in an individual metric output for each identity.

But in case you want to create a custom metric with custom tags, you cannot use annotations. This is typically the case if the tags are computed from application data at runtime. The following example shows how you can create a metric with tags programmatically:

...
@Inject
@RegistryType(type = MetricRegistry.Type.APPLICATION)
MetricRegistry metricRegistry;
...
Metadata m = Metadata.builder()
        .withName("my_metric")
        .withDescription("my description")
        .withType(MetricType.COUNTER)
        .build();
Tag[] tags = { new Tag("type", "ABC") };
Counter counter = metricRegistry.counter(m, tags);
counter.inc();
...
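
When scraping the metrics endpoint in Prometheus format, each tag combination should then show up as its own line, roughly like this (the exact naming, e.g. the ‘_total’ suffix, depends on the MP Metrics version and implementation):

application_my_metric_total{type="ABC"} 1.0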

Eclipse – UTF-8 Encoding

When developing a web application, encoding is an important issue. Usually UTF-8 encoding is the best choice if you need to implement a multi-lingual web application.

However, in Eclipse there is a default setting of ISO-8859-1 encoding for so-called ‘Java Properties Files’.

If you build a Maven project, where UTF-8 is typically used, the text in resource bundles will be broken. To change this setting, open Window > Preferences > General > Content Types and select ‘Text > Java Properties File’. Explicitly set the default encoding to ‘UTF-8’ and press the ‘Update’ button right of the input field.

I recommend setting the default encoding for Java Properties Files to UTF-8.

Wildfly – Broken pipe (Write failed)

We at Imixs are running JBoss/Wildfly as the application server for our web and REST services. Today I stumbled into a problem when I tried to post large data via a JAX-RS POST request. The exception I saw in the server log was at first not very meaningful:

imixsarchiveservice_1  | 18:50:21,531 INFO  [org.apache.http.impl.execchain.RetryExec] (EJB default - 2) I/O exception (java.net.SocketException) caught when processing request to {}->http://imixsofficeworkflow:8080: Broken pipe (Write failed)
imixsarchiveservice_1 | 18:50:21,531 INFO [org.apache.http.impl.execchain.RetryExec] (EJB default - 2) Retrying request to {}->http://imixsofficeworkflow:8080
imixsarchiveservice_1 | 18:50:21,537 INFO [org.apache.http.impl.execchain.RetryExec] (EJB default - 2) I/O exception (java.net.SocketException) caught when processing request to {}->http://imixsofficeworkflow:8080: Broken pipe (Write failed)
imixsarchiveservice_1 | 18:50:21,537 INFO [org.apache.http.impl.execchain.RetryExec] (EJB default - 2) Retrying request to {}->http://imixsofficeworkflow:8080
imixsarchiveservice_1 | 18:50:21,545 INFO [org.apache.http.impl.execchain.RetryExec] (EJB default - 2) I/O exception (java.net.SocketException) caught when processing request to {}->http://imixsofficeworkflow:8080: Broken pipe (Write failed)
imixsarchiveservice_1 | 18:50:21,545 INFO [org.apache.http.impl.execchain.RetryExec] (EJB default - 2) Retrying request to {}->http://imixsofficeworkflow:8080

My corresponding JAX-RS implementation was quite simple:

@POST
@Produces(MediaType.APPLICATION_XML)
@Consumes({ MediaType.APPLICATION_XML, MediaType.TEXT_XML })
public Response postSnapshot(XMLDocument xmlworkitem) {
    try {
        ...
    } catch (Exception e) {
        e.printStackTrace();
        return Response.status(Response.Status.NOT_ACCEPTABLE).build();
    }
}

The data object ‘xmlworkitem’ in my case is an XML stream containing more than 24 MB of data.

The Problem

After some research I found that the reason for my problem was the default HTTP server configured in the Wildfly standalone.xml. The default configuration looks like this:

<server name="default-server">
    <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/>
    .....
</server>

What you can’t see is that there is a default max-post-size of 25485760 bytes (about 25 MB). But of course you can overwrite this setting by adding the attribute max-post-size, e.g. with a maximum of 100 MB:

<server name="default-server">
    <http-listener name="default" max-post-size="104857600" socket-binding="http" redirect-socket="https" enable-http2="true"/>
    .....
</server>

Now my REST interface works as expected and also accepts large amounts of data.

Note: I do not use the Wildfly CLI tool to configure the server because I run my server as a Docker container with a preconfigured standalone.xml file. But you can also find the solution in the Wildfly forum.
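
For completeness: on a standalone server the same attribute should also be settable via the jboss-cli tool, with a command along these lines (untested in my setup, since I only changed standalone.xml):

/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-post-size,value=104857600)
:reload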