JPA and @OneToMany – How to Optimize?

In this article I want to explain an issue concerning the JPA @OneToMany relationship which I was faced with in one of my own projects. Working with object-oriented languages like Java, we often model relationships between objects. One such relationship is the OneToMany relationship. For example, an object ‘Server’ may have a relationship to an object ‘Configuration’. To make the ‘Configuration’ object generic, we can model the JPA entity in Java like this:

@javax.persistence.Entity
public class Configuration implements java.io.Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue
    private BigInteger id;

    public String itemValue;
    public String itemName;

    // JPA requires a public or protected no-arg constructor
    protected Configuration() {
    }

    public Configuration(String name, String value) {
        itemValue = value;
        itemName = name;
    }
}
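
On the owning side, a ‘Server’ entity can then reference a collection of these configuration items. The following is only a minimal sketch of such a mapping, assuming a unidirectional relationship; the entity layout is my own illustration:

import java.io.Serializable;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Server implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue
    private BigInteger id;

    public String name;

    // unidirectional one-to-many relationship to the generic
    // configuration items of this server
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Configuration> configurations = new ArrayList<>();

    public List<Configuration> getConfigurations() {
        return configurations;
    }
}

Note that without an explicit @JoinColumn, JPA maps such a unidirectional one-to-many relationship through a separate join table, a detail which is a typical starting point for optimization questions.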


Microservices, Verticals and Business Process Management?

If you think about modern architecture, you will probably find that one of the best solutions (for the moment) seems to be a microservice architecture. Microservices are quite a different approach from the monolithic enterprise applications we often designed in the past. In a monolithic application a single software system encapsulates the business logic, the database layer and the UI components. The Java EE architecture provides a perfect framework to build this kind of application to be deployed and executed in a scalable and transactional application server environment.


But monolithic enterprise applications are sometimes difficult to maintain, even if only small changes need to be made to one of their components. This is one reason why the idea came up to split a monolithic application block into several microservices.

One of the questions is: how to change the architecture from a monolithic approach to a modern service-based architecture? A good overview of what this means can be found in Christian Posta’s blog.

Vertical Services

At first glance it seems easy to separate business logic into isolated services. But often this approach ends with a closely spaced set of microservices which are not really loosely coupled. One of the reasons can be the database layer, which survives as a monolithic block behind all the new microservices. This happens because database objects are related to each other, and that is a very realistic picture of our real business world. But it creates complex synchronization points between all of our services and teams, and you have to coordinate all of the changes in the database layer. Often a database change caused by one service also affects other services.


One solution to this problem can be dividing the functionality into cohesive “verticals” which are not driven by technical or organizational aspects. Each vertical will have its own business logic, its own database and an optional UI component. With this approach we don’t need to re-deploy the entire monolithic business-service tier if we make changes to a single database object or to the functionality of one of these verticals. Ideally, a single team can own and operate each vertical as well.


Usually, it is recommended to control the synchronization between the services by sending events. The idea behind this approach is the “Reactive Programming” style: the communication between the services is realized in an asynchronous way, so the business logic of one service layer does not depend on the result of another service layer. A service may or may not react on a specific event. And this is the idea of loose coupling.
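
As a rough illustration of this style, the following sketch uses CDI 2.0 asynchronous events (part of Java EE 8). The event and service classes are hypothetical; note also that CDI events only travel inside a single runtime, so between separately deployed services a message broker would take this role:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.ObservesAsync;
import javax.inject.Inject;

// hypothetical domain event fired by the Order-Service
// (each class in its own file)
public class OrderCreatedEvent {
    public final String orderId;

    public OrderCreatedEvent(String orderId) {
        this.orderId = orderId;
    }
}

@ApplicationScoped
public class OrderService {
    @Inject
    private Event<OrderCreatedEvent> events;

    public void createOrder(String orderId) {
        // ... persist the order ...
        // fire-and-forget: the Order-Service does not wait for
        // or depend on the result of any other service
        events.fireAsync(new OrderCreatedEvent(orderId));
    }
}

@ApplicationScoped
public class InvoiceService {
    // the Invoice-Service may or may not react on the event
    public void onOrderCreated(@ObservesAsync OrderCreatedEvent event) {
        // ... create the invoice for event.orderId ...
    }
}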

But does this fit your business requirements?

The “Business Process Service Architecture”

One problem with decoupling the business logic into separate services is the fact that there still exists an “over-all business logic” behind all these services. This is known as the “business process”. If, in our example above, the Order-Service, Invoice-Service and Logistic-Service are implemented as separate building blocks, there is still the general business process of the “Ordering-Management” in the background, defining states and business rules. For example, if a product is ordered by a customer (Order-Service), the product may not be shipped (Logistic-Service) before an invoice was sent to the customer (Invoice-Service). So it is not sufficient if the Invoice-Service and the Logistic-Service react asynchronously on a new event triggered by the Order-Service without reflecting the business process.

What we can do now is define separate events indicating each phase of the Order-Management-Process. For example, the Order-Service can send an “Order-Ready-For-Invoice-Event” to signal that the invoice needs to be sent. And a new “Order-Ready-For-Shipment-Event” can be triggered by the Invoice-Service to indicate that the invoice for the order was sent to the customer and the product can be shipped. But now we have again created the same problem as we had before with the common database: we couple our services via specific event types which reflect our business process. The business process is now spread across various services.
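
A small hypothetical sketch makes this coupling visible: the Invoice-Service now has to know event types that exist only because of the ordering process:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.ObservesAsync;
import javax.inject.Inject;

// hypothetical process-phase events, one event type per phase of
// the Order-Management-Process (each class in its own file)
public class OrderReadyForInvoiceEvent { /* order data ... */ }

public class OrderReadyForShipmentEvent { /* order data ... */ }

@ApplicationScoped
public class InvoiceService {
    @Inject
    private Event<OrderReadyForShipmentEvent> events;

    // the process knowledge ('first invoice, then shipment') is
    // hard-wired into the service via the event types
    public void onOrderReadyForInvoice(@ObservesAsync OrderReadyForInvoiceEvent event) {
        // ... send the invoice ...
        events.fireAsync(new OrderReadyForShipmentEvent());
    }
}

Every change in the order of the process phases now touches several services.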

To avoid this effect of tightly coupled services we can separate the business process itself as a service. This means that we move the Over-All-Business-Logic out of our services and provide a separate new service layer reflecting only the business process.


I will call this a “Business-Process-Service-Architecture”. In this architecture style each service layer depends on the business-process-service. Events are sent only between a vertical and the Business-Process-Service layer. Our Order-Service, Invoice-Service and Logistic-Service may or may not react on those process events. The advantage of this architecture is that we now have one service which controls the ordering-management process and reflects the state of each process instance. Each vertical can call the business-service layer to query the status of the over-all business process and also use this workflow information for further processing. We can also change our business process independently of our vertical service layers.
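
For example, a vertical could query the status of a running process instance over HTTP. The following JAX-RS client sketch assumes a hypothetical REST endpoint of the Business-Process-Service; the URL and resource path are illustrations only:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class ProcessStatusClient {

    // queries the (hypothetical) Business-Process-Service for the
    // status of a single process instance
    public String getProcessStatus(String instanceId) {
        Client client = ClientBuilder.newClient();
        try {
            return client
                    .target("http://process-service:8080/api/instances/" + instanceId + "/status")
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
        } finally {
            client.close();
        }
    }
}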

BPMN 2.0 and Workflow Engines

One of the most common technologies to describe a business process is the ‘Business Process Model and Notation’ (BPMN 2.0) standard. BPMN was initially designed to describe a business process without all the technical details of a software system. A BPMN diagram is easy to understand and a good starting point to talk about a business process with technicians as well as with management people. Beside the general description of a business process, a BPMN model can also be executed by a process or workflow engine. The workflow management system controls each task from its starting point until it is finished. So, based on the model description, the workflow engine controls the life cycle of a business process. An example of a workflow engine which can be integrated into a Business Process Service Architecture is the open source project Imixs-Workflow.

Beside the general control of the business process, our new service can also collect any kind of meta information from our verticals. The service becomes the central information point in our microservice architecture. We can now change our business process model and integrate new verticals without affecting existing implementations. We have finally decoupled our services. This is one of the most important effects you can achieve with this architecture style. In a future article I will show an example of how to integrate a Business Process Service based on a RESTful service interface.

Continue Reading: How to Design a Business Process Service Architecture

WildFly Undertow – How to Configure a Request Dump

If you need to debug the request headers sent to the WildFly application server, you can configure a Request-Dumper. To do so, change the standalone.xml file and add a filter-ref and a filter configuration into the undertow subsystem section. See the following example:

... 
<subsystem xmlns="urn:jboss:domain:undertow:2.0">
    ....
    <server name="default-server">
        ...
        <host name="default-host" alias="localhost">
            .....
            <filter-ref name="request-dumper"/>
        </host>
    </server>
    ....
    <filters>
        .....
        <filter name="request-dumper" class-name="io.undertow.server.handlers.RequestDumpingHandler" module="io.undertow.core" />
    </filters>
</subsystem>

This will print out all the request information sent by a browser.

Wildfly and MVC 1.0 Ozark

I just started to build a new Java EE web application based on the new MVC 1.0 framework, which will become part of Java EE 8. I tried to run the application on WildFly 9.0.2.

Set up a Maven Project

Setting up an MVC 1.0 web application with Maven is quite simple. I just added the following dependencies into my pom.xml:

<!-- mvc-api -->
<dependency>
  <groupId>javax.mvc</groupId>
  <artifactId>javax.mvc-api</artifactId>
  <version>1.0-pr</version>
</dependency>
<dependency>
  <groupId>org.mvc-spec.ozark</groupId>
  <artifactId>ozark-resteasy</artifactId>
  <version>1.0.0-m03</version>
</dependency>

As Ozark is based on Jersey, you need the jersey-container-servlet. As Ozark is based on Java 8, it is also necessary to start WildFly with JDK 8. Otherwise the WAR file cannot be deployed.

The application class

Next I added an application class with a resource mapping /api/:

@ApplicationPath("api")
public class MyApplication extends Application {
}

This class is necessary so the application can be requested with a URL like:

/myapp/api/hello?name=xx

Here is my controller class:

@Path("hello")
public class HelloController {
 @Inject 
 private Greeting greeting; 
 private static Logger logger = Logger.getLogger(HelloController.class.getName());

 @GET
 @Controller
 @View("/pages/hello.xhtml")
 public void hello(@QueryParam("name") String name) {
 logger.info("set new message: "+name);
 greeting.setMessage("Hello "+name);
 }
}
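
The injected Greeting class is not part of the MVC API; it is just a small CDI model bean holding the message for the view. A minimal sketch of how such a bean could look:

import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

// simple CDI model bean; @Named makes it accessible in the
// view via the EL expression #{greeting.message}
@Named
@RequestScoped
public class Greeting {

    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}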

At first glance the controller is triggered. But currently I am not able to return the target page (in my example /pages/hello.xhtml).

The reason seems to be that the Jersey servlet container does not replace the RESTEasy implementation for the application.

Maybe someone knows how to solve this problem?


Update 15.Aug. 2017

Finally I solved the deployment issues in WildFly. To get Ozark running together with WildFly, you first need to add the Jersey dependencies to your project, so that the pom.xml looks like this:

....
<dependencies>
    <!-- JEE Dependencies -->
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>
    <!-- MVC Dependencies -->
    <dependency>
        <groupId>javax.mvc</groupId>
        <artifactId>javax.mvc-api</artifactId>
        <version>1.0-edr2</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.ozark</groupId>
        <artifactId>ozark</artifactId>
        <version>1.0.0-m02</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-servlet</artifactId>
        <version>2.23.1</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.ext.cdi</groupId>
        <artifactId>jersey-cdi1x</artifactId>
        <version>2.23.1</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.ext</groupId>
        <artifactId>jersey-bean-validation</artifactId>
        <version>2.23.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-validator</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    ....
</dependencies>
....

The important part here is to deactivate the Jersey hibernate-validator dependency. This can be done with the Maven ‘exclusion’ tag.

Wildfly Deployment Descriptors

In the next step we need to provide two WildFly deployment descriptors to ensure that the RESTEasy subsystem is deactivated.

1. Deactivate the WildFly JAX-RS (RESTEasy) Subsystem

To deactivate RESTEasy, add the file “jboss-deployment-structure.xml” into the WEB-INF folder with the following content:

<?xml version="1.0" encoding="UTF-8"?> 
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2"> 
 <deployment> 
  <exclude-subsystems> 
   <subsystem name="jaxrs" /> 
  </exclude-subsystems> 
 </deployment> 
</jboss-deployment-structure>

2. Prevent implicit bean archives without beans.xml

To prevent archives without a beans.xml from being treated as implicit bean archives, add the file “jboss-all.xml” into the WEB-INF folder with the following content:

<jboss xmlns="urn:jboss:1.0"> 
   <weld xmlns="urn:jboss:weld:1.0" require-bean-descriptor="true"/> 
</jboss>

See also: https://docs.jboss.org/author/display/WFLY8/CDI+Reference

3. Add the beans.xml file

Finally, make sure that your web project contains an empty beans.xml file in the WEB-INF folder:

<?xml version="1.0"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1" bean-discovery-mode="all">
</beans>



Update 18.Sep. 2017

In the meantime the Ozark project has worked on a new implementation to make the MVC implementation independent of Jersey. So now it is possible to use Ozark together with WildFly without changing the configuration. You need to include the following dependencies:

 <dependency>
   <groupId>javax.mvc</groupId>
   <artifactId>javax.mvc-api</artifactId>
   <version>1.0-SNAPSHOT</version>
 </dependency>
 <dependency>
   <groupId>org.mvc-spec.ozark</groupId>
   <artifactId>ozark-resteasy</artifactId>
   <version>1.0.0-m03-SNAPSHOT</version>
 </dependency>


WildFly – Heap Size

The default memory settings for WildFly are not very high and can be too low for deployed applications. The default VM settings are typically:

  • -Xms64m
  • -Xmx512m
  • -XX:MaxPermSize=256m

For production mode this can be increased by editing the “bin/standalone.conf” file. To increase the memory size from 512MB to 2GB change the following line from:

JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"

into

JAVA_OPTS="-Xms2048m -Xmx2048m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"

The settings depend on the applications deployed on WildFly.

JSF 2 and bookmarkable URLs

With Java EE 6 and JSF 2 we finally have the great feature of bookmarkable URLs. This fixes one of the most awful shortcomings of JSF: pages were not bookmarkable. With some modifications an application can now become bookmarkable. (See details about bookmarkable URLs in Ryan Lubke’s blog.)

In a JSF web application you will usually have a <h:commandLink> or <h:commandButton> like this:

<h:commandLink action="#{workflowController.load()}">
    <h:outputTextvalue="My action link" />
</h:commandLink>

In this example the action method returns a navigation outcome. Since JSF 2 this action outcome can also be a JSF page URL (e.g. /page/my_page.xhtml), so no explicit navigation rule is necessary anymore. But the result is not bookmarkable.
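
Such an action method could look like the following sketch (the load() method is simplified here for illustration):

public String load() {
    // ... load the business object ...
    // since JSF 2 the outcome can simply be the page path,
    // so no explicit navigation rule is needed
    return "/page/my_page.xhtml";
}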

To get a bookmarkable URL you can replace the JSF component with the new <h:link>. See the following example:

<h:link outcome="#{workflowController.load()}">
    <f:param name="id" value="#{workitem.item['$uniqueid']}"/>
    <h:outputText value="My action link" />
</h:link>

The trick here is that the link is extended with the URL query param ‘id’. Now the HTML outcome of the JSF component will look like this:

<a href="http://localhost:8080/myapp/page/my_page.xhtml?id=1">My action Link</a>

The result is similar to a <h:commandLink>, with the difference that the URL is now set in the browser’s address bar. So far this new link can be bookmarked, but it does not work automatically yet. To resolve the new URL you have to customize your JSF page to evaluate the new query param (in this example my_page.xhtml).

You can add the following metadata tag to extract the ID.

<h:head>
 <f:metadata>
 <f:viewParam name="id" value="#{workflowController.deepLinkId}" />
 </f:metadata>
</h:head>

In my example I have a method ‘setDeepLinkId’ in my workflowController bean. The method just loads the corresponding object into my controller:

public void setDeepLinkId(String adeepLinkId) {
    this.deepLinkId = adeepLinkId;
    if (deepLinkId != null && !deepLinkId.isEmpty()) {
        // load the business object....
        this.load(deepLinkId);
        // finally destroy the deepLinkId to avoid a reload
        // on the next post event
        deepLinkId = null;
    }
}

This method is necessary so that, when the bookmarkable URL is requested later by a user, the corresponding object is loaded before the page is rendered.

That’s it.

h:commandButton

Bookmarkable URLs can become a little bit tricky if you have more complex action methods. For example, I used a <h:commandButton> like this:

<h:commandButton action="#{workflowController.process}" value="My button">
 <f:setPropertyActionListener
    target="#{workflowController.workitem.item['$ActivityID']}"
     value="#{activity.item['numactivityid']}" />
</h:commandButton>

This is a case where replacing <h:commandButton> with <h:button> will not work out of the box, because the outcome property cannot evaluate the action result, which in my case can also be null. To solve this, the navigation rules come back into the game. You can achieve the same result as shown in the first example with a navigation rule placed in the faces-config.xml. See the following example:

<navigation-rule>
   <navigation-case>
   <from-outcome>workitem</from-outcome>
   <to-view-id>/page/my_page.xhtml</to-view-id>
   <redirect>
     <view-param>
       <name>id</name>
       <value>#{workflowController.workitem.item['$uniqueid']}</value>
     </view-param>
   </redirect>
 </navigation-case>
</navigation-rule>

Now I have moved the param ‘id’ out into the navigation rule with the identifier ‘workitem’. If the action result from the h:commandButton is ‘workitem’, the navigation rule computes the bookmarkable URL:

<a href="http://localhost:8080/myapp/page/my_page.xhtml?id=1">My action Link</a>

I hope these short examples help you to build bookmarkable JSF applications.

Using “curl” to request the Imixs-Workflow Rest API

The command line tool ‘curl’ is useful in cases where you just want to check some REST APIs from a console. You can find a lot of information about how to use curl on curl.haxx.se.

If you want to test the Imixs REST API, in most cases you need basic authentication against the workflow server. This is an example of how to send a username/password along with a GET request:

curl --user admin:mypassword http://localhost:8080/imixs-microservice/workflow/worklist

This example returns the worklist of the user ‘admin’ from an Imixs-Workflow REST service running on localhost port 8080.

If you don’t specify the media type, Imixs-Workflow will return HTML output, which you can also see in your browser. But Imixs-Workflow supports the media types JSON and XML as well. To request the same URL in JSON, add an Accept header like this:

curl --user admin:mypassword -H "Accept: application/json" http://localhost:8080/imixs-microservice/workflow/worklist

or if you want to get the same response in XML format:

curl --user admin:mypassword -H "Accept: application/xml" http://localhost:8080/imixs-microservice/workflow/worklist

If you know the UniqueID of a workitem, which is included in the worklist result, you can also request a single workitem from Imixs-Workflow. See the following curl example to request a workitem in JSON format:

curl --user admin:adminadmin -H "Accept: application/json" http://localhost:8080/imixs-microservice/workflow/workitem/14b65352f58-259f4f9b

This example returns the content of the workitem with the UniqueID ’14b65352f58-259f4f9b’. You can also restrict the result to a subset of properties by adding the query parameter ‘items’ (note the single quotes around the URL in the following example; they prevent the shell from interpreting the ‘;’ and ‘$’ characters):

curl --user admin:adminadmin -H "Accept: application/json" 'http://localhost:8080/imixs-microservice/workflow/workitem/14b65352f58-259f4f9b?items=txtname;$processid'

See the Imixs-Workflow REST API documentation for more information.

If you want to test what is possible with the Imixs-Workflow REST API and curl, you can try the Imixs-Microservice. The Imixs-Microservice provides a full featured workflow system based on a REST API. It also supports Docker, so you do not need to install an application server by yourself.