JSF Best Practice

I have been developing with JavaServer Faces (JSF) since its early days with version 1.0. Today, 17 years later, we have version 2.3 and with it a lot of improvements and new features. I’ve made many mistakes in using this standard web framework in my own applications, and therefore I would like to share some best-practice rules in this blog. I hope this will simplify the work with JSF – for you and also for me – while refactoring my own applications. Most of what I write here comes from the great book ‘The Definitive Guide to JSF in Java EE 8’ written by Bauke Scholtz (BalusC) and Arjan Tijms. I recommend buying this book if you already have some experience with JSF. So let’s start…
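As a small foretaste of the kind of rule I mean – a minimal sketch, not taken verbatim from the book: since JSF 2.3 you should let CDI manage your beans instead of using the deprecated JSF managed-bean annotations:

import java.io.Serializable;
import javax.faces.view.ViewScoped;
import javax.inject.Named;

// Since JSF 2.3, beans should be CDI-managed: @Named replaces the deprecated
// @ManagedBean, and javax.faces.view.ViewScoped is the CDI-compatible
// replacement for the old javax.faces.bean.ViewScoped.
@Named
@ViewScoped
public class UserController implements Serializable {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}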

Manage Big Data With Apache Cassandra

In this article, I will share my experience with Cassandra and how you can manage big data in an effective way. Apache Cassandra is a high-performance, extremely scalable, fault-tolerant (i.e., no single point of failure), distributed, non-relational database. But Cassandra differs from SQL databases and RDBMSs in some important aspects. If, like me, you come from the world of SQL databases, Cassandra’s data concept is hard to understand at first – it took me several weeks to do so. So let’s see what the difference is.
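To give a first idea of that difference, here is a minimal sketch using the DataStax Java driver 4.x. The keyspace demo and the messages table are purely hypothetical examples; the point is that in Cassandra you model tables around your queries, with a partition key that controls how data is distributed across the cluster:

import com.datastax.oss.driver.api.core.CqlSession;

public class CassandraDataModelSketch {
    public static void main(String[] args) {
        // Connects to 127.0.0.1:9042, the driver's default contact point;
        // "datacenter1" is the default datacenter name of a single-node setup.
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("datacenter1")
                .build()) {
            session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                    + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            // Unlike a normalized RDBMS schema, this table is designed around a
            // single query ("all messages of a user"): the partition key user_id
            // determines which node stores the data, and the clustering column
            // msg_id sorts the rows within the partition.
            session.execute("CREATE TABLE IF NOT EXISTS demo.messages ("
                    + "user_id uuid, msg_id timeuuid, body text, "
                    + "PRIMARY KEY ((user_id), msg_id))");
        }
    }
}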

An Alternative to Kubernetes

Kubernetes is a container-orchestration system which helps you automate the deployment, scaling and management of containerized applications. Originally this platform was designed by Google; today it is part of the Cloud Native Computing Foundation. Kubernetes is surely one of the major players in the market of container-orchestration platforms.

But what many do not know is how complex this platform is when it is used in smaller projects. To understand this, you need to know that Kubernetes was designed for the operation of large cloud environments as they are run by Google, Amazon or Microsoft. This means that with the help of Kubernetes you can manage not just one server, but hundreds of servers with thousands of services. For most projects, this power is superfluous.

WWW Inventor Tim Berners-Lee Launches a Project to Save the Internet

The WWW inventor Tim Berners-Lee has launched a new open platform called “Solid”. With Solid, users can share their data with others without having to surrender their sovereignty to a corporation. Users should be able to decide for themselves who can access their data and which apps may use it.

To solve the problem of how to control personal data, Solid stores all data in a so-called Solid POD. This Solid POD can be in your house or workplace, or with an online Solid POD provider of your choice. Since you control your server, you own your data. You’re free to move it at any time, without interruption of service.

In my opinion, this is the only sensible solution to give data control back to the user. I hope this project gets enough attention. The issue is too serious to ignore.

Use Docker Instead of Kubernetes

Today we are all talking about containers and container-based infrastructure. There is a lot of hype and noise around this topic. But what is this container technology? And how does it solve today’s problems? I use containers myself and, of course, I am fascinated by this server technology. Containers can really simplify things. After more than 20 years of building server applications, I have experienced many problems first-hand. I call it “server technology”, which may sound a little strange to some – aren’t containers more of a cloud technology? And that is the one thing that really bothers me about the current hype: when I talk about containers, many people think of this Kubernetes thing. That was the impulse to write this article.
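Just to illustrate how simple the container idea itself is – independent of any orchestration platform – a single command is enough to run a service in a container (nginx is only an arbitrary example image here):

$ docker run -d -p 8080:80 nginx

After that, the web server is reachable on port 8080 of your host – no installation, no configuration files, no orchestration layer.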

How to Use Traefik.io as a Static Proxy

Traefik.io is a very cool open-source project providing a powerful reverse proxy. The project focuses mainly on container-based architectures like Docker Swarm. In such an environment, Traefik.io is able to recognize new containers in a network and dynamically computes the route from the frontend to the corresponding backend service. I already wrote about this functionality in combination with Docker Swarm in my blog: Lightweight Docker Swarm Environment. This concept is also part of the Imixs-Workflow project.

But what if you just want to add a kind of static route which has nothing to do with container-based services? I had this situation when I wanted to redirect incoming requests for a specific host name to an external server – outside of my Docker swarm.

To realize this, you can add a frontend rule under the section [file] at the end of your traefik.toml file. This is an example of what such a rule can look like:

...
[file]

# static backend pointing to the external server
[backends]
  [backends.backend1]
    [backends.backend1.servers]
      [backends.backend1.servers.server0]
      url = "http://some.host.de:12345"
      # note that you cannot add a path in the 'url' field

# frontend rule routing requests by host name to the backend
[frontends]
  [frontends.frontend1]
  entryPoints = ["http"]
  backend = "backend1"
  passHostHeader = true
    [frontends.frontend1.routes]
      [frontends.frontend1.routes.route0]
      rule = "Host:www.myweb.com"
This rule proxies requests for “www.myweb.com” to the host “some.host.de:12345”. See also the discussion here.
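To verify that the route works, you can send a request with the matching Host header to your Traefik entry point (assuming Traefik listens on port 80 of localhost):

$ curl -H "Host: www.myweb.com" http://localhost/

Traefik should now answer with the response of the external server.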

Running Hadoop with Docker Containers

If you play around with Apache Hadoop, you will hardly find examples built on Docker. This is because Hadoop is rarely operated via Docker but mostly installed directly on bare metal. Above all, if you want to test built-in tools such as HBase, Spark or Hive, there are only a few Docker images available.

A project which fills this gap comes from the European Union and is named BIG DATA EUROPE. One of the project’s objectives is to design, realize and evaluate a Big Data Aggregator Platform infrastructure.

The platform is based on Apache Hadoop and completely built on Docker. The project offers basic building blocks to get started with Hadoop and Docker and makes the integration with other technologies or applications much easier. With the Docker images provided by this project, a Hadoop platform can be set up on a local development machine, or scaled up to hundreds of nodes connected in a Docker Swarm. The project is well documented and all results are available on GitHub.

For example, setting up a local Hadoop HBase cluster environment takes only a few seconds:

$ git clone https://github.com/big-data-europe/docker-hbase.git
$ cd docker-hbase/
$ docker-compose -f docker-compose-standalone.yml up
Starting datanode
Starting namenode
Starting resourcemanager
Starting hbase
Starting historyserver
Starting nodemanager
Attaching to namenode, resourcemanager, hbase, datanode, nodemanager, historyserver
namenode | Configuring core
resourcemanager | Configuring core
.........
..................
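Once the output settles, you can verify that all services are up, and tear the whole cluster down again just as quickly:

$ docker-compose -f docker-compose-standalone.yml ps
$ docker-compose -f docker-compose-standalone.yml down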