Traefik and Basic Authentication

Today I once again came across a configuration issue in Traefik, this time regarding authentication. Traefik is a cloud-native networking solution for container platforms. It can be used, for example, within Kubernetes and is a built-in function of K3S – a lightweight Kubernetes distribution.

In K3S, Traefik is used for the ingress configuration – for example, to route web traffic from an Internet domain to a specific service within your cluster.

My problem was that I wanted to install a private Docker Registry within my K3S cluster. The Docker Registry comes without any security. This is fine within a cluster, but if you connect from outside you don’t want your private registry to be open to everyone. With Traefik you can easily secure your service. I will explain how you can do this.
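A minimal sketch of how this can look with Traefik's basicAuth middleware is shown below. Note the assumptions: the Traefik Kubernetes CRD provider that ships with K3S is enabled, the secret name, namespace and credentials are placeholders, and depending on your Traefik version the API group may be traefik.containo.us or traefik.io:

# create the htpasswd entries and store them in a Kubernetes secret, e.g.:
#   kubectl create secret generic registry-auth \
#       --from-literal=users="$(htpasswd -nb admin changeme)"
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: registry-auth
  namespace: default
spec:
  basicAuth:
    # the secret must provide the htpasswd entries under the key 'users'
    secret: registry-auth

The middleware can then be attached to the IngressRoute (or referenced from an Ingress via the traefik.ingress.kubernetes.io/router.middlewares annotation) that routes traffic to the registry service.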


Cassandra – Upgrade from Version 3.11 to 4.0

In my last blog post ‘Setup a Public Cassandra Cluster with Docker‘ I described how to set up a Cassandra cluster with Docker in a public network. The important part of that blog post was how to secure the inter-node and client-node communication in such a scenario. In this blog post I will just cover some details about migrating from version 3.11 to version 4.0.

General Upgrade from 3.x to 4.0

In general it is quite simple to upgrade a Cassandra node from version 3.x to 4.0 because version 4.0 can handle the table files from version 3. At a minimum, you need to change your Docker run command to point to a 4.0 image:

docker run --name cassandra -d \
        -e CASSANDRA_BROADCAST_ADDRESS=<YOUR-PUBLIC-IP> \
        -e CASSANDRA_SEEDS=<COMMA SEPARATED IP LIST OF EXISTING NODES> \
        -p 7000:7000 \
        -p 9042:9042 \
        -v ~/cassandra.yaml:/etc/cassandra/cassandra.yaml\
        -v ~/cqlshrc:/root/.cassandra/cqlshrc\
        -v ~/security:/security\
        -v /var/lib/cassandra:/var/lib/cassandra\
        --restart always\
        cassandra:4.0.6
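Note: before you stop and remove the running 3.11 container in order to replace it with the command above, it is usually a good idea to drain the node first so that all memtables are flushed to disk. A sketch, assuming the container is named 'cassandra':

$ docker exec -it cassandra nodetool drain
$ docker stop cassandra && docker rm cassandra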

The cassandra.yaml File

Before you can start the new Cassandra node, you need to update the cassandra.yaml file.

First, I recommend starting a local Cassandra Docker container and copying the original cassandra.yaml file from the running container. This is necessary because a lot of parameters and settings have changed from version 3.x to 4.0.

Now you can tweak the cassandra.yaml file. In parallel you can check your current cluster configuration on a running node with Docker:

docker exec -it cassandra cat /etc/cassandra/cassandra.yaml

First, take care of the following parameters, which should be set to match your previous configuration settings (a short sketch follows below the list):

  • cluster_name
  • num_tokens
  • authenticator
  • seed_provider
  • listen_address (usually commented out)
  • broadcast_address
  • broadcast_rpc_address
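The following is a rough sketch of how these entries may look in the new cassandra.yaml – all values are placeholders and must match your previous configuration:

cluster_name: 'MyCluster'
num_tokens: 256
authenticator: PasswordAuthenticator
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<COMMA SEPARATED IP LIST OF EXISTING NODES>"
# listen_address stays commented out if the node should resolve it itself
# listen_address: localhost
broadcast_address: <YOUR-PUBLIC-IP>
broadcast_rpc_address: <YOUR-PUBLIC-IP>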

If you use the server_encryption_options as explained in my last post, you need to take care of the following sections:

...
server_encryption_options:
    #internode_encryption: none
    internode_encryption: all
    enable_legacy_ssl_storage_port: true
    keystore: /security/cassandra.keystore
    keystore_password: mypassword
    truststore: /security/cassandra.truststore
    truststore_password: mypassword

# enable or disable client/server encryption.
client_encryption_options:
    enabled: true
    optional: false
    keystore: /security/cassandra.keystore
    keystore_password: mypassword
    require_client_auth: false
....
# enable password authentication!
authenticator: PasswordAuthenticator
...

The important change is the new parameter ‘enable_legacy_ssl_storage_port‘, which needs to be set to ‘true’ during the migration.

Expose Port 7000

Since version 4.0, port 7001 is deprecated. This port was used in older versions for encrypted inter-node communication. Now port 7000 handles both encrypted and unencrypted communication, so it is sufficient to expose only port 7000 for inter-node communication.

But as long as your cluster contains nodes running version 3.11, you need to set the new parameter ‘enable_legacy_ssl_storage_port‘ to ‘true’. This parameter tells your 4.0 node to still use port 7001 when connecting to older nodes.

    # When set to true, encrypted and unencrypted connections are allowed on the storage_port
    # This should _only be true_ while in unencrypted or transitional operation
    # optional defaults to true if internode_encryption is none
    # optional: true
    # If enabled, will open up an encrypted listening socket on ssl_storage_port. Should only be used
    # during upgrade to 4.0; otherwise, set to false.
    enable_legacy_ssl_storage_port: true

Note: The parameter ‘enable_legacy_ssl_storage_port‘ is only needed as long as your cluster has nodes running version 3.x – which is typically only during the migration phase. Once all nodes are upgraded, you can remove or ignore this parameter.

Once you have completed the settings, you can start the node again with version 4.0.6.
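After the node is up again, it can be useful to verify the cluster state and to rewrite the SSTables into the new format with nodetool, for example:

$ docker exec -it cassandra nodetool status
$ docker exec -it cassandra nodetool upgradesstables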

Java – DataStax Driver

If you have a Java client using the DataStax Java Driver to connect to your Cassandra cluster, make sure that you use the latest driver version:

<!-- DataStax Java Driver -->
<dependency>
	<groupId>com.datastax.cassandra</groupId>
	<artifactId>cassandra-driver-core</artifactId>
	<!-- for cassandra 4.0 use 3.11.3 or later -->
	<version>3.11.3</version>
	<scope>compile</scope>
</dependency>
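A minimal connection sketch with the 3.11.x driver may look like the following – the contact point and the credentials are placeholders, and withSSL() is only relevant if you enabled client_encryption_options (the truststore is then expected via the usual javax.net.ssl system properties):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class CassandraConnectionTest {
    public static void main(String[] args) {
        // contact point and credentials are placeholders for your own cluster
        Cluster cluster = Cluster.builder()
                .addContactPoint("<YOUR-PUBLIC-IP>")
                .withPort(9042)
                .withCredentials("cassandra", "cassandra")
                .withSSL()
                .build();
        try {
            Session session = cluster.connect();
            ResultSet result = session.execute("SELECT release_version FROM system.local");
            System.out.println("Connected to Cassandra " + result.one().getString("release_version"));
        } finally {
            cluster.close();
        }
    }
}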

Firewall

If you are running a firewall as explained in my last post, you need to take care of the new port settings. Port 7001 should no longer be needed.
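If you restrict access with iptables, a sketch for allowing only your own cluster nodes to reach the inter-node port 7000 might look like this (the IP address is a placeholder; repeat the ACCEPT rule for each node):

$ iptables -A INPUT -s [CLUSTER-NODE-IP] -p tcp --dport 7000 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 7000 -j DROP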

Setup a Public Cassandra Cluster with Docker

UPDATE: I have updated this original post to the latest version 4.0 of Cassandra.

In one of my last blogs I explained how you can set up a Cassandra cluster in a Docker Swarm. The advantage of a container environment like Docker Swarm or Kubernetes is that you can run Cassandra with its default settings and without an additional security setup. This is because the cluster nodes running within a container environment can connect securely to each other via the Kubernetes or Docker Swarm virtual network and need not publish any ports to the outer world. This kind of setup for a Cassandra cluster can be fine for many cases. But what if you want to set up a Cassandra cluster in a more open network? For example in a public cloud, so you can access the cluster from different services or from your clients? In this case it is necessary to secure your Cassandra cluster.


Wildfly – Elytron – LDAP SecurityDomains for Active Directory

In one of my last blog posts I explained how to set up a Security Domain in Wildfly Elytron – the new security module. In this blog post I explain how to set up an LDAP security domain for the Active Directory.

The ldap-realm

As explained in my last blog, you have to define a security-domain and a security-realm in two separate sections. The following example shows the LDAP configuration to resolve users and roles from an Active Directory. I have omitted the non-relevant parts:

       <subsystem xmlns="urn:wildfly:elytron:14.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto">
            .....
            <security-domains>
                .....                
              	<!-- My LDAP domain   -->
		<security-domain name="mydomain" default-realm="cached-ldap" permission-mapper="default-permission-mapper">
		   <realm name="cached-ldap"/>
		</security-domain>				                
            </security-domains>
            <security-realms>
                .....                
                <!-- my LDAP realm -->
		<ldap-realm name="ldap-realm" dir-context="ldap-connection" direct-verification="true">
			<identity-mapping rdn-identifier="sAMAccountName" use-recursive-search="false" search-base-dn="CN=users,DC=intern,DC=foo,DC=de" >
			  <attribute-mapping>
			   <attribute from="CN" to="Roles" filter="(member={1})" filter-base-dn="CN=users,DC=intern,DC=foo,DC=de"/>
			  </attribute-mapping>
			</identity-mapping>
		</ldap-realm>
		<caching-realm  name="cached-ldap" realm="ldap-realm"/>			    
            </security-realms>
            
            <!-- LDAP Dir Contexts -->
            <dir-contexts>
		<dir-context name="ldap-connection" url="ldap://my-ldap:389" principal="CN=bind_user,CN=users,DC=intern,DC=foo,DC=de">
		     <credential-reference clear-text="YOUR-PASSWORD"/>
		</dir-context>
    	    </dir-contexts>
.....
       

You have to adapt the dir-context and the base-dn in the example above to your LDAP settings. The setup uses the sAMAccountName as the user ID and resolves the roles by searching for group entries whose ‘member’ attribute contains the user.

The cached-ldap

The important part is the ‘cached-ldap‘ security realm. In older versions of Wildfly the LDAP realm used a cache by default. In the new version you need to define the cache yourself. This is what the cached-ldap realm is for: your security domain points to the cached-ldap, and the cached-ldap points to the ldap-realm. If you don’t use it, you will see a lot of LDAP requests against your directory.

You can also add attributes to set up the caching behaviour, for example:

<caching-realm name="cached-ldap" realm="ldap-realm" maximum-age="300000" />

Find details here.

Logging

For debugging it is helpful to change the log level for org.wildfly.security. To do this, simply add the following logger to the logging subsystem:

        <logger category="org.wildfly.security">
	    <level name="DEBUG"/>
	</logger>

Also set the log level of your console handler from “INFO” to “DEBUG”. This setting will give you more insight into what is happening in the background.
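In the logging subsystem this typically looks like the following sketch – your console-handler may carry additional attributes or a different formatter name:

<console-handler name="CONSOLE">
    <level name="DEBUG"/>
    <formatter>
        <named-formatter name="COLOR-PATTERN"/>
    </formatter>
</console-handler>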

From a bash shell on the server you can ‘tail’ the security messages like this:

$ tail -f /opt/jboss/wildfly/standalone/log/server.log  | grep "security"

Role Mapping

In some cases it may be necessary to map LDAP group names to specific role names within your application. For this purpose you can use mappers. See the following example, which maps the LDAP group name ‘imixs_users’ to the application role ‘org.imixs.ACCESSLEVEL.AUTHORACCESS’:

            <security-domains>
               ....
		<security-domain name="imixsrealm" default-realm="cached-ldap" permission-mapper="default-permission-mapper">
		 <realm name="cached-ldap" role-mapper="imixs-user-rolemapper" />
		</security-domain>	                
            </security-domains>
.....
            <mappers>
                .....              
                <regex-role-mapper name="imixs-user-rolemapper" pattern="imixs_user" replacement="org.imixs.ACCESSLEVEL.AUTHORACCESS" keep-non-mapped="true"/>
            </mappers>
....

The mapper is referenced in the security-domain. I am using a regex role mapper to replace the role name. You will find more background about this role mapping here.

Jakarta EE9 – Wildfly – Elytron – SecurityDomains

With version 11 Wildfly introduced a completely new security concept named ‘Elytron’. This security concept is a little bit confusing at first glance if you have worked with previous versions of Wildfly. To be honest, I personally only took notice of the Elytron framework with Wildfly 24. Even though it is well documented, it took me a while to get things working. Initially I came across the configuration concept while migrating the Imixs-Workflow project from Jakarta EE8 to Jakarta EE9. As we are using Docker images to run our applications, we configure the Wildfly server via the standalone.xml file and not via the CLI provided by Wildfly. In the following I will show what is important to get a Jakarta EE9 application working with Elytron.

The Elytron Subsystem

At its core, Wildfly is separated into subsystems. Each subsystem has its own configuration section in the standalone.xml file. For the Elytron subsystem this is urn:wildfly:elytron:14.0.

If you look into the subsystem configuration you can see that a security domain is now split into a domain and a realm section. A simple file-based security realm with the name ‘imixsrealm’ will look like this:

        <subsystem xmlns="urn:wildfly:elytron:14.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto">
.....
            <security-domains>
.....
              	<!-- imixsrealm filerealm configuration   -->
		<security-domain name="imixsrealm" default-realm="imixsrealm" permission-mapper="default-permission-mapper">
			<realm name="imixsrealm"/>
		</security-domain>
            </security-domains>
            <security-realms>
....
                <!-- imixsrealm filerealm property files -->
                <properties-realm name="imixsrealm" groups-attribute="Roles">
			<users-properties path="sampleapp-users.properties" relative-to="jboss.server.config.dir" digest-realm-name="Application Security" plain-text="true"/>
			<groups-properties path="sampleapp-roles.properties" relative-to="jboss.server.config.dir"/>
		</properties-realm>              
            </security-realms>
.....
        </subsystem>

I added a security-domain with the name ‘imixsrealm’ and also a properties-realm section with the same name where I define the users and roles property files. The attribute plain-text="true" indicates that the passwords are stored in plain text, which makes testing much easier. Place the sampleapp-users and sampleapp-roles property files into the standalone/configuration/ directory (jboss.server.config.dir). Do not modify the other sections of the Elytron subsystem!

The content of the sampleapp-users.properties looks like this (with plain-text passwords):

admin=adminadmin
manfred=password
anna=password

In the file sampleapp-roles.properties you can assign users to application specific roles:

admin=MANAGERACCESS
manfred=MANAGERACCESS
anna=AUTHORACCESS

So far everything looks similar to the old security-domain configuration. But at this point your new security domain won’t work yet. Additional steps are needed.

The EJB and Web Subsystems

To get the security domain working with your application, you also need to add it to the Undertow web subsystem. There you will find a section ‘application-security-domains’ where you need to register your new security domain as well:

       <subsystem xmlns="urn:jboss:domain:undertow:12.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other" statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">
....
            <application-security-domains>
                <application-security-domain name="imixsrealm" security-domain="imixsrealm"/>
                <application-security-domain name="other" security-domain="ApplicationDomain"/>                                            
            </application-security-domains>
        </subsystem>

There is also a subsystem for EJBs (“ejb3:9.0”), and it is important to add your security domain there as well if you have EJBs with the annotations @RolesAllowed or @RunAs:

        <subsystem xmlns="urn:jboss:domain:ejb3:9.0">
...
            <default-security-domain value="other"/>
            <application-security-domains>
                 <application-security-domain name="imixsrealm" security-domain="imixsrealm"/>
                <application-security-domain name="other" security-domain="ApplicationDomain"/>                
            </application-security-domains>
...
        </subsystem>

Now you have completed your configuration in the standalone.xml file.

The jboss-web.xml and jboss-ejb3.xml

There are still two application-specific files which need to be part of your web application.

In the jboss-web.xml you define your custom security domain:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
	<context-root>/</context-root>	
	<security-domain>imixsrealm</security-domain>
</jboss-web>

and in the jboss-ejb3.xml file:

<?xml version="1.1" encoding="UTF-8"?>
<jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
	xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:s="urn:security:1.1"
	xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
	version="3.1" impl-version="2.0">

	<assembly-descriptor>
		<s:security>
			<ejb-name>*</ejb-name>			
			<s:security-domain>imixsrealm</s:security-domain>
			<!-- This configuration is necessary to enable @runAs for the AdminPService  -->
			<s:missing-method-permissions-deny-access>false</s:missing-method-permissions-deny-access>
		</s:security>
	</assembly-descriptor>

</jboss:ejb-jar>

So finally your Jakarta EE9 application should now deploy and run within Wildfly 24 using the new Elytron security framework.

Database Realm / jdbc-realm

It is also easy to use a jdbc-realm configuration. You can find general information about database realms here. The following shows an example of how to configure a jdbc-realm with two tables storing a hashed password and the user roles.

<jdbc-realm name="imixsrealm">
    <principal-query sql="select PASSWORD from USERID where ID=?" data-source="office">
        <simple-digest-mapper algorithm="simple-digest-sha-256" password-index="1" hash-encoding="hex"/>
    </principal-query>
    <principal-query sql="select GROUP_ID from USERID_USERGROUP where ID=?" data-source="office">
        <attribute-mapping>
            <attribute to="Roles" index="1"/>
        </attribute-mapping>
    </principal-query>
</jdbc-realm>

Note: I am using two queries here as my role definitions are stored in a separate table (USERID_USERGROUP). The password is stored as a hex-encoded SHA-256 hash.
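For testing, a hex-encoded SHA-256 hash of a password – which should match what the simple-digest-mapper above expects – can be generated on the shell, for example:

$ echo -n "mypassword" | sha256sum | cut -d' ' -f1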

NFS and Iptables

These days I installed an NFS server to back up my Kubernetes cluster. Even though I protected the NFS server via the exports file to allow only cluster members to access it, there was still a security risk: NFS comes together with the Remote Procedure Call daemon (RPC), and this daemon enables attackers to figure out information about your network. So it is a good idea to protect the RPC service, which is running on port 111, from abuse.

To test if your server has an open RPC port, you can run telnet from a remote node:

$ telnet myserver.foo.com 111
Trying xxx.xxx.xxx.xxx...
Connected to myserver.foo.com.

This indicates that RPC is visible from the Internet. You can also check the RPC ports on your server with:

$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

Iptables

If you run Kubernetes or Docker on a server you usually already have iptables installed. You can test this by listing the existing firewall rules with the option -L:

$ iptables -L 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:9042
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:afs3-callback

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere

This is a typical example you will see on a server with the Docker daemon installed. Besides the three default chains ‘INPUT’, ‘FORWARD’ and ‘OUTPUT’ there are also some custom Docker chains describing the rules.

So the goal is to add rules to the INPUT chain that protect the RPC daemon from abuse.

Backup your Original iptables Rules

Before you start adding new rules, make a backup of your original rule set:

$ iptables-save > iptables-backup

This file can help you if something goes wrong later…

Adding an RPC Rule

If you want to use RPC in the internal network but prohibit it from the outside, you can implement the following iptables rules. In this example I explicitly name the cluster nodes which should be allowed to use RPC port 111. All other requests to the RPC port will be dropped.

Replace [SERVER-NODE-IP] with the IP address of your cluster node:

$ iptables -A INPUT -s [SERVER-NODE-IP] -p tcp --dport 111 -j ACCEPT
$ iptables -A INPUT -s [SERVER-NODE-IP] -p udp --dport 111 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 111 -j DROP
$ iptables -A INPUT -p udp --dport 111 -j DROP

These rules explicitly allow [SERVER-NODE-IP] to access the service; all other clients will be dropped. You can easily add additional nodes before the DROP rules.

You can verify if the new ruleset was added to your existing rules with:

$ iptables -L

You may write a small bash script with all the iptables commands. This makes it more convenient to test your new rule set.
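Such a script could look like this (the IP addresses are placeholders):

#!/bin/bash
# allow RPC (port 111) only for the named cluster nodes
iptables -A INPUT -s [SERVER-NODE-IP-1] -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s [SERVER-NODE-IP-1] -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s [SERVER-NODE-IP-2] -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s [SERVER-NODE-IP-2] -p udp --dport 111 -j ACCEPT
# drop all other requests to port 111
iptables -A INPUT -p tcp --dport 111 -j DROP
iptables -A INPUT -p udp --dport 111 -j DROP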

Saving the Ruleset

Inserting new rules into the firewall carries some risk of its own. If you do something wrong, you can lock yourself out of your server – for example, if you block the SSH port 22.

The good thing is that rules created with the iptables command are only stored in memory. If the system is restarted before the rule set is saved, all rules are lost. So in the worst case you can reboot your server to reset your new rules.

Once you have tested your rules, you can persist the new rule set:

$ iptables-save > iptables-newruleset

After a reboot, your new rules would still be ignored. To tell Debian to apply the new rule set at boot, you have to store it in /etc/iptables/rules.v4:

$ iptables-save > /etc/iptables/rules.v4
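Note: on Debian the /etc/iptables/rules.v4 file is typically read at boot by the iptables-persistent (netfilter-persistent) package. If this path does not exist on your server, you may need to install the package first:

$ apt-get install iptables-persistent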

Finally you can restart your server. The new RPC rules will be applied during boot.

The Attack on Our Freedom as Computer Users

“Meltdown and Spectre are errors. Grave errors, to be sure, but not evidently malicious. Everyone makes mistakes.
Intel has done far worse with its CPUs than make a mistake. It has built in an intentional back door called the Management Engine.
Important as these bugs are, don’t let Intel’s mistakes distract you from Intel’s deliberate attack!”

by Free Software Foundation president Richard Stallman

 

With security issues like the Spectre and Meltdown vulnerabilities discovered in Intel chips in early 2018, it became more important than ever to talk about the necessity of software freedom in these deeply embedded technologies. Serious though these bugs may be, we cannot let them distract us from the broader issues: Intel considers the Intel Management Engine a feature, while it’s nothing more than a threat to user freedom. Take a look at Denis ‘GNUtoo’ Carikli’s article, which provides a new basis for that conversation.