When I started migrating my application servers (Wildfly 20.0.1) into a self-managed Kubernetes cluster, I noticed unexpected memory behaviour. My Wildfly containers were consuming more memory than I expected. In this blog I will explain why this may happen and how you can control and optimize your memory settings. I am using the official Wildfly 20.0.1 image, which is based on OpenJDK 11, but the rules explained here can of course also be adapted for any other Java application server.
Notice: Since Java 10 the memory management of a container changed dramatically. Before Java 10 a JVM running in Docker looked at the memory settings of the host, which typically provides much more memory than is defined for the single Docker container. Here we look only at Java version 10 and above! Read this blog to learn more about the background.
Setting Memory Limits in Kubernetes
If you deploy a Docker container running a Java application in Kubernetes you can define the memory limits in your deployment definition. The following yaml file shows a setup for Wildfly 20.0.1 with a memory request of 512Mi and a maximum limit of 1Gi:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildfly
  labels:
    app: wildfly
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wildfly
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wildfly
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly:20.0.1.Final
        ports:
        - name: web
          containerPort: 8080
        - name: admin
          containerPort: 9990
        # Memory Request and Limits
        resources:
          requests:
            memory: "512Mi"
          limits:
            memory: "1Gi"
This sets the memory request to 512Mi and the memory limit to 1Gi. So the container is scheduled with a guaranteed 512MB and is allowed to consume a maximum of 1GB of memory. If the container for some reason takes more memory, Kubernetes will kill the container and restart it.
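To roll out this configuration you can apply the deployment with kubectl. (The file name wildfly-deployment.yaml is only an assumption here; use whatever name you saved the yaml under.)

$ kubectl apply -f wildfly-deployment.yaml
$ kubectl get pods -l app=wildfly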
How to Control Memory Consumption in a JVM
Before you start changing any settings it is important to check how much memory your container actually consumes. There are some useful commands for that:
First you can check memory usage with kubectl top:
$ kubectl top pod wildfly-6cdc475445-p6kjg
NAME                       CPU(cores)   MEMORY(bytes)
wildfly-6cdc475445-p6kjg   5m           435Mi
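You can also double-check which requests and limits Kubernetes has applied to the pod. (The pod name is the one from the example above; use the name from your own cluster.)

$ kubectl get pod wildfly-6cdc475445-p6kjg -o jsonpath='{.spec.containers[0].resources}'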
Next you can verify if the container sees the memory limit of 1Gi as defined in the deployment. For that open a shell in your running container and run:
[jboss@wildfly-6cdc475445-p6kjg ~]$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
1073741824
1073741824 bytes are exactly the defined limit of 1Gi.
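Note that this path belongs to cgroup v1. If your worker nodes already run with cgroup v2, the same limit should be readable under a slightly different path (depending on your node setup):

$ cat /sys/fs/cgroup/memory.max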
Java 10 Memory Settings
Since Java 10 the JVM respects the memory settings of a container as we have defined them in the Kubernetes deployment before. If you run a Java application like Wildfly in a Linux container, the JVM will automatically detect the Control Group memory limit via the UseContainerSupport option, which is enabled by default. You can control the memory with the following options: InitialRAMPercentage, MaxRAMPercentage and MinRAMPercentage.
To check the container support and the resulting memory settings in your running Java container you can run:
$ java -XX:+PrintFlagsFinal -version | grep -E "UseContainerSupport|InitialRAMPercentage|MaxRAMPercentage|MinRAMPercentage|MaxHeapSize"
   double InitialRAMPercentage               = 1.562500       {product} {default}
   size_t MaxHeapSize                        = 268435456      {product} {ergonomic}
   double MaxRAMPercentage                   = 25.000000      {product} {default}
   double MinRAMPercentage                   = 50.000000      {product} {default}
     bool UseContainerSupport                = true           {product} {default}
openjdk version "11.0.8" 2020-07-14 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.8+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode, sharing)
The -XX:InitialRAMPercentage is used to calculate the initial heap size when InitialHeapSize / -Xms is not set. Both -XX:MaxRAMPercentage and -XX:MinRAMPercentage are used to calculate the maximum heap size when MaxHeapSize / -Xmx is not set. The default setting for the heap is 25% of the upper memory limit, which in our case is 268435456 bytes (256MB).
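As a quick sanity check you can reproduce this number with a simple shell calculation (just an illustration of the default 25% rule, not a JVM tool):

$ echo $((1073741824 * 25 / 100))
268435456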
You can change MaxRAMPercentage by setting additional JAVA_OPTS in your container:
.....
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly:20.0.1.Final
        env:
        - name: JAVA_OPTS
          value: "-XX:MaxRAMPercentage=75.0"
....
If you apply the new Java options to your Kubernetes deployment you will see the changed settings when you inspect the running server process.
Note: Here we are setting a JVM flag for Wildfly. As a result this setting changes the behaviour of the Wildfly server instance and not of the container. To verify this we need to take a look inside the JVM of the running Java process.
Control Memory from Within the JVM
To check the memory behaviour of Wildfly (the running Java application) in more detail you can use the Java command line tools. First you need to figure out the PID of the running Java process from a shell inside your container:
$ jcmd -l
81 /opt/jboss/wildfly/jboss-modules.jar -mp /opt/jboss/wildfly/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/wildfly -Djboss.server.base.dir=/opt/jboss/wildfly/standalone -b 0.0.0.0
215 jdk.jcmd/sun.tools.jcmd.JCmd -l
In this example the PID is 81. With the PID and the tool ‘jinfo’ we can print out the VM settings:
$ jinfo -flags 81
VM Flags:
-XX:CICompilerCount=2 -XX:CompressedClassSpaceSize=260046848 -XX:InitialHeapSize=67108864 -XX:MaxHeapSize=536870912 -XX:MaxMetaspaceSize=268435456 -XX:MaxNewSize=178913280 -XX:MetaspaceSize=100663296 -XX:MinHeapDeltaBytes=196608 -XX:NewSize=22347776 -XX:NonNMethodCodeHeapSize=5825164 -XX:NonProfiledCodeHeapSize=122916538 -XX:OldSize=44761088 -XX:ProfiledCodeHeapSize=122916538 -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseSerialGC
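If you prefer jcmd, which we already used to find the PID, the same flags can also be printed with it (just an alternative to jinfo):

$ jcmd 81 VM.flags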
Heap Details
So far, we have only looked at the guardrails that constrain the memory consumption of our container. To get more information about memory usage and the heap memory (which is the most interesting part for your application) you can run:
$ jcmd 81 GC.heap_info
81:
def new generation total 28032K, used 4897K [0x00000000e0000000, 0x00000000e1e60000, 0x00000000eaaa0000)
eden space 24960K, 19% used [0x00000000e0000000, 0x00000000e04c8670, 0x00000000e1860000)
from space 3072K, 0% used [0x00000000e1860000, 0x00000000e1860000, 0x00000000e1b60000)
to space 3072K, 0% used [0x00000000e1b60000, 0x00000000e1b60000, 0x00000000e1e60000)
tenured generation total 61904K, used 37141K [0x00000000eaaa0000, 0x00000000ee714000, 0x0000000100000000)
the space 61904K, 59% used [0x00000000eaaa0000, 0x00000000ecee54b0, 0x00000000ecee5600, 0x00000000ee714000)
Metaspace used 54890K, capacity 62960K, committed 63616K, reserved 309248K
class space used 7137K, capacity 9716K, committed 9856K, reserved 253952K
This data will change during runtime, as the garbage collector consumes memory or frees unused memory over time. To get an impression of what is happening over time you can use jstat to print memory statistics continuously:
$ jstat -gc 81 10s
S0C S1C S0U S1U EC EU OC OU MC MU CCSC CCSU YGC YGCT FGC FGCT CGC CGCT GCT
3072.0 3072.0 0.0 0.0 24960.0 8393.5 61904.0 37141.2 63616.0 54876.8 9856.0 7137.3 27 0.387 1 0.218 - - 0.605
3072.0 3072.0 0.0 0.0 24960.0 8584.1 61904.0 37141.2 63616.0 54876.8 9856.0 7137.3 27 0.387 1 0.218 - - 0.605
3072.0 3072.0 0.0 0.0 24960.0 8774.8 61904.0 37141.2 63616.0 54876.8 9856.0 7137.3 27 0.387 1 0.218 - - 0.605
This command prints an updated line with all memory details every 10 seconds.
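If the absolute numbers are hard to read, an alternative is the -gcutil option of jstat, which prints the utilization of the same memory areas as percentages (same PID and interval as above):

$ jstat -gcutil 81 10s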
VM Metaspace Info
Besides the heap memory there is a lot of so-called non-heap memory used by your JVM. To get the details about this memory run:
$ jcmd 81 VM.metaspace
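A more complete breakdown of the native (non-heap) memory areas can be obtained with Native Memory Tracking. Just as a hint: this requires restarting the JVM with the additional flag -XX:NativeMemoryTracking=summary (for example via JAVA_OPTS as shown above) and adds a small runtime overhead.

$ # only works if the JVM was started with -XX:NativeMemoryTracking=summary
$ jcmd 81 VM.native_memory summary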