Kubernetes – How to map Config Files

If you are familiar with Docker, then you may know that it is common practice for Docker containers to map local config files. For example, in a docker-compose.yaml file you can use the following kind of mapping:

  my-app:
    image: concourse/concourse
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]

In this example I map the local directory ./keys/web into the directory /concourse-keys inside my container. In this way my container can read config files or other kinds of file data.

Kubernetes – ConfigMap

In Kubernetes there is also such a concept. And as expected for Kubernetes, it is much more powerful than in plain Docker. But who expects the mapping of config files to be hidden behind a concept called ConfigMap?

A ConfigMap in Kubernetes is a very flexible object that can be used to provide a container with any kind of file data. Typically you store variables as key/value pairs in a ConfigMap, and you can provide these key/value pairs to a Kubernetes pod, for example as environment variables. But not only property files can be set up with a ConfigMap; also public/private keys or even binary data. One way to use a ConfigMap is to publish an entire directory to a pod. I will explain this in the following example:
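To illustrate the environment-variable use case mentioned above, here is a minimal sketch of a pod spec. The ConfigMap name 'app-settings', its key 'log_level', and the pod/container names are hypothetical, chosen only for this example:

```yaml
# Sketch only: assumes a ConfigMap 'app-settings' exists
# in the same namespace with a key 'log_level'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "echo $LOG_LEVEL"]
    env:
    # Expose one ConfigMap key as an environment variable
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-settings
          key: log_level
```

With `envFrom` instead of `env` you could also inject all keys of the ConfigMap at once.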

1.) Create a local config directory

First let’s create a local config directory with some config files

$ mkdir config/
$ echo "some data" >> config/a.conf
$ echo "some more data" >> config/b.conf

We now have two simple config files, a.conf and b.conf, with some data located in the directory config/.

2.) Create a Kubernetes ConfigMap

Next we expose the data from our config directory in a ConfigMap using the kubectl command line tool:

$ kubectl create namespace my-app
$ kubectl create configmap my-config --from-file=./config -n my-app

Note: I first created a namespace 'my-app'. It is recommended to use namespaces to organize all your objects. The second command transfers the content of our config directory into a ConfigMap named 'my-config'.
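The same ConfigMap can also be created declaratively. The following manifest is a sketch of what `kubectl create configmap --from-file` produces from our two files:

```yaml
# Declarative equivalent of the kubectl create configmap command above.
# Each file becomes one key in the 'data' section.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  namespace: my-app
data:
  a.conf: |
    some data
  b.conf: |
    some more data
```

Applying this file with `kubectl apply -f` yields the same result as the imperative command, and lets you keep the configuration under version control.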

You can check the existence of your ConfigMap:

$ kubectl get configmaps -n my-app
NAME               DATA   AGE
my-config          2      18m

and you can also verify the content of your new ConfigMap:

$ kubectl describe configmap my-config -n my-app

Here you can see that Kubernetes maps each file as a separate key.

3.) Map your ConfigMap as a Directory

Now that you have built your ConfigMap, you can deploy a new application using the ConfigMap as a 'local' file directory. See the following example:

apiVersion: apps/v1
kind: Deployment

metadata:
  name: app
  namespace: my-app
  labels: 
    app: app

spec:
  replicas: 1
  selector: 
    matchLabels:
      app: app

  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - image: concourse/concourse
        name: concourse

        # Mount the volume that contains the configuration data 
        # into your container filesystem
        volumeMounts:
        - name: my-configmap-volume
          mountPath: /etc/config
        ports:
        - containerPort: 8080

      # Add the ConfigMap as a volume to the Pod
      volumes:
      - name: my-configmap-volume
        configMap:
          name: my-config

With the volumeMounts entry I map the content of my ConfigMap into a directory within my container. So this Deployment object will start the container 'concourse' and map the content of my ConfigMap into the filesystem of the container under the path /etc/config.
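If you do not want to expose every key of the ConfigMap, a ConfigMap volume can also project only selected files. The following fragment is a sketch of the `volumes` section from the Deployment above, using the `items` field; the target path 'app/a.conf' is a hypothetical choice for this example:

```yaml
      # Only expose a.conf, and place it in a subdirectory
      # of the mountPath (/etc/config/app/a.conf)
      volumes:
      - name: my-configmap-volume
        configMap:
          name: my-config
          items:
          - key: a.conf
            path: app/a.conf
```

Without `items`, every key of the ConfigMap appears as a file, as shown in the example above.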

You can verify this by executing an ls command in your running pod:

$ kubectl exec concourse-7bd49d858-mw2zn -n my-app  -- ls /etc/config/
a.conf
b.conf

Conclusion

As you can see, it is very easy to map local config files into a container running in Kubernetes, even if you wouldn't expect the mapping of local files behind the term ConfigMap. The great advantage compared to native docker-compose is that the ConfigMap is available on all worker nodes. So there is no need to provide the files manually on each worker node within your cluster. Kubernetes will automatically provide the files on the worker node where your pod is scheduled.
