Our journey through containerization #2

Author: Kerrim Abd El Hamed

The last few months were very busy for us at MuK IT. We acquired some big customers and developed a lot of cool stuff for our clients and our own infrastructure. While doing all this I sadly didn't find any time to write new blog posts. Finally, I was able to get back to writing (mostly during my holidays) and worked on some new articles I'm passionate about.

In the last article about containerization I talked about storage persistence and how we solved it with multiple Odoo containers running in clusters and sharing the same data. In this article I want to talk about networking and why we switched from Docker with docker-compose to Kubernetes.

With a single machine we had one big docker-compose file to handle all our containers. It was managed by GitLab and deployed automatically to our server. Connecting our services was easy, as docker-compose lets you access the different containers by their service name. For example, if you want nginx to route to Odoo and your docker-compose file looks like this:

version: '3'
services:
  nginx:
    image: nginx
    ...
  odoo:
    image: odoo
    ...

You can simply set up an upstream to odoo:8069. The name will be translated to the right IP address of the odoo container. When deploying a docker-compose file you can see a line that looks similar to this:

Creating network "server_default" with the default driver

This tells you that docker-compose creates a network for your stack, to which it later adds the containers from your compose file. You can list all your networks by typing docker network ls, and docker network inspect <network> prints detailed information about a network.
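The upstream mentioned above might look like this in the nginx configuration. This is just a sketch: only the odoo:8069 upstream comes from the compose file, the listen port and proxy settings are assumptions.

```nginx
# nginx.conf (sketch) - "odoo" is resolved by Docker's embedded DNS
# to the IP of the odoo container on the shared compose network.
upstream odoo {
    server odoo:8069;
}

server {
    listen 80;

    location / {
        proxy_pass http://odoo;
    }
}
```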

As long as we were on one machine this worked quite well, but as soon as you have to spread your containers over multiple servers you run into some difficulties. We wanted to communicate with the containers without having to deal with exposed ports on the host. Additionally, we wanted to be able to spread multiple instances of one container across multiple machines, which is necessary for load balancing.

To achieve this, all instances have to be reachable by the same name. For example, a call to odoo:8069 needs to be translated on the network layer and routed to one of the instances. The same call can be routed to a different instance a moment later, so they all have to share the same state. That's when we started to think about switching to Kubernetes.

There are some alternatives to Kubernetes, like Docker Swarm and Nomad, but after playing around with these solutions we chose Kubernetes for its seamless integration with the Google Cloud Platform (which we are really happy with!).

In Kubernetes you can use multiple nodes (servers) and distribute your pods (for simplicity, let's say they are containers) across them. Pods can be described in deployments:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: odoo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: odoo
  template:
    metadata:
      labels:
        app: odoo
    spec:
      containers:
      - name: odoo
        image: odoo:12.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8069
        - containerPort: 8072

To access these pods from anywhere in your cluster you need to define a service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: odoo
  name: odoo
spec:
  ports:
    - name: http
      port: 8069
      targetPort: 8069
      protocol: TCP
    - name: longpolling
      port: 8072
      targetPort: 8072
      protocol: TCP
  selector:
    app: odoo

When you send a request to the service, it routes you to one of the running pods. This way the other pods don't need to know anything about a specific running container.
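As a side note on naming: the cluster DNS makes a service reachable under several names, so the call to odoo:8069 works just like the service name did in docker-compose (this sketch assumes the service lives in the default namespace):

```text
http://odoo:8069                            # from pods in the same namespace
http://odoo.default.svc.cluster.local:8069  # fully qualified, works from any namespace
```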

To manage these configurations we use git and kubectl, but there are some very cool tools for managing Kubernetes. One is the Google Cloud Platform itself, and one of my favorites is Kubernetic.