Our journey through containerization #1

Author: Kerrim Abd El Hamed

As a young startup we had to learn a lot about deployment strategies and about running services in the cloud. We started our journey on a single server instance, shifted to Docker, and after some time moved on to Kubernetes. One main reason for this was our growing interest in offering customized Odoo instances as a SaaS solution.

There are plenty of articles and comments on Docker and the advantages of containers, so I'm not going to cover that here. This is also not intended to be a tutorial on how to set anything up. I just want to summarize my experience and share my thoughts on how we deal with services and how this changed over time. This is the first article in a series about our story with services.

When we first started our company we had only two customers. One of them wanted a self-hosted system for their business processes, and the other one needed a cloud solution. Together with the few services we were using ourselves, this was easy to host and maintain on a single server. As our needs grew (but our manpower didn't), we had to rethink how we manage services and servers. This was the first time we rebuilt our whole infrastructure.

At that time our software stack looked something like this:

  • Odoo
  • Gitlab
  • Jenkins
  • MediaWiki

We used nginx as a reverse proxy and had to maintain both a PostgreSQL and a MySQL server.

I remember installing everything on our server and spending nights looking for errors in config files, especially in nginx… At the same time I was testing a lot of other services to see whether we could use them. I tried them out locally, and if I liked one I installed it on our server. Most of the things I deployed were really cool tools, but nobody used them, so we decided to remove them again. Installing and removing software on a server isn't that easy and consumes a lot of time.

This was when we decided to try Docker. I had heard about Docker before, but I didn't think I would need it. Playing around, I started using it to test software, and after some time we decided to use it for all our services. Doing this for the first time, we had to learn a lot about how Docker works, what images and containers are, how they communicate with the outside world, and what happens to the data.

Where are my files?!

The first problem we encountered was that Docker doesn't save anything to your disk if you don't tell it to. A container runs in its own environment, and if you delete the container everything inside it is gone. If you want to persist your data (persistence is just a fancy word for saving files), you need something called volumes. Here is a little example docker-compose file from one of our first setups, for the techies:

  version: '3'
  services:
    postgres:
      image: postgres:9.4
      container_name: postgres
      environment:
        - POSTGRES_PASSWORD=yourdbpassword
        - POSTGRES_USER=yourdbuser
        - PGDATA=/var/lib/postgresql/data/pgdata
      volumes:
        - ./postgres-data:/var/lib/postgresql/data/pgdata
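The `./postgres-data` entry above is a bind mount: it maps a directory on the host into the container, so the database files survive even when the container is deleted. As a side note, Docker Compose also supports named volumes, which Docker manages itself instead of tying the data to a host path. A minimal sketch of the same service with a named volume (the volume name `pgdata` is just an illustrative choice):

```yaml
  version: '3'
  services:
    postgres:
      image: postgres:9.4
      environment:
        - POSTGRES_PASSWORD=yourdbpassword
        - PGDATA=/var/lib/postgresql/data/pgdata
      volumes:
        # named volume instead of a host path; Docker decides where it lives
        - pgdata:/var/lib/postgresql/data/pgdata

  # top-level volumes key declares the named volume
  volumes:
    pgdata:
```

Bind mounts are handy when you want to inspect or back up the files directly on the host; named volumes are easier to move between environments because no host path is hard-coded.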

Volumes are a cool thing as long as you only have one instance of something running. As soon as you think about running multiple instances of a container in a cluster for load balancing, you have to think about how to share the storage. We tried a lot of solutions for this, but nothing seemed to work well enough. In our case we wanted to do this with Odoo. In the end we removed the persistent storage and moved everything into PostgreSQL. To do so, we had to push the development of our Odoo module muk_dms and the lobject modules from the dms family. Some may ask whether we only moved the persistence problem from one container to another, and they are right to do so. But the advantage of PostgreSQL is that you can run a cluster with one primary and several replica instances: the primary persists the data and handles the writes, while the replicas receive a copy of the data through replication.
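As a rough sketch of what such a replicated setup involves: with the PostgreSQL 9.x releases we were running, streaming replication is configured on the primary via `postgresql.conf` and on each replica via a `recovery.conf` file. All values and hostnames below are illustrative, not our actual configuration:

```
# postgresql.conf on the primary (illustrative values)
wal_level = hot_standby       # write enough WAL for standbys to serve queries
max_wal_senders = 3           # allow up to three replicas to connect
hot_standby = on              # on the replicas: permit read-only queries

# recovery.conf on each replica (PostgreSQL 9.x)
standby_mode = 'on'
primary_conninfo = 'host=primary-host port=5432 user=replicator'
```

With this in place, only the primary container needs persistent storage; the replicas can be recreated at any time and will stream the current state from the primary.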