Docker is generating some serious interest at the moment. It promises to bring peace to the traditional feud between developers and operations. A worthy cause on its own, but it also has the potential to completely change the way applications are deployed. What is all the buzz about?
"Docker is Git for deployment" is a very clean and concise way to describe the tool. Docker uses Linux Containers (LXC) to implement lightweight virtual servers. LXC containers have very low overhead, meaning you can run an almost arbitrary number of them on a single host. This is in stark contrast to traditional virtualisation systems such as VirtualBox or Xen. Lightweight also means fast, and Docker containers are indeed fast enough that one can, in most cases, forget that virtualisation is involved. The container can almost be treated like any other application on the host.
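To get a feel for how lightweight this is, here is a minimal sketch (it assumes a working Docker install and uses the stock ubuntu image from the public registry):

```shell
# Run a one-off command in a throwaway Ubuntu container.
# On a typical machine this starts in well under a second,
# which is why the container feels like just another local process.
docker run ubuntu /bin/echo "hello from a container"

# Or drop into an interactive shell inside the container:
docker run -i -t ubuntu /bin/bash
```

Compare that to booting a full VirtualBox or Xen guest, which takes on the order of minutes and a fixed slice of RAM per instance.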
Docker's key idea is to version the whole container rather than just its individual parts. Chef and Puppet have already made configuration versioned and incremental, but Docker takes this one step further: its layered file system makes it possible to take quick snapshots of the entire container. No need to learn a DSL to configure everything!
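The commit/layer workflow looks roughly like this (the image and repository names below are placeholders, not anything Docker ships with):

```shell
# Pull a base image and run a container that changes some state.
docker pull ubuntu
docker run ubuntu apt-get install -y curl

# Find the ID of the stopped container...
docker ps -a

# ...and commit it as a new image, layered on top of ubuntu.
docker commit <container-id> myname/ubuntu-with-curl

# Every commit is just another layer; inspect the stack:
docker history myname/ubuntu-with-curl
```

The Git analogy holds up well: commits are cheap because only the diff against the layer below is stored.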
Yes, this means all applications, all data and all configuration files are versioned and stored.
Have a nasty bug in production? Just create a production snapshot and clone the entire environment so you can debug it locally.
Did the application upgrade go less than perfectly? Just revert and try again.
Tests failing due to stale test data? Roll back the state and start every test from a clean slate.
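The "clone production to debug locally" scenario can be sketched with a few commands, assuming you have a registry to push to (myteam/app-debug is a placeholder repository name):

```shell
# On the production host: snapshot the misbehaving container as an image.
docker commit <container-id> myteam/app-debug

# Push the snapshot to a registry so it can be pulled elsewhere:
docker push myteam/app-debug

# On a developer machine: pull the snapshot and poke around inside it.
docker pull myteam/app-debug
docker run -i -t myteam/app-debug /bin/bash
```

Rolling back a bad upgrade works the same way in reverse: the previous image is still on disk, so you simply start a fresh container from it.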
All of this has been possible before, but never with this little effort. You can even share your container images, and others can incrementally build and enhance them. There are already loads of really interesting use-cases for Docker out there. The community is active, and the collection of images is constantly growing. In the long run, Docker has the potential to change the way we do application development and deployment.
A few words of warning, though: Docker is very Linux-centric, so in order to use Docker on a Mac or Windows, you need Vagrant or a similar tool to get a suitable Linux environment. This virtual host will then run your Docker containers. Talk about a serious case of Inception. Debugging means diving into the deep end of the abstraction layers and understanding what's going on under the hood.
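A minimal Vagrantfile for this setup might look something like the following. The box name and install command are illustrative; check the docker.io installation docs for the current recommended steps on your Vagrant version.

```ruby
# Vagrantfile: boot a small Ubuntu VM and install Docker inside it,
# so containers can be run from a Mac or Windows host.
Vagrant.configure("2") do |config|
  # precise64 is Vagrant's stock Ubuntu 12.04 box.
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  # Install Docker inside the VM (install script URL is a placeholder;
  # follow the official installation instructions).
  config.vm.provision :shell,
    inline: "curl -sSL https://get.docker.io/ubuntu/ | sudo sh"
end
```

After `vagrant up` and `vagrant ssh`, the docker commands above run inside the VM, one layer of virtualisation down from your actual machine.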
At version 0.6, Docker is still not ready for production. It's getting there, but it still feels somewhat rough around the edges. Granted, I only tried a few proof-of-concept exercises, but I still managed to run into some complications, especially when using the copy and insert commands to move files between the host and the container. None of the issues were big, but combined with the somewhat lacking documentation I did feel a bit stranded at times.
The biggest problem at the moment, as far as I am concerned, is the setup of networking between containers. Maestro seems like a potential solution for this, but it has yet to be updated to support the new local socket interface in version 0.6.
One last thing that should be mentioned is that many of the default use-case examples on docker.io deal with transient data (as opposed to, say, a production database). Upgrades on servers with persistent data come with some complexity, as one will need to merge live data into the new container. After a day of pondering, it's still unclear to me whether Docker actually supports this level of merging at all, or whether one has to dive into LXC to do so.
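A sketch of the pattern I would expect to use here, assuming Docker's still-young volume support behaves as advertised: keep the live data outside the layered file system entirely, so the image can be swapped without any merging. The image name is a placeholder, and volume semantics are still evolving in these early releases.

```shell
# Mark the data directory as a volume: it lives outside the
# image's layer stack rather than inside the container's snapshot.
docker run -v /var/lib/postgresql myteam/postgres

# An upgrade then means starting a container from the new image
# against the same data directory, instead of merging live data
# into the new container.
```

Whether this covers real production upgrade paths at 0.6 is exactly the open question; for anything more involved, dropping down to LXC still seems to be the fallback.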
Docker is open source, meaning the best way to deal with issues is to fork it and start fixing them. Docker has huge potential, so it is definitely a project I will keep a very close eye on in the near future.