
What is the deal with this whale and the containers on its back?

Unsure about Docker and containers? Great, then read on…

Although technically incorrect, we can use the analogy of virtualisation to explain containers. Think of containers as isolated runtime environments – or micro machines – which include everything required to run an app or multiple apps. These isolated machines run on top of a physical (or virtual) host but, unlike VMs, they share the host’s kernel (operating system). This is one of the fundamental differences between containers and VMs. In the VM world, you can have a single host on top of which you install a Windows VM, a Linux VM and an Ubuntu VM. In the container world, if you have a Linux host, you cannot run Windows containers, and vice versa.

The benefit of this model lies in speed, scale and portability. You can spin up a container in milliseconds and run tens of thousands of containers in one rack of servers, compared to thousands of VMs. It is also possible to run containers inside a VM. This model allows you to have a single host, virtualised using VMs, on top of which you run containers of different operating systems.

Key points:

  • Containers do not have their own OS; they use the host’s kernel
  • Containers do have their own network interfaces (multiple options, more on that later) and filesystem
  • Containers are isolated from each other (two containers cannot see each other) and have dedicated resources (soft and hard quotas for RAM, CPU, I/O, etc.)
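As a concrete illustration of those resource quotas, here is a hedged sketch using the Docker CLI (the flags are standard Docker options; the image name and limit values are arbitrary examples, and running this requires a Docker host):

```shell
# Hard memory limit of 256 MB, soft reservation of 128 MB,
# and at most half a CPU core for this container.
docker run -d --name web \
  --memory=256m \
  --memory-reservation=128m \
  --cpus="0.5" \
  nginx:alpine
```

If the container exceeds its hard memory limit it will be terminated, while the soft reservation only takes effect when the host is under memory pressure.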

Why Containers?

Why use containers? Mainly because of microservices and cloud-native applications. The idea behind microservices is to divide a monolith application into its fundamental core building blocks and then run each building block as an individual app inside a container. This has a number of advantages for developers and the business, the key one being speed to value: given that a container contains all that is required to run the app, developers can run the container on their laptop, develop a new feature, and then test and push the same image (container) to production on-prem or in the cloud. It does, however, have some drawbacks, mainly security and orchestration. Imagine a monolith online shopping app (eBay?) that has been divided into hundreds of components (e.g. comments, pricing, shipping, reviews, payment, user profile, etc.). Orchestrating all these microservices across multiple environments (on-prem/cloud) and ensuring a seamless user experience, while securing the environment, is no easy task. We will cover orchestration in a bit more detail later.

Container technologies

Docker: How it all began! Docker started as an open-source project that extended the capabilities of Linux Containers (LXC) and grew into the Docker Engine. Docker can effectively be viewed as the vessel by which container images are packaged and delivered. Docker.com offers both a Community Edition (free) and an Enterprise Edition, which can be downloaded / licensed. Docker has the largest market share and has partnered with Google, AWS, Azure and Microsoft.


Rkt (pronounced rocket) by CoreOS: a competitor to Docker whose strengths are security (as claimed by CoreOS) and open interoperability. Rkt uses an open-source container format called appc, while Docker originally used its own proprietary image format. It should be noted that since version 1.11, Docker has adopted the Open Container Initiative (OCI) format, a standard supported by Red Hat, Google, AWS, VMware, CoreOS and Microsoft.

Microsoft also partnered with Docker to create Windows Server containers and Hyper-V containers.

Windows Server Containers provide application isolation through process and namespace isolation technology. Just like Docker containers on Linux, these containers do not provide a security boundary against hostile code and should not be used to isolate untrusted code.

Hyper-V Isolation runs each container in a highly optimised virtual machine.

Key Container terminology

Container image: A package with all the dependencies and information needed to create a container. Think of this as your VM image (again, not technically correct).

Dockerfile: A text file that contains instructions for how to build a Docker image.
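For example, a minimal Dockerfile for a small Python web app might look like this (the base image, file names and port are illustrative assumptions, not taken from the article):

```dockerfile
# Start from an official base image
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 8000
CMD ["python", "app.py"]
```

Building the image is then a single command, e.g. `docker build -t myapp:1.0 .`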

Repository: A collection of related Docker images, each labelled with a tag that indicates the image version. Repositories can be on-prem (private) or off-prem (e.g. Docker Hub).

Registry: A service that provides access to repositories. The default registry for most public images is Docker Hub (owned by Docker as an organisation). A registry usually contains repositories from multiple teams.
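To make the registry / repository / tag relationship concrete, here is a hedged sketch using the Docker CLI (the private registry address and team name are hypothetical, and these commands assume a running Docker host):

```shell
# Pull the official nginx image from the default registry (Docker Hub)
docker pull nginx:1.25

# Re-tag it for a private registry; the repository here is "myteam/nginx"
# and "1.25" is the tag that indicates the version
docker tag nginx:1.25 registry.example.com/myteam/nginx:1.25

# Push it to the private registry
docker push registry.example.com/myteam/nginx:1.25
```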

Compose: A command-line tool and YAML file format with metadata for defining and running multi-container applications. You define a single application based on multiple images in one or more .yml files. After you have created the definitions, you can deploy the whole multi-container application with a single command (docker-compose up) that creates one container per image on the Docker host.
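As a minimal sketch, a docker-compose.yml for a three-container application might look like this (the service names, images and ports are illustrative assumptions):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine          # front-end container
    ports:
      - "80:80"                  # host:container port mapping
    depends_on:
      - api
  api:
    image: myteam/api:1.0        # hypothetical application image
    environment:
      - DB_HOST=db
  db:
    image: postgres:16           # database container
    environment:
      - POSTGRES_PASSWORD=example
```

Running `docker-compose up` in the directory containing this file starts all three containers together.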

Cluster: A collection of Docker hosts exposed as if it were a single virtual Docker host, so that the application can scale to multiple instances of the services spread across multiple hosts within the cluster. If you use Docker Swarm (next section) for managing a cluster, you typically refer to the cluster as a swarm instead of a cluster.

  • Swarm managers: the only machines in a swarm that can execute your commands, or authorise other machines to join the swarm as workers. Swarm managers can use several strategies to run containers, such as “emptiest node”, which fills the least-utilised machines with containers, or “global”, which ensures that each machine gets exactly one instance of the specified container. You instruct the swarm manager to use these strategies in the Compose file.
  • Workers are just there to provide capacity and do not have the authority to tell any other machine what it can and cannot do.
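The manager/worker setup above can be sketched with the Docker CLI (the IP address and stack name are hypothetical, and each command assumes a running Docker engine on that machine):

```shell
# On the machine that will become the swarm manager:
docker swarm init --advertise-addr 192.168.1.10

# The init command prints a "docker swarm join" command containing a
# token; run that printed command on each worker to join the swarm.

# Back on the manager, deploy a multi-container app from a Compose file:
docker stack deploy -c docker-compose.yml mystack

# List the services running across the swarm:
docker service ls
```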

Container Orchestration

You can manage container networking, configuration, load balancing, service discovery, high availability, Docker host configuration, and more through an orchestrator’s CLI or GUI. The orchestrator is responsible for running, distributing, scaling, and healing workloads across a collection of nodes. Typically the orchestrator is also the tool that creates and manages the Docker cluster itself. The most popular orchestrators are Docker Swarm, Mesosphere DC/OS, Kubernetes, and Azure Service Fabric.

Container Security

As mentioned before, containers (unlike VMs) share the underlying kernel, so there is no kernel-level isolation from the host OS. This means potential security threats have easier access to the entire system when compared with hypervisor-based virtualisation. One option to mitigate this is to run containers on top of isolated VMs. For example, containers with more stringent security requirements can run on one VM, while the others run on another. With this approach, if a security breach occurs at the container level, the attacker can only gain access to that VM’s host OS, not other VMs or the physical host.

Another approach is to use Microsoft’s Hyper-V Isolation, which expands on the isolation provided by Windows Server Containers by running each container in a highly optimised virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. According to Microsoft: “These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine.” Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later, at runtime, choose to run it instead as a Windows Server container.
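That runtime decision maps to a single Docker flag on Windows (shown for illustration only; it requires a Windows container host with Hyper-V enabled, and the image tag is one example version):

```shell
# Run a Windows image as a process-isolated Windows Server container...
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd

# ...or with Hyper-V isolation, wrapping the container in a lightweight VM
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd
```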