Docker Architecture

Docker is an open-source platform for developing, deploying, and running applications using container-based virtualization technology.

Docker products and tools

Docker Engine is the component that manages the whole system. Docker Hub is a repository where Docker images can be stored. Docker Swarm is a container-clustering technology that groups containers running on different physical machines, potentially located in different parts of the globe, as if they were on a single physical machine. Kitematic is a Docker client: in practice, a graphical interface for creating, managing, and destroying containers, as opposed to the command-line interface that we will see. (The others will be described shortly.)

The Linux Kernel and Docker's Architecture

Docker Engine (the daemon) is the program that enables containers to be built, shipped, and run. Docker Engine uses Linux kernel namespaces and control groups (cgroups). Namespaces give us the isolated workspace. Cgroups limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.

Let's look more closely at the Docker Engine, also called the Docker daemon: it is the main process, the program that builds, distributes, and runs containers. Container-based technology, and Docker in particular, relies on the host operating system's kernel to do what it needs to do. Specifically, Docker uses two features provided by the Linux kernel: namespaces and control groups (cgroups). Namespaces provide an isolated working space for each container, while cgroups limit the actions and resources available within each container. From the container's point of view, a namespace determines what the processes running in the container can see, and the cgroups determine what those processes can do with the resources the namespaces expose. In other words, namespaces define what we see, and cgroups define what we can do with it.
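On a Linux host you can inspect the namespace machinery described above directly, even without Docker installed, because every process already belongs to a set of namespaces:

```shell
# List the namespaces the current shell process belongs to.
# Docker creates fresh entries of these same kinds (mnt, pid, net,
# uts, ipc, ...) for each container it starts.
ls -l /proc/self/ns
```

Each entry is a symlink naming a namespace type and an inode; two processes whose symlinks point at the same inode share that namespace.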

Docker Client/Server and Daemon

Let's look at the details of Docker's architecture. It is a client/server architecture: the role of the server is played by the Docker Engine (the Docker daemon, i.e. the process that manages all creation, destruction, and execution of containers), while the Docker client is what we invoke on the command line. Note that when we write docker version, we are instructing the Docker client to pass the version command to the Docker daemon, which returns the information to us. So in general, the client takes input from the user and sends it to the Docker daemon; it is the daemon that acts, i.e. runs and distributes the containers. The client and the daemon may or may not reside on the same physical machine. We can control a remote Docker daemon: we can have a Docker client on one machine and send commands to a remote daemon located on a server where we want to run containers.
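The client/daemon split can be sketched with the DOCKER_HOST environment variable, which the Docker client reads to decide which daemon to contact (the host name below is a placeholder, not a real server):

```shell
# Point the local Docker client at a remote daemon over SSH.
# build-server.example.com is hypothetical.
export DOCKER_HOST="ssh://user@build-server.example.com"

# From now on, commands such as the following would be executed by the
# remote daemon, not a local one (they require Docker to be installed):
#   docker version
#   docker run hello-world

echo "$DOCKER_HOST"
```

Unsetting the variable (or passing -H to a single command) switches the client back to the local daemon's default socket.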

The Docker Engine

First, let us take a look at Docker Engine and its components so we have a basic idea of how the system works. Docker Engine allows you to develop, assemble, ship, and run applications using the following components:

  1. Docker Daemon: A persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker daemon constantly listens for Docker API requests and processes them.

  2. Docker Engine REST API: An API used by applications to interact with the Docker daemon; it can be accessed by an HTTP client.

  3. Docker CLI: A command line interface client for interacting with the Docker daemon. It greatly simplifies how you manage container instances and is one of the key reasons why developers love using Docker.
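The CLI and the REST API are two routes to the same daemon. As a sketch, assuming the daemon listens on its default Unix socket (the API version segment in the path is illustrative and should match your Engine release), the CLI command docker version corresponds to a direct HTTP request:

```shell
# The kind of request the CLI issues under the hood, expressed with
# curl. Executing it needs a live Docker daemon, so it is only printed.
api_request='curl --unix-socket /var/run/docker.sock http://localhost/v1.43/version'
echo "$api_request"
```

Against a running daemon, the call returns a JSON document with fields such as Version and ApiVersion, the same data the CLI renders.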

Docker Engine

Now that we see how the different components of the Docker Engine are used, let us dive a little deeper into the architecture.

Implementation

Docker is available for implementation across a wide range of platforms:

  • Desktop: macOS, Windows 10.

  • Server: Various Linux distributions and Windows Server 2016.

  • Cloud: Amazon Web Services, Google Cloud Platform, Microsoft Azure, IBM Cloud, and more.

Docker Architecture

The Docker architecture uses a client-server model and comprises the Docker Client, the Docker Host, network and storage components, and the Docker Registry/Hub. Let's look at each of these in some detail.


Docker Client

The Docker client enables users to interact with Docker. It can reside on the same host as the daemon or connect to a daemon on a remote host, and a single client can communicate with more than one daemon. The Docker client provides a command-line interface (CLI) that allows you to issue build, run, and stop commands to a Docker daemon.

The main purpose of the Docker client is to provide a means to pull images from a registry and have them run on a Docker host. Common commands issued by a client are:

docker build
docker pull
docker run

Docker Host

The Docker host provides a complete environment to execute and run applications. It comprises the Docker daemon, images, containers, networks, and storage. As previously mentioned, the daemon is responsible for all container-related actions and receives commands via the CLI or the REST API. It can also communicate with other daemons to manage its services. The Docker daemon pulls and builds container images as requested by the client. Once it pulls a requested image, it builds a working model for the container by following a set of instructions known as a build file. The build file can also include instructions for the daemon to pre-load other components prior to running the container, or instructions to be sent to the local command line once the container is built.
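The build file mentioned above is, in Docker's own convention, a Dockerfile. A minimal sketch (the base image, package, and file names are illustrative):

```dockerfile
# Start from a small base image pulled from a registry.
FROM alpine:3.19

# Pre-load a component into the image before any container runs.
RUN apk add --no-cache curl

# Copy an application script from the build context into the image.
COPY app.sh /usr/local/bin/app.sh

# Command the daemon runs when a container starts from this image.
CMD ["/usr/local/bin/app.sh"]
```

The daemon executes these instructions top to bottom when the client issues docker build, producing a layered, read-only image.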

Docker Objects

Various objects are used to assemble your application. The main Docker objects are:

Images

An image is a read-only binary template used to build containers. Images also contain metadata that describes the container's capabilities and needs. Images are used to store and ship applications. An image can be used on its own to build a container, or customized with additional elements to extend the current configuration. Container images can be shared across teams within an enterprise using a private container registry, or shared with the world using a public registry like Docker Hub. Images are a core part of the Docker experience, as they enable collaboration between developers in a way that was not possible before.
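Sharing an image through a private registry follows a tag-and-push pattern. A sketch with illustrative names (registry.example.com and myapp are placeholders, and the commands need a running daemon plus registry credentials):

```shell
# Tag a local image for a private registry, then push it.
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# On a teammate's host, pull the shared image back down.
docker pull registry.example.com/team/myapp:1.0
```

Pushing to Docker Hub works the same way, with your Hub account name in place of the registry host.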

Containers

Containers are encapsulated environments in which you run applications. A container is defined by its image and any additional configuration options provided when starting it, including but not limited to network connections and storage options. Containers only have access to resources that are defined in the image, unless additional access is granted when the container is created. You can also create a new image based on the current state of a container. Since containers are much smaller than VMs, they can be spun up in a matter of seconds and result in much better server density.
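Creating a new image from a container's current state, as described above, is done with docker commit. A sketch with illustrative names (requires a running daemon):

```shell
# Start an interactive container from a base image and modify it.
docker run -it --name workbox alpine:3.19 sh
# ... install packages, edit files, then exit ...

# Capture the container's current state as a new, reusable image.
docker commit workbox myteam/workbox:snapshot
```

In practice a Dockerfile is preferred for repeatability; docker commit is handy for snapshotting exploratory work.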

Networking

Docker implements networking in an application-driven manner and provides various options while maintaining enough abstraction for application developers. There are basically two types of networks available: the default Docker networks and user-defined networks. By default, you get three different networks when Docker is installed: none, bridge, and host. The none and host networks are part of Docker's network stack. The bridge network automatically creates a gateway and IP subnet, and all containers that belong to this network can talk to each other via IP addresses. This network is not commonly used, as it does not scale well and has constraints in terms of network usability and service discovery.

The other type of network is the user-defined network. Administrators can configure multiple user-defined networks. There are three types:

  • Bridge network: A user-defined bridge network is similar to the default bridge network, but differs in that containers within the network can communicate with each other without port forwarding. The other difference is that it has full support for automatic network discovery.

  • Overlay network: An Overlay network is used when you need containers on separate hosts to be able to communicate with each other, as in the case of a distributed network. However, a caveat is that swarm mode must be enabled for a cluster of Docker engines, known as a swarm, to be able to join the same group.

  • Macvlan network: When using bridge and overlay networks, a bridge resides between the container and the host. A macvlan network removes this bridge, providing the benefit of exposing container resources to external networks without dealing with port forwarding. This is realized by assigning each container its own MAC address, so it appears as a physical device on the network.
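The user-defined network types above map onto docker network create drivers. A sketch with illustrative names (requires a running daemon; the overlay example additionally requires swarm mode, and eth0 is a placeholder for your host's interface):

```shell
# User-defined bridge: containers on it reach each other by name,
# with no port forwarding needed between them.
docker network create --driver bridge app-net
docker run -d --name db  --network app-net redis:7
docker run -d --name web --network app-net nginx:alpine

# Overlay: spans multiple hosts; swarm mode must be enabled first.
docker swarm init
docker network create --driver overlay cluster-net

# Macvlan: containers get their own MAC address on the parent interface.
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 lan-net
```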

Storage

You can store data within the writable layer of a container, but this requires a storage driver. Being non-persistent, the data perishes when the container is removed, and it is not easy to transfer elsewhere. In terms of persistent storage, Docker offers four options:

  • Data Volumes: Data volumes provide the ability to create persistent storage, with the ability to rename volumes, list volumes, and list the container associated with a volume. Data volumes sit on the host file system, outside the container's copy-on-write mechanism, and are fairly efficient.

  • Data Volume Container: A data volume container is an alternative approach wherein a dedicated container hosts a volume that can then be mounted into other containers. In this case, the volume container is independent of the application container and can therefore be shared across more than one container.

  • Directory Mounts: Another option is to mount a host's local directory into a container. In the previously mentioned cases, the volumes must reside within Docker's volumes folder, whereas with directory mounts any directory on the host machine can be used as a source for the volume.

  • Storage Plugins: Storage Plugins provide the ability to connect to external storage platforms. These plugins map storage from the host to an external source like a storage array or an appliance. A list of storage plugins can be found on Docker’s Plugin page.
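The first three storage options above correspond to ordinary docker volume and docker run flags. A sketch with illustrative names (requires a running daemon):

```shell
# Data volume: named, persistent, managed by Docker on the host.
docker volume create app-data
docker run -d --name web -v app-data:/var/lib/app nginx:alpine
docker volume ls

# Data volume container: a dedicated container owns the volume, and
# other containers mount it with --volumes-from.
docker create --name store -v /shared busybox
docker run -d --name worker --volumes-from store alpine:3.19 sleep 1d

# Directory (bind) mount: any host directory as the source, here
# mounted read-only into the container.
docker run -d --name reader -v /var/log:/host-logs:ro alpine:3.19 sleep 1d
```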
