Docker & Kubernetes Overview

Henrique Siebert Domareski
Aug 2, 2021

Docker and Kubernetes are technologies that allow us to build and deliver applications in a more efficient way. Summarizing in one phrase, Docker is a containerization platform and Kubernetes is a container orchestrator for container platforms like Docker. In this article, I give an overview of Docker and Kubernetes.

Container

Let’s start by talking about containers, which are essential to better understand Docker. In the old days, whenever a new application was released, it was necessary to set up a new server to run it, which, as you can imagine, was neither convenient nor cheap. Some years later it became possible to use Virtual Machines (VMs) instead.

With VMs it became possible to publish many apps using a single server. Even so, each VM included a full copy of an operating system, the application, and the necessary binaries and libraries, which together took up many GBs.

Virtual Machines. Source: https://www.docker.com/resources/what-container

Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.

Containers are a way of virtualizing at the operating-system level, and they allow multiple isolated apps to run on a single real operating system. Multiple containers can run on the same machine and share the OS kernel, and each container runs as an isolated process in user space. Containers save resources: they take up less space than VMs (container images are typically tens of MBs in size), can handle more applications, and require fewer VMs and operating systems.

Containers. Source: https://www.docker.com/resources/what-container

Containers are a lot like virtual machines, just faster and more lightweight. They can be used in development, testing and production environments, and they work really well with microservices and cloud environments. Containers allow a developer to package up an application with all of the parts it needs, such as libraries, scripts, configuration, binaries and other dependencies, and deploy it as one package.

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. (Docker Container)
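As a quick illustration, here is roughly what that lifecycle looks like from the CLI (the image and container names are just examples):

```bash
# Run a container from an image, in the background, mapping host port 8080 to port 80 in the container
docker run -d --name web -p 8080:80 nginx

docker ps          # list running containers
docker stop web    # stop the container
docker start web   # start it again
docker rm -f web   # remove it (force removal even if it is running)
```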

Docker solves the classic “It works on my machine” problem, because a container is the same in any environment, so you don’t need to worry about configuration and requirements for each one. Docker also eliminates the need to configure separate development and test environments: once the app runs with Docker, it works the same way as in the production environment. Beyond that, containers are great when you think about scalability.

Docker Platform

Docker is an open-source tool designed to make it easier to create, deploy and run applications in containers. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. Docker comes in a free Community Edition (CE) and a paid Enterprise Edition (EE).

Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.

Docker provides the mechanics for starting and stopping individual containers. With Docker we no longer need to create separate environments for development and tests, and it is a great alternative to virtual machines because it is lightweight, faster and easy to use. As mentioned above, Docker also solves the famous “it works on my machine” problem.

PS: “Docker Inc.” is a company, and “Docker” is the technology. They are closely linked, but they are not the same.

Docker Architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.

Image source: https://docs.docker.com/get-started/overview/

Docker Daemon — The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

Docker client — The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

Docker Registry — A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
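Pulling an image is a simple way to see the client, daemon and registry working together: the client asks the daemon to pull, and the daemon fetches the image from the configured registry, which is Docker Hub by default. A small sketch (the private-registry hostname below is a placeholder):

```bash
# Image names without a registry prefix are resolved against Docker Hub
docker pull nginx:1.21

# Pulling from a private registry just means prefixing the registry host
docker pull registry.example.com/team/myapp:1.0
```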

Docker Image

A Docker Image is a file used to execute code in a Docker container. You can think of an image as something like a snapshot in a virtual-machine environment: it could be an OS, for example, and once you download or create this image and run it in a container, you are able to access that environment. You can also think of an image as a template for how to build a container.

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.

A Docker image contains everything necessary to run a containerized application: code, libraries, tools, dependencies and any other files the application needs. When a user runs an image, it can become one or many instances of a container. Once the image is deployed to a Docker environment, it can be executed as a Docker container. The docker run command creates a container from a specific image.

You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
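A small sketch of that build workflow (the image name and tags are illustrative):

```bash
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# After changing only a later instruction (e.g. the step that copies your code),
# a rebuild reuses the cached layers for the unchanged instructions and rebuilds
# only the layers that changed
docker build -t myapp:1.1 .
```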

Dockerfile

A Dockerfile is a text document that contains all the commands and instructions needed to build a Docker image: everything a user could otherwise run on the command line to assemble it. Docker can build images automatically by reading the instructions from a Dockerfile.

A Dockerfile specifies the operating system that will underlie the container, along with the languages, environmental variables, file locations, network ports, and other components it needs — and, of course, what the container will actually be doing once we run it.
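Here is a minimal Dockerfile sketch, following the ubuntu-plus-Apache example mentioned earlier (the paths and file names are illustrative):

```dockerfile
# Start from the official Ubuntu base image
FROM ubuntu:20.04

# Install the Apache web server
RUN apt-get update && \
    apt-get install -y apache2 && \
    rm -rf /var/lib/apt/lists/*

# Copy the site files into Apache's document root
COPY ./site/ /var/www/html/

# Port the container listens on
EXPOSE 80

# What the container actually does when it runs: keep Apache in the foreground
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

You would then build and run it with docker build and docker run, as shown above.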

Docker Hub

Docker Hub is a public image repository for Docker, where you can find all kinds of images (it is available at https://hub.docker.com). If you work with .NET, you can think of it as something similar to NuGet, which is a package repository.

Docker Hub is the world’s largest repository of container images with an array of content sources including container community developers, open source projects and independent software vendors (ISV) building and distributing their code in containers. Users get access to free public repositories for storing and sharing images or can choose a subscription plan for private repos.
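Sharing your own image on Docker Hub is a matter of logging in, tagging the image under your account and pushing it (the account and image names below are placeholders):

```bash
docker login
docker tag myapp:1.0 myaccount/myapp:1.0   # rename the local image under your Docker Hub account
docker push myaccount/myapp:1.0            # upload it to Docker Hub

# Anyone can then pull it from a public repository
docker pull myaccount/myapp:1.0
```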

Docker App

Docker App is a way to define, package, execute, and manage distributed applications and coupled services as a single, immutable object. It makes complex multi-service applications as easy to build, share and run as single containers. Docker App is the first implementation of the open Cloud Native Application Bundle (CNAB) specification and helps organizations improve collaboration across development teams, drive more transparency and consistency in application development, and simplify the tracking and versioning of distributed applications.

TIP: If you prefer a tool over typing docker commands on the command line all the time, you can use something like Portainer, which is a container management tool that works with Kubernetes, Docker, Docker Swarm and Azure ACI. It allows you to manage containers without needing to know platform-specific code. This is the website: https://www.portainer.io/.
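For reference, Portainer itself typically runs as a container. A sketch along the lines of its documentation (check the Portainer docs for the current image tag and ports):

```bash
# Create a volume for Portainer's data and run the Portainer CE container,
# giving it access to the local Docker socket so it can manage this Docker host
docker volume create portainer_data
docker run -d -p 9000:9000 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```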

Orchestration Systems

Container orchestration tools provide a framework for managing containers and microservices architectures at scale. Imagine that you have containerized applications running in the cloud; now you need to deal with several problems: how can all these containers be coordinated? How do you update the application without interrupting its services? How do you monitor the health of the application so you know when something goes wrong and restart it? To solve problems like these, we can use container orchestration solutions such as Kubernetes, Docker Swarm and others.

An orchestration system serves as a dynamic, comprehensive infrastructure for a container-based application, allowing it to operate in a protected, highly organized environment, while managing its interactions with the external world.

An orchestration system can handle a large number of containers and users interacting with each other at the same time, keeping track of the interactions between them. It can handle authentication and security, balance loads efficiently, manage multi-platform deployment by coordinating the containers, manage microservice availability and synchronization, and more.

Scalability

When a service needs to scale, we don’t make the container bigger; instead we add more containers, and when it’s necessary to scale down, we simply take some of the containers away. But many services need to communicate with each other, or depend on other services, and handling all of this while scaling up and down can get complicated. For that we can use an orchestrator, a system that manages all of these situations for us.

Containers are great when thinking about scalability. Imagine a scenario where you have an application, and each time your clients receive an email with a promotion, the number of accesses to your web application increases. In this case, you can use containers with a container orchestrator like Kubernetes, OpenShift or Docker Swarm (which is a bit easier for inexperienced operators) to provide as many containers as needed. For example, if each container can handle around 500 users making requests, and after the newsletter is sent you have more than 10 thousand clients making requests, the orchestrator can create new containers to meet this demand, and when the number of requests decreases, containers are removed again.
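With Kubernetes, for example, that scaling can be done manually or automatically (the deployment name is illustrative):

```bash
# Manually scale a deployment to 10 replicas
kubectl scale deployment web --replicas=10

# Or let Kubernetes scale between 2 and 20 replicas based on CPU usage
kubectl autoscale deployment web --min=2 --max=20 --cpu-percent=70
```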

A small remark: a container will not necessarily replicate the whole application (you can do that if you want); generally the replication is related to the infrastructure necessary to execute the application. Files, images and databases can live on shared storage and be exposed to the containers as volumes (volumes are the preferred mechanism for persisting data generated by and used by Docker containers), so the containers can access them as if they were their own.
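A small sketch of that idea with Docker volumes (the names are illustrative):

```bash
# Create a named volume and mount it into a container
docker volume create app_data
docker run -d --name web1 -v app_data:/usr/share/nginx/html nginx

# A second container can mount the same volume and share the same files
docker run -d --name web2 -v app_data:/usr/share/nginx/html nginx
```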

Kubernetes

Kubernetes (K8s) is an open-source orchestration system for automating the management, placement, scaling and routing of containers. You can think of “orchestration” as a kind of management or organizer tool.

Kubernetes came from Google and provides a common framework to run distributed systems, so development teams have consistent, immutable infrastructure from development to production for every project. Kubernetes can manage scaling requirements, availability, failover, deployment patterns, and more. It is supported by all the major cloud providers, it is widely used in the industry, and it works really well with Docker.

Kubernetes has many powerful and advanced capabilities, but it also comes with considerable complexity. For teams that have the skills and knowledge to get the most out of it, Kubernetes delivers:

  • Availability — Kubernetes clustering has very high fault tolerance built in, allowing for extremely large-scale operations. Kubernetes provides a framework to run distributed systems resiliently. Imagine that in a production environment you need to manage the containers that run the applications and ensure that there is no downtime; for example, if a container goes down, another container needs to start. Kubernetes can do that for you.
  • Auto-scaling — Kubernetes can scale up and scale down based on traffic and server load automatically. If your application is receiving too many requests, your app can be automatically scaled up, and when the number of requests decreases, your app will be automatically scaled down.
  • Self-healing — Kubernetes can restart containers that fail, replace containers, kill containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve (see the Deployment sketch after this list).
  • Secret and configuration management — Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
  • Storage orchestration — Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
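A minimal Deployment sketch that illustrates the availability and self-healing points above (all names, the image and the probe path are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # availability: keep 3 Pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myaccount/myapp:1.0   # illustrative image name
          ports:
            - containerPort: 80
          livenessProbe:               # self-healing: restart the container if this check fails
            httpGet:
              path: /health            # illustrative health-check endpoint
              port: 80
```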

In simple terms, Kubernetes emits commands to Docker instances, telling them when to start and stop containers and how to run them. Like Docker, Kubernetes can be used on-premise and also in the cloud. If you plan to use it in the cloud, there are many managed services available, such as AWS Elastic Kubernetes Service, Azure Kubernetes Service, Google Kubernetes Engine, and others. If you want to know more about K8s, check the Kubernetes website at https://kubernetes.io.

Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.

In simple terms, Kubernetes is a cluster where you have a master and many nodes.

Master

The master is responsible for coordinating and controlling the cluster nodes and for assigning tasks to them.

Nodes

A node is a virtual or a physical machine (depending on the cluster) that is added to the Kubernetes cluster. Each node is managed by the control plane and contains the services necessary to run Pods.

Nodes are where the instances of the application live. You should never have a cluster with a single node, because if that node has a problem, the application stops.

Pods

A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod contains one or more containers, which are relatively tightly coupled. Here are some characteristics of a Pod:

  • All the containers for a Pod run on the same node.
  • Containers running within a Pod share a network namespace (the Pod’s IP address and ports) with the other containers in that Pod.
  • Containers within a Pod can share files through volumes attached to the containers.
  • A Pod has an explicit lifecycle and will always remain on the node in which it was started.

Pods are ephemeral by nature: if a Pod dies (or the node it runs on dies, which stops all the Pods on that node), Kubernetes can automatically recognize that the Pod is no longer available and create a new replica to bring the service back online (via a ReplicaSet).

In general, for web applications it is common to have one Pod with a single container.
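A minimal single-container Pod sketch (the names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.21        # illustrative image
      ports:
        - containerPort: 80
```

In practice you rarely create Pods directly; they are usually managed by a higher-level object such as a Deployment, as in the earlier sketch.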

Namespaces

Namespaces provide a mechanism for isolating groups of resources within a single cluster. Pods are collected into namespaces, which are used to group Pods together for a variety of purposes.

A good practice is to use your own namespaces and avoid using the default namespace or the Kubernetes system namespaces, since relying on them can cause errors during automation.
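For example (the namespace name and manifest file are illustrative):

```bash
# Create a dedicated namespace and work inside it instead of "default"
kubectl create namespace my-team
kubectl apply -f deployment.yaml --namespace my-team
kubectl get pods --namespace my-team
```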

[EXTRA] Azure Kubernetes Service (AKS)

Microsoft Azure offers “Azure Kubernetes Service”, which simplifies the deployment, management, and operations of Kubernetes, making it quick and easy to deploy and manage containerized applications without having expertise in container orchestration.
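A sketch of getting started with the Azure CLI (the resource group and cluster names are placeholders; check the AKS documentation for current flags and options):

```bash
# Create an AKS cluster and fetch credentials for kubectl
az aks create --resource-group my-rg --name my-aks --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group my-rg --name my-aks
kubectl get nodes
```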

Conclusion

Docker is a great tool that helps us to develop and deliver applications faster. Kubernetes is an awesome tool that can be used to manage your containers and work with scalability. If you want to go to a cloud environment, or are working with microservices, Docker and Kubernetes can be really helpful for your project, and both can be used on-premise or in a public cloud environment. You can check the official documentation on Docker Docs and Kubernetes Documentation.

If you want to have hands-on experience, try the Docker Desktop which can be used on Windows and Mac and gives you a development Docker and Kubernetes environment. Also, try these two free web Playgrounds: Play with Docker and Play with Kubernetes.

Thanks for reading!
