Docker

Learn How to Develop, Deploy, and Run Your Application with Docker

In the early days, developers deployed applications directly on physical machines, each equipped with a single operating system. Applications had to share the runtime because there was only one userspace.

Because of the limitations of deploying applications on physical hardware, where a single application rarely utilized the resources of the entire host system, virtualization technology came into being, and the dynamics of application development started changing. With tools like Hyper-V and VMware, developers could create virtual machines and deploy a guest OS on each one, all on a single physical machine.

Credit: https://www.docker.com/wp-content/uploads/2021/11/container-vm-whatcontainer_2-480×383.png.webp

Each virtual machine runs its own guest operating system, and applications deployed on one virtual machine are completely isolated from applications running on another virtual machine on the same physical machine.

However, there is an overhead: each VM needs its own OS and is heavy in terms of size. Containers are now taking over the territory once owned by virtual machines.

Credit: https://www.docker.com/wp-content/uploads/2021/11/docker-containerized-appliction-blue-border_2.png.webp

What is a container?

According to the official definition from the docker.com website:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

Container images become containers at runtime and in the case of Docker containers — images become containers when they run on Docker Engine. Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences for instance between development and staging.

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container. These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
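Namespaces are visible from the host. As a rough sketch you can try once Docker is installed (assuming a native Linux host and a running container named c1, a hypothetical name), you can find the container's main process and list the namespaces it belongs to:

// print the PID of the container's main process
$ docker container inspect --format '{{.State.Pid}}' c1
// list that process's namespaces (replace <pid> with the printed PID)
$ sudo ls -l /proc/<pid>/ns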

Docker Architecture

Docker utilizes kernel features of the host operating system to provide containerization. Let's go deeper into the core architecture of Docker.

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.

Credit: https://docs.docker.com/engine/images/architecture.svg

Docker Daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

Docker Client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

Docker Desktop

Docker Desktop is an easy-to-install application for your Mac, Windows, or Linux environment that enables you to build and share containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.

Docker Registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry. When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.

Docker Image

A Docker image is just a template used to build a running Docker container, much like an ISO file used to create a virtual machine. Images are used to share containerized applications. Collections of images are stored in registries like Docker Hub or private registries.

Docker Container

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine. A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

Installing Docker

You can follow the instructions on https://www.docker.com/ to install Docker Desktop for your operating system.

To test that Docker is installed correctly, run the following command.

$ docker version

To get more information about your Docker Engine, you can run the following command:

$ docker info

With the docker info command, we can see how many containers are running, along with some information about the server.

Docker for developers

It's common for developers to work on multiple applications on the same system, and all those applications have dependencies. Sometimes those dependencies interfere with each other, where one application requires one version of a runtime and another needs a different version. Docker helps solve this by packaging each application and its dependencies into a separate container.

Running your first container

Now it’s time to run our first container in Docker.

// docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
$ docker container run hello-world

Now let's talk about what happened. Docker tried to find the image hello-world with the tag latest locally; since it couldn't, it downloaded the image from Docker Hub. It then created a container from this image, and the output of the container was printed.

The hello-world image is a special image provided by Docker that you can use to verify your installation is working as expected. Its output also describes the steps Docker took to generate the message.

Pull Docker Images

If you want to pull the image and not run the container at the same time you can use the following command.

// docker image pull [OPTIONS] NAME[:TAG|@DIGEST]
$ docker image pull nginx:latest

The docker pull command works by downloading all the layers of the image from the repository or registry and creating the image locally. Once a Docker image has been downloaded to the local host, it is kept in the local image cache, so future pulls of the same image are quick. You can view the list of local images using the following command.

$ docker image ls

Container Metadata

Using the Docker inspect command, we can fetch a container’s metadata, which will help in the debugging processes.

// docker container inspect [OPTIONS] CONTAINER [CONTAINER...]
$ docker container inspect <container_name>
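The inspect output is a long JSON document. As a sketch, you can pass a Go template to the --format flag to pull out a single field, for example the IP address of a container attached to the default bridge network:

$ docker container inspect --format '{{.NetworkSettings.IPAddress}}' <container_name>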

Starting a shell

We want to execute a shell inside the container, so we will run the following command.

$ docker container run -it alpine sh

Alpine is a Linux distribution with a very small footprint, which is ideal for the Docker world: the smaller your distribution image, the better.

After the alpine image, we specify the sh command that we want to run inside the container. You can see that we passed two option flags to the run command. The -i (interactive) option starts the container in interactive mode and keeps standard input open. The -t option allocates a pseudo-TTY and attaches it to the container's standard input.

We can also write this in expanded form as shown below.

$ docker container run --interactive --tty alpine sh

We can run the following command to verify that we are really inside the container shell.

/# cat /etc/os-release

We can run the following command to see which kernel the container is using.

/# uname -r

You can see the container is using the host kernel; as you may recall, containers are just processes on your host operating system.
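To confirm this, run the same command on the host; assuming a native Linux host, both print the same kernel version (on macOS or Windows, Docker Desktop runs containers inside a lightweight Linux VM, so the kernel you see belongs to that VM):

// on the host, outside the container
$ uname -r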

Running in background and foreground

When you run a container in interactive mode using the -it flags and want to detach from it, you can press Ctrl-P followed by Ctrl-Q. This detaches the standard input, output, and error streams from the running container; the container will still be running, which you can verify using the docker container ls command.

You can re-attach the local standard input, output, and error streams to a running container using the attach command.

// docker container attach [OPTIONS] CONTAINER
$ docker container attach pedantic_mestorf

If we press Ctrl-D, it will terminate the shell, and since the shell is the main process running in the container, it will terminate the container as well, which can be verified by running the following command.

$ docker container ls

If we want to see all containers, running or not, we can pass the -a flag to the previous command.

$ docker container ls -a

You can also start the container in detached mode using the -d flag.

$ docker container run -itd alpine sh

Start, Stop, and Destroy Docker Containers

Every time you use docker container run, you create a brand-new container, but sometimes you want to reuse a container that has already been created.

We can start a stopped container again using the command:

// docker container start [OPTIONS] CONTAINER [CONTAINER...]
$ docker container start pedantic_mestorf

We can stop one or more running containers at once using the command

// docker container stop [OPTIONS] CONTAINER [CONTAINER...]
$ docker container stop pedantic_mestorf

Once we call the stop command, Docker moves the container from the running state to the stopped state by sending a SIGTERM signal to the main process inside the container (followed by SIGKILL after a grace period).
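The grace period can be adjusted with the -t flag; for example, to give the container 30 seconds to shut down before it is killed:

$ docker container stop -t 30 pedantic_mestorf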

To stop all running containers at once, execute the following command.

$ docker container stop $(docker container ls -q)

We can completely remove a container using the docker rm command. Before removing a container, you have to stop it; alternatively, you can use the force option, shown below.

// docker container rm [OPTIONS] CONTAINER [CONTAINER...]
$ docker container rm pedantic_mestorf
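If the container is still running, the force option stops and removes it in one step:

$ docker container rm -f pedantic_mestorf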

You can also remove all the stopped containers at once using the prune command.

$ docker container prune

Or using the following command

$ docker container rm $(docker container ls -aq)

Cleaning up Docker Containers

When we run a Docker container, we get a brand-new container that persists even after it is stopped. Sometimes we might want containers to be cleaned up automatically as they stop, instead of having to remove them manually with the rm command.

We can pass the --rm flag to the run command, which will clean up the container when it stops.

$ docker container run --rm hello-world

Cleaning up Docker Images

We can also clean up local Docker images using the command

$ docker image rm hello-world 

Publishing a Service

Docker containers are mostly used to run services that need to be accessible from outside the container. This requires publishing a port mapping between the host and the Docker container, which is done using -p <host-port>:<container-port>.

$ docker container run -p 8080:80 nginx
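Assuming nothing else on the host is occupying port 8080, you can verify from another terminal that Nginx is reachable through the published port:

$ curl http://localhost:8080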

We can also verify which ports are mapped between the host and the running container.
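One way to do this, assuming the container from above is still running, is the docker container port command:

// docker container port CONTAINER [PRIVATE_PORT[/PROTO]]
$ docker container port <container_name>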

Bind Mount in Container

Now that we have the Nginx container running, we want to bind mount a folder on the host containing an index.html file into the Nginx container, so that Nginx serves the content of the host's index.html file.

$ docker container run -p 8080:80 -v /host/folder:/usr/share/nginx/html -d nginx

When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its absolute path on the host machine.

Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead.

Volumes in Docker

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.

In addition, volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

Credit: https://docs.docker.com/storage/images/types-of-mounts-volume.png

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data anywhere permanently, and to increase the container’s performance by avoiding writing into the container’s writable layer.
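As a minimal sketch, the --tmpfs flag mounts a tmpfs filesystem at the given path inside the container (the path /cache here is an arbitrary example):

$ docker container run -d --tmpfs /cache nginx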

Create & Manage Volumes

Create a volume.

$ docker volume create my-vol

List all volumes.

$ docker volume ls

Inspect the volume.

$ docker volume inspect my-vol

Start a container with a volume.

$ docker run -p 80:80 -d --name c1 -v my-vol:/app nginx

Remove a volume.

$ docker volume rm my-vol

Remove all unused volumes.

$ docker volume prune

Container Networking

To make your container able to communicate with the outside world, whether another server or another Docker container, Docker provides different ways of configuring networking. 

There are three networks Docker delivers out of the box: bridge, host, and none. You can list them with the following command.

$ docker network ls

By default, a container is attached to the bridge network when no network is explicitly specified.

Create user-defined Network

$ docker network create mynetwork

Attach container to a network

$ docker container run -it --rm --network mynetwork --name c1 alpine sh

Once attached, this container can talk to another container that is on the same network using the container name.

$ docker container run -it --rm --network mynetwork --name c2 alpine sh

Once inside the container, we can ping container c1 like this

/# ping c1

Cleanup Network

$ docker network rm mynetwork

Docker Images from containers

There are various ways to create a Docker image for later use. One way is by making changes to an existing container and committing them as an image. You can also create Docker images from a Dockerfile.

When a new container is created, a read/write layer is attached to it. However, this layer is destroyed when the container is removed, unless the user saves it. To create a new image from a container's changes, we use the commit command.

$ docker container run -d --name my-nginx nginx
// docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
$ docker container commit --author "NeerajK" --message "Nginx with static website" my-nginx my-nginx-website:v1.0.0
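Assuming the commit succeeded, the new image shows up in the local image list:

$ docker image ls my-nginx-website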

Working with Dockerfile

A Dockerfile is simply a text file that specifies instructions for automating the process of creating Docker images.

The Dockerfile is processed by the Docker engine line by line, performing the tasks specified in the file one at a time. Docker images created from a Dockerfile are constant, which means they are immutable and cannot be changed.

Creating Docker Images

There is often a need to customize the existing image or create your own image with another image as a baseline.

Let's say we want to create our own image based on the alpine image, with the bash shell installed. For this, we have to create a Dockerfile that provides instructions for docker build to create the image. Below is the content of the Dockerfile. Any line starting with # is a comment in a Dockerfile.

# A Dockerfile must begin with a FROM instruction
# We use alpine:latest as our base image to start with
FROM alpine:latest
# Update the package index with apk update
RUN apk update
# Install the bash shell with apk add bash
RUN apk add bash
# The COPY instruction copies new files or directories from <src>
# and adds them to the filesystem of the container at the path <dest>
# COPY [--chown=<user>:<group>] <src>... <dest>
# (this assumes a file named local-file exists in the build context)
COPY ./local-file /app/
# Define an environment variable to change the shell prompt
ENV PS1 "\h:\w# "
# The bash command will run when the container starts.
# There can only be one CMD instruction in a Dockerfile. If you list
# more than one CMD, only the last CMD will take effect.
CMD bash

You can include many more instructions, such as EXPOSE, VOLUME, WORKDIR, ENV, LABEL, and ENTRYPOINT.

Now we will build the Dockerfile to create our custom image.

// docker image build [OPTIONS] PATH | URL | -
$ docker image build -t myalpine:latest .

If you re-run the build, Docker reuses the intermediate layers from previous builds as long as nothing within those layers has changed. A Docker image contains read-only layers that represent the Dockerfile instructions. The layers are stacked, with each one representing a delta of the changes from the layer below it.
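You can see these stacked layers for the image we just built using the history command:

// docker image history [OPTIONS] IMAGE
$ docker image history myalpine:latest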

Once you use an image and create a container from it, a new writeable layer, also called a container layer, is added on top of the existing layers. The writeable layers hold all the changes performed on the container, such as deleting files, changing permissions, creating new files, etc.

$ docker container run -it myalpine:latest

Export Docker Images

Docker allows you to export images as tarballs so they can be imported on other machines.

// docker image save [OPTIONS] IMAGE [IMAGE...]
$ docker image save -o nginx.tar nginx

Executing the command creates a tarball of the image in the current directory, which can then be copied to and imported on another host.

Import Docker Tar Images

For Docker to use an image, it has to be stored locally. You can achieve this by pulling the image from a registry or by loading a tarball. Note that a tarball created with docker image save is loaded back with docker image load; the docker image import command, by contrast, is meant for flat filesystem tarballs created with docker container export.

// docker image load [OPTIONS]
$ docker image load -i nginx.tar

Docker Registry

As covered earlier, a Docker registry stores Docker images, and Docker Hub is the public registry that Docker uses by default. Before pushing images to a registry, or pulling private ones, you need to authenticate against it.

The docker login and docker logout commands connect to the hub.docker.com server by default, but you can specify which server to use.

// docker login [OPTIONS] [SERVER]
$ docker login
// docker logout [SERVER]
$ docker logout

Searching for an Image in the Docker Registry

To search for an image in the Docker registry, we use the following command.

// docker search [OPTIONS] TERM
$ docker search --limit 10 alpine

The docker search output gives information about the alpine images, such as name, description, number of stars, whether it is an official image, and its automation status.

Push an Image to the Docker Registry

We can also share the image with others by pushing it to the Docker registry at https://hub.docker.com. You need to tag the local image as <username>/<reponame>, then log in and push it to the registry.

// docker tag local-image:tagname <docker_id>/new-repo:tagname
// docker push <docker_id>/new-repo:tagname
$ docker tag myalpine:latest kushneeraj/myalpine:latest
$ docker push kushneeraj/myalpine:latest

Run Command inside Docker Container

Sometimes, for debugging, we want to run commands inside a Docker container, which can be done using exec. Let's first run a container so that we can then run commands inside it.

$ docker container run --name container1 --rm -d -p 80:80 nginx:latest
$ docker container exec container1 cat /etc/nginx/nginx.conf
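We can also combine exec with the -it flags to get an interactive shell inside the running container:

$ docker container exec -it container1 sh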

Work with Container Logs

Docker containers write their logs to STDOUT/STDERR, and these logs are accessible without logging into the container.

// docker container logs [OPTIONS] CONTAINER
$ docker container logs container1

To add more functionality to the logs output, use command flags such as -t, which displays timestamps for the logs. Another useful flag is -f, which follows the log output with tail-like behavior.
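For example, to follow the logs of container1 with timestamps attached:

$ docker container logs -t -f container1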
