What is Docker, and What is it Used For? Complete Docker Tutorial

What Is Docker, and Why is It So Popular?

Docker is a tool that allows developers and system administrators to build, deploy, and run applications in lightweight, portable, self-contained units called containers. Since its release in 2013, Docker has become the most popular way of running containers, with 83% of enterprises either currently using it or planning to use it.

But what exactly is Docker, and why is it so popular? To answer that question, we first need to look at the core technology behind it: containers.

What are Containers?

Containers are virtualized environments for running software. Containers run as processes within a host operating system (OS), allowing them to share resources with the host. Containers are also somewhat isolated from other processes, letting developers run commands and even install software in containers without changing the host. Containers are also portable; a container will behave the same way on a Windows host as it does on a Linux host.

Containers vs. Virtual Machines

Containers are often compared to virtual machines (VMs), but there are some major differences. A VM emulates an entire host machine, including virtual hardware and a full guest OS. Each VM requires a significant chunk of dedicated RAM, CPU time, and several GB of disk space. Since the hardware itself is virtualized, software in a VM runs more slowly than it would directly on the host.

Instead of virtualizing an entire host, containers only virtualize the OS. They provide their own filesystem, network address space, and even user accounts, while sharing resources such as CPU, RAM, and disk space with other processes. This makes them much more lightweight than VMs without losing the benefits of isolation.
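This split between shared kernel and isolated environment is easy to observe from the command line. As a quick sketch (assuming Docker is installed and pulling the public alpine image), the following commands contrast what a container shares with the host and what it keeps to itself:

```shell
# Containers share the host's kernel...
uname -r                                 # kernel version on the host
sudo docker run --rm alpine uname -r     # the same kernel version, reported from inside a container

# ...but keep their own filesystem and hostname.
sudo docker run --rm alpine ls /         # Alpine's root filesystem, not the host's
sudo docker run --rm alpine hostname     # the container's ID, not the host's name
```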

How Docker Made Containers Mainstream

Containers have been around far longer than Docker, but Docker brought them into the public eye. Installing Docker gives you a platform for running containers (the Docker Engine), a command-line tool for creating and managing containers, and easy access to millions of pre-built containers through an online repository called the Docker Hub.

Docker also introduced Dockerfiles, which are text files containing commands that will recreate a container from scratch. Running a Dockerfile generates an image, which is used as a template for newly created containers. A well-crafted Dockerfile generates the exact same image each time, letting you recreate the same image on two different machines.
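As a minimal sketch of the idea (the base image and package here are illustrative, not the ones used later in this tutorial), a Dockerfile is just a list of instructions, each of which adds a layer to the resulting image:

```dockerfile
# Start from a small, pinned base image so rebuilds are reproducible
FROM alpine:3.8

# Install a package into the image at build time
RUN apk add --no-cache curl

# Default command to run when a container starts from this image
CMD ["curl", "--version"]
```

Pinning exact versions (alpine:3.8 rather than alpine:latest) is part of what lets the same Dockerfile produce the same image on two different machines.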

How Docker Works

Docker simplifies the way applications are deployed. Rather than requiring an entire virtual operating system, applications ship with their own dependencies and no configuration entanglement with the host, which is why a containerized app can run anywhere.

Docker's workflow is built around a few core concepts: nodes, images, containers, and the orchestrators that coordinate them.

Nodes

A node is any computer running the Docker Engine. Its primary uses are building images and hosting containers. When run in Swarm mode, nodes can also deploy containers to other nodes.

Images

An image is a lightweight read-only snapshot of a container in a specific state that includes everything needed to run a piece of software – the code, a runtime, libraries, environment variables, and config files.

An image is essentially a container template: you can run a container from an image, but you can’t run an image directly. Since images are read-only, you can revert an application back to a previous state simply by regenerating its container. You can also create different versions of the same image to quickly roll out updates, test changes, or scale up your application.
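For example (using a hypothetical my-app image, not one from this tutorial), tagging lets you keep several versions of an image side by side, and any of them can back multiple containers at once:

```shell
# Snapshot the current build under a version tag
sudo docker tag my-app:latest my-app:1.0

# Start two independent containers from the same image
sudo docker run --detach --name app-a my-app:1.0
sudo docker run --detach --name app-b my-app:1.0

# Roll back by recreating a container from an older tag
sudo docker run --detach --name app-old my-app:0.9
```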

Orchestrators

Orchestrators are tools used to manage multiple container nodes. The official orchestrator for Docker is Docker Swarm. With Swarm, nodes are split into master nodes and worker nodes. Master nodes distribute new workloads to worker nodes based on their available capacity and current workload.
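As a rough sketch of how a Swarm is assembled (the Docker docs refer to master nodes as "managers"; the token and IP below are placeholders printed by the init command, not values to copy):

```shell
# On the node that will act as the master (manager):
sudo docker swarm init

# docker swarm init prints a join command with a token; run it on each worker:
sudo docker swarm join --token <token> <manager-ip>:2377

# From the master, start a replicated service; Swarm spreads the
# replicas across workers based on their available capacity:
sudo docker service create --replicas 3 --name web nginx:1.15.5
```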

The most commonly used container orchestration platform is Kubernetes. Like Docker Swarm, Kubernetes automatically load balances containers across multiple nodes while also allowing for self-healing, automatic scaling, and fault tolerance. In addition to Docker containers, Kubernetes also supports other container runtimes including LXC and rkt.
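A loose Kubernetes equivalent of that Swarm workflow, assuming a cluster is already running and kubectl is configured against it, might look like:

```shell
# Run three replicas of an Nginx container, scheduled across the cluster's nodes
kubectl create deployment web --image=nginx:1.15.5
kubectl scale deployment web --replicas=3

# Kubernetes self-heals: if a pod (or its node) dies, a replacement is scheduled
kubectl get pods --watch
```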

Getting Started

To demonstrate how Docker works, let’s create a simple web server hosting some LogDNA artwork. Assuming Docker is already installed, we need to:

  1. Create a Dockerfile
  2. Generate an image using the Dockerfile
  3. Create a new container based on the new image

Let’s start by creating the Dockerfile. We’ll host the artwork using Nginx. Since Nginx provides an official Docker image, we’ll use that as the base for our new image. We’ll also need to download the artwork into the new image. The base Nginx image is built on Debian Stretch, so we can install and use standard Linux commands such as git and mv to download the repository and move its contents into Nginx’s public HTML folder:

# Dockerfile
# Use the official Nginx container as a base
FROM nginx:1.15.5

# Install the git client
RUN apt-get update && apt-get install -y git

# Download the LogDNA artwork repository
RUN git clone https://github.com/logdna/artwork.git

# Move the repository to Nginx's public HTML folder
RUN mv artwork/ /usr/share/nginx/html

Normally a Dockerfile would include a CMD line defining the command to run when the container starts, but since the base Nginx image already provides one, we can leave it out. Next, build the image using docker build. You can optionally use the --tag parameter to give it a name:

$ sudo docker build --tag logdna-artwork-image .

This starts the process of retrieving the base Nginx image, running the commands specified in the Dockerfile, and creating a new image with the name specified in the --tag parameter. Once the process is complete, we can verify the new image using the docker images command:

$ sudo docker images
REPOSITORY             TAG      IMAGE ID       CREATED         SIZE
logdna-artwork-image   latest   2fb0f6ab98e8   3 minutes ago   209MB
nginx                  1.15.5   dbfc48660aeb   2 weeks ago     109MB

Now we can create a container using the docker run command. We can give the container a name using the --name parameter and map the container’s HTTP port to the host’s HTTP port using the --publish parameter:

$ sudo docker run --name logdna-artwork-container \
    --publish 80:80 logdna-artwork-image:latest

Once the container is started, we can access the images from the host by opening localhost in a browser:
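If you’d rather check from a terminal, a quick way to confirm the container is serving traffic (assuming curl is installed on the host):

```shell
# Ask Nginx in the container for its response headers via the published port
curl -I http://localhost/

# List running containers to confirm the name and port mapping
sudo docker ps --filter name=logdna-artwork-container
```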


Congratulations! You just ran your first Docker container. What application will you choose to containerize next?

Learn more about Docker and container orchestration

Container technology is quickly taking over as the preferred way to run applications. If you want to learn more about Docker, you can read the Get Started Tutorial on the Docker website.

If you want to learn more about container orchestration and Kubernetes, read our introduction to Kubernetes.

Ready to get started?

Get connected with one of our technical solutions experts. We can create a custom solution to solve your logging needs.

Get Started