How to Log in Docker – Docker and Container Logging The Right Way

December 3, 2018

Intro to Docker Logging

Logging containers means rethinking the way you log your applications. Unlike traditional application logging, container logging needs to account for the isolated and ephemeral nature of containers. To effectively monitor Docker containers, you need a logging strategy that lets DevOps teams log potentially thousands of container instances, collect vital metadata such as the container’s name and hostname, and forward logs to a centralized service.

If you’re new to Docker, you can learn more by reading our complete guide to Docker. In short, Docker is a platform for packaging and running applications as self-contained units called containers. In this post, we’ll explain how Docker handles logs and how DevOps teams can best adapt their logging strategy.

How Docker Handles Logs

Before digging into the details, it’s important to understand how Docker logs are generated and stored. There are two types of logs Docker handles:

  1. Daemon logs: these are generated by the Docker service and its components
  2. Container logs: these are generated by containers

Daemon logs are written to either the system’s logging service or to a log file, depending on the host operating system (OS). Container logs, on the other hand, are collected directly from containers and passed on to a logging driver. Logging drivers are configurable services that format and write each event to a destination such as a file or syslog server. Any output that a container writes to STDOUT or STDERR is automatically handled by the configured logging driver.

By default, Docker comes with a logging driver that writes events to a JSON file. You can access events by opening the file directly or by using the docker logs command. Docker also records the timestamp of each event and the stream (STDOUT or STDERR) it was received on.
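As a quick illustration, here is how you might read events back and what a stored event looks like. The container name is illustrative, and the sample event below is a mock-up in the json-file driver's format, not output from a real container:

```shell
# On a Docker host, read events back through the configured logging driver:
#   docker logs --timestamps --tail 50 my-nginx
#
# With the default json-file driver, each event is stored as one JSON object
# per line under /var/lib/docker/containers/<container-id>/<container-id>-json.log.
# A sample event (mock-up): Docker wraps whatever the container wrote in "log",
# and adds the "stream" and "time" fields itself.
event='{"log":"GET / HTTP/1.1 200\n","stream":"stdout","time":"2018-12-03T10:15:30.000000000Z"}'

# Pull out the fields Docker appended (sed keeps this dependency-free; jq is
# the nicer option where available):
stream=$(printf '%s' "$event" | sed 's/.*"stream":"\([^"]*\)".*/\1/')
ts=$(printf '%s' "$event" | sed 's/.*"time":"\([^"]*\)".*/\1/')
echo "$stream $ts"
```

Note that the log line itself keeps its trailing newline inside the "log" field, which is why each stored event stays on a single line of the JSON file.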

Configuring the Logging Driver

Docker supports a number of additional logging drivers, including syslog, journald, and fluentd. Many of these drivers append additional information such as the container name, image name, and hostname. You can configure the default logging driver by editing the Docker daemon configuration file (daemon.json) and changing the log-driver parameter.
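A minimal daemon.json sketch is shown below (typically /etc/docker/daemon.json on Linux). The udp://localhost:514 address is a placeholder assumption for a local syslog server, not a LogDNA endpoint; the change applies to newly started containers after the daemon is restarted:

```shell
# Sketch of a daemon-wide driver configuration. Merge these keys into any
# existing /etc/docker/daemon.json rather than overwriting it.
config='{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://localhost:514"
  }
}'
printf '%s\n' "$config"
# Then restart the daemon so new containers pick it up, e.g. on systemd hosts:
#   sudo systemctl restart docker
```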

Alternatively, you can specify a logging driver on a per-container basis. When starting a container, use the --log-driver and --log-opt parameters to specify which driver to use. For example, you can create an Nginx container that sends logs to LogDNA instead of the local syslog server using the following command (replace <LogDNA syslog port> with your provisioned syslog port number):

$ sudo docker run --log-driver syslog --log-opt syslog-address=udp://<LogDNA syslog port> nginx

While this is a good way to test a logging driver configuration, we recommend using the global syslog logging driver and installing the LogDNA agent to collect your logs. This lets you track all of your container logs without additional configuration, while also preserving important metadata such as the container and image name.

What Not to Do When Logging Containers

Container logging is very different from traditional application logging. Watch out for these antipatterns when implementing your container logging strategy.

1. Don’t Handle Logging Entirely in Your Application

Application logging frameworks like log4net and Bunyan are great for traditional applications since they handle the entire log pipeline. In a container, however, using a framework to deliver logs bypasses Docker’s logging driver. This makes it harder to:

  • Collect and centralize container logs through Docker
  • Reconfigure your log pipeline simply by changing Docker’s configuration
  • Track important metadata including container and image names

In addition, instead of having a single stream of logs from the Docker daemon, you now have dozens, hundreds, or even thousands of log streams from each of your containers. Not only is this harder to scale, but it’s also harder to maintain.
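The alternative is simple: have your application write events to STDOUT and STDERR and let Docker's logging driver handle delivery. A minimal sketch, with a hypothetical JSON event format and an illustrative container name:

```shell
# In a container, anything written to STDOUT/STDERR flows through the logging
# driver automatically, e.g.:
#   docker run --name web -d nginx
#   docker logs web
#
# Inside your application, the same principle: emit structured events to the
# standard streams instead of shipping them yourself (event shape is a
# hypothetical example, not a required format).
msg='{"level":"info","msg":"request served","service":"web"}'
printf '%s\n' "$msg"          # STDOUT: picked up and tagged by the driver
err='{"level":"error","msg":"upstream timeout","service":"web"}'
printf '%s\n' "$err" >&2      # STDERR: the driver records the stream name
```

Your application stays free of delivery logic, and switching destinations later becomes a Docker configuration change rather than a code change.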

2. Don’t Use Sidecar Containers

Sidecar containers are often used to handle logging for other containers. A sidecar runs a logging agent that collects logs from the application container and forwards them to an outside destination. This lets you keep your core application relatively untouched while giving you greater control over your log stream than a logging driver provides.

With this approach, each application container has its own sidecar container. This adds a significant amount of operational and configuration overhead, while adding complexity to deployments.

3. Don’t Keep Docker Logs on the Host

Although storing logs on a host machine is much safer than storing logs in a container, there’s still a risk of data loss. Hosts can fail, log files can become corrupt, and if Docker’s log buffer becomes full, messages may be overwritten. To prevent this, consider sending your logs to a centralization service for long-term storage and archiving. Using solutions such as LogDNA, you can also search, monitor, graph, and live tail your log data from a single unified interface.


Having a good Docker logging strategy will greatly improve your chances of a successful container deployment. LogDNA provides the best log management tools on the market needed to collect, centralize, and monitor Docker logs in a matter of minutes. Sign up for a free trial account and check out our Docker logging guide for more information.
