Containerization brings predictability and consistency across the development pipeline. A developer can package code in a container and ship that same container to production, knowing it will behave the same way. For this consistent experience to happen, however, many cogs and levers must work together in the underlying layers of the container stack. Containers abstract away the complex internals of infrastructure and deliver a simple, consistent user experience. The part of the Docker stack that's especially important in this respect is the orchestration layer.
What is Kubernetes?
An orchestration tool like Kubernetes takes care of the complexity of managing numerous containers by providing many smart defaults. It handles changes and configuration for groups of containers called pods, and for the set of machines that runs them, called a cluster. In doing so, it lets you focus on what matters most to you – the code and data housed in your Kubernetes cluster. Because of these advantages, Kubernetes has become the leading container orchestration tool today.
Kubernetes makes it easy to manage containers at scale, but it comes with a steep learning curve. This is why numerous vendors offer managed Kubernetes services – Platform9, Kismatic, OpenShift, and CoreOS Tectonic, to name a few. However, learning the ins and outs of Kubernetes is well worth the effort because of the power and control it gives you.
No matter which route you take to managing your Kubernetes cluster, one fundamental requirement to running a successful system is log analysis. Traditional app infrastructure required log data to troubleshoot performance issues, system failures, bugs, and attacks. With the emergence of modern infrastructure tools like Docker and Kubernetes, the importance of logs has only increased.
The Importance of Log Data in Kubernetes
Log data is essential to Kubernetes management. Kubernetes is a very dynamic platform with tons of changes happening all the time. As containers are started and stopped, and IP addresses and loads change, Kubernetes makes many minute adjustments to ensure services stay available and performance isn't impacted. Still, there's the odd time when things break or performance slows down. At those times, you need the detail that only log data can provide. Beyond performance and troubleshooting, you also need log data to ensure compliance with laws like HIPAA and PCI DSS. And if there's a data breach, you'll want to go back in time to identify the origin of the attack and its progression across your system. For all these use cases, log data is indispensable.
There are many ways you can access and analyze Kubernetes log data ranging from simple to advanced. Let’s start with the simplest option and move up the chain.
Monitoring a Pod
Pod-level monitoring is the most rudimentary form of viewing Kubernetes logs. You use kubectl commands to fetch log data for each pod individually. These logs are stored in the pod, and when the pod dies, the logs die with it. They are useful when you're just starting out and have only a few pods: you can instantly check the health of a pod without needing a robust logging setup for a big cluster.
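For example, assuming a pod named my-app-pod (a hypothetical name – substitute one from your own cluster), the basic kubectl invocations look like this:

```shell
# Fetch the logs of a single pod (pod name is hypothetical)
kubectl logs my-app-pod

# If the pod runs more than one container, name the container explicitly
kubectl logs my-app-pod -c my-container

# Follow the log stream, similar to tail -f
kubectl logs -f my-app-pod

# Show logs from the previous instance of a crashed container
kubectl logs my-app-pod --previous
```

The --previous flag is particularly handy when a container is crash-looping, since the current instance may not have logged anything useful yet.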
Monitoring a Node
Logs collected on each node are stored in a JSON file. This file can get really large, so you can use the logrotate tool to split the log data into multiple files once a day, or when the data reaches a particular size like 10MB. Node-level logs are more persistent than pod-level ones: even if a container is restarted, its previous logs are retained on the node. But if a pod is evicted from the node, its log data is deleted.
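As a sketch, a logrotate rule implementing that policy might look like the fragment below. The log path assumes Docker's default location for container logs, and the size and retention values are illustrative:

```
/var/lib/docker/containers/*/*.log {
    daily               # rotate once a day...
    maxsize 10M         # ...or sooner, if the file exceeds 10MB
    rotate 5            # keep five rotated files before discarding
    compress            # gzip rotated files to save disk space
    missingok           # don't error if a container's log is gone
    copytruncate        # truncate in place so the runtime keeps writing
}
```

copytruncate matters here: the container runtime holds the log file open, so moving it aside without truncating would leave the runtime writing to a deleted file.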
While pod-level and node-level logging are important concepts in Kubernetes, they aren’t meant to be real logging solutions. Rather, they act as a building block for the real solution – cluster-level logging.
Monitoring the Cluster
Kubernetes doesn't provide a default logging mechanism for the entire cluster; it leaves this to the user and third-party tools to figure out. One approach is to build on node-level logging: run a logging agent on every node and combine their output.
On Google Kubernetes Engine, the default option is Stackdriver, which uses a Fluentd agent and writes log output to a local file. You can also set it to send the same data to Google Cloud, from where you can use Google Cloud's CLI to query the log data. This, however, is not the most powerful way to analyze your log data.
The ELK Stack
The most common way to implement cluster-level logging is to use a Fluentd agent to collect logs from the nodes and pass them on to an external Elasticsearch cluster. The log data is stored and processed using Elasticsearch, and can be visualized with a tool like Kibana. The ELK stack (Elasticsearch, Logstash, Kibana) is the most popular open source logging solution today, and its components often form the base of other modern logging solutions, including LogDNA (but that's a topic for a whole other post). The ELK stack offers more powerful logging, and more extensibility, than the Stackdriver / Google Cloud option.
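As a sketch, a node-level Fluentd agent for this setup tails the container log files and forwards them to Elasticsearch. The hostname below is a placeholder, and the elasticsearch output plugin (fluent-plugin-elasticsearch) must be installed alongside Fluentd:

```
<source>
  @type tail                                  # tail container log files on the node
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json                                 # container runtime writes JSON lines
</source>

<match kubernetes.**>
  @type elasticsearch                         # requires fluent-plugin-elasticsearch
  host elasticsearch.example.internal         # placeholder Elasticsearch endpoint
  port 9200
  logstash_format true                        # daily, Kibana-friendly index names
</match>
```

Running this agent as a Kubernetes DaemonSet guarantees exactly one copy per node, which is what makes the node-level approach scale with the cluster.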
One example of an organization that uses this setup for centralized logging of its Kubernetes clusters is Samsung. They use the Fluentd / ELK stack combination, but add Kafka for an extra step of buffering and monitoring. Samsung has even open sourced this configuration of tools under the name K2 Charts.
You can stream logs of different formats together, but they would be harder to analyze, considering the scale of Kubernetes and how complicated Kubernetes log collection can get. Instead, the preferred way is to attach a sidecar container for each type of log data. A sidecar container is dedicated to collecting logs and is very lightweight; each one runs a Fluentd agent that collects and transports logs to a destination.
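A minimal sketch of the sidecar pattern is below: the application writes its logs to a shared volume, and a dedicated sidecar container reads from it. The image names and paths are illustrative, and a real sidecar would run a Fluentd agent configured with an output destination:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest            # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # the app writes its log files here
  - name: log-collector             # sidecar dedicated to log collection
    image: fluent/fluentd:latest    # would carry a config with an output target
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true                # the sidecar only reads the logs
  volumes:
  - name: app-logs
    emptyDir: {}                    # shared scratch volume, lives with the pod
```

Because the two containers share the pod's volume, the app needs no knowledge of the logging pipeline; you can swap the collector without touching the application image.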
Archived Log Storage
Storing logs is critical, especially for security. For example, you may find out about a breach in your system that started two years ago, and want to trace its development. In this case, you need archived log data to go back to that point in time, and see the origin of the breach, and to what extent it has impacted your system.
Kubernetes offers basic local storage for logs, but this is not what you'd want for a production cluster. You can either use object storage like AWS S3 or Azure Blob Storage, or ask your log analysis vendor for extended storage on their platform. For archived data, it's best to use cloud storage rather than on-premises servers, as it's more cost-efficient and can be easily accessed when needed.
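For example, if you already run Fluentd, the fluent-plugin-s3 output can ship logs to an S3 bucket for archival. The bucket name and region below are placeholders, and the node (or its IAM role) needs write access to the bucket:

```
<match kubernetes.**>
  @type s3                          # requires fluent-plugin-s3
  s3_bucket my-log-archive          # placeholder bucket name
  s3_region us-east-1               # placeholder region
  path logs/                        # key prefix inside the bucket
  time_slice_format %Y%m%d%H        # one object per hour of logs
</match>
```

Pairing this with an S3 lifecycle rule that moves old objects to colder storage tiers keeps multi-year retention affordable.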
Dedicated Log Analysis Platforms
The ELK stack is a common way to access and manage Kubernetes logs, but it can be quite complex given the number of tools to set up and manage. Ideally, you want your logging tool to get out of the way and let you focus on your log data and your Kubernetes cluster. In that case, it pays to go with a dedicated log management and analysis platform like LogDNA, which comes with advanced cloud logging features and is fully managed, so you don't have to worry about availability and scaling of your log infrastructure.
You can start collecting Kubernetes logs in LogDNA using just two simple kubectl commands:
kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=YOUR-INGESTION-KEY-HERE
kubectl create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml
Deeply customized for Kubernetes, LogDNA automatically recognizes all metadata for your Kubernetes cluster including pods, nodes, containers, and namespaces. It lets you analyze your Kubernetes cluster in real-time, and provides powerful natural language search, filters, parsing, shortcuts, and alerts.
LogDNA even mines your data using machine learning algorithms and attempts to predict issues even before they happen. This is the holy grail of log analysis, and it wasn’t possible previously. Thanks to advances in machine learning and the cloud enabling computing at this scale, it’s now a reality.
To summarize, Kubernetes is the leading container orchestration platform available today. Yet, running a production cluster of Kubernetes takes a lot of familiarity with the system and robust tooling. When it comes to log analysis, Kubernetes offers basic log collection for pods, nodes, and clusters, but for a production cluster you want unified logging at the cluster level. The ELK stack comes closest to what a logging solution for Kubernetes should look like. However, it’s a pain to maintain and runs into issues once you hit the limits of your underlying infrastructure.
For unified log analysis for Kubernetes, you need a dedicated log analysis platform like LogDNA. It comes with advanced features like powerful search, filtering, and machine learning to help you get the most out of your log data. Being a fully managed service, you can focus on your Kubernetes cluster and leave the drudge of maintaining log infrastructure to LogDNA. As you run a production Kubernetes cluster, you need a powerful log analysis tool like LogDNA to truly enjoy the promise of Kubernetes – running containers at massive scale.
Learn more about LogDNA for Kubernetes here.