By: Thu Nguyen
Read Time: 6 min
At LogDNA, we’re all about speed. We need to ingest, parse, index, and archive several terabytes of data per second. To reach these speeds, we need to find and implement innovative solutions for optimizing all steps of our pipeline, especially when it comes to storing data.
Our architecture makes heavy use of Kubernetes since it lets us deploy, manage, and scale quickly. However, we soon ran into bottlenecks when attempting to store data for use in Pods. To maximize efficiency and reduce costs, we needed to make use of local storage.
To overcome this problem, we developed a solution for letting Pods store data directly on the node’s disk. This solution lets us deploy high-volume workloads without breaking a key factor of microservices.
In this post, we’ll explain how our local storage solution works and how it enables us to handle enormous volumes of data without breaking Kubernetes.
Kubernetes has a number of options for letting Pods store persistent data, but Kubernetes itself was originally designed to support only network storage. This is fine for most use cases on cloud infrastructure, and while network speeds have gotten significantly faster over time, there’s still an additional latency factor and a risk of bottlenecks due to network traffic. High-performance applications need a faster data storage solution, and using the node’s local disk is the best alternative.
Before the local volume type was introduced, the only way for a Pod to access local storage was by:
- Using an emptyDir volume, which is node-local but ephemeral: its contents are deleted when the Pod is removed.
- Using a hostPath volume, which mounts a file or directory that must already exist on the node.
These options either have limited functionality for our needs, or they required us to configure the host in advance. In both cases, our ability to automate deployments was limited. The local volume type aims to resolve some of these issues, but it’s currently still in beta. We needed a solution that could be deployed as a Kubernetes workload, didn’t require node-specific configurations, and could also handle scaling.
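To make the trade-offs concrete, here is a hypothetical Pod spec (the Pod name, image, and paths are illustrative only) showing both of these volume types side by side:

```yaml
# Hypothetical Pod spec illustrating the two classic local-storage options.
apiVersion: v1
kind: Pod
metadata:
  name: local-storage-demo        # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: scratch
          mountPath: /scratch
        - name: host-data
          mountPath: /data
  volumes:
    # emptyDir: node-local, but deleted along with the Pod
    - name: scratch
      emptyDir: {}
    # hostPath: survives the Pod, but the directory must be
    # prepared on the host in advance, tying deployment to node setup
    - name: host-data
      hostPath:
        path: /mnt/disks/data     # hypothetical path
        type: Directory
```

Neither option gives us durable, node-local storage without pre-configuring the host, which is the gap our solution fills.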
Implementing a disk-based storage solution in Kubernetes poses several challenges.
Our first challenge was determining how to provision storage for Pods. Each Pod requires a certain amount of disk space to complete its tasks. At the same time, we didn’t want a single Pod to consume a disproportionate amount of disk space by generating an excessive amount of data. By partitioning disks and assigning each Pod to a partition, we solve both of these problems.
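In PersistentVolumeClaim terms, this looks roughly like the sketch below: each Pod claims a fixed amount of storage, and backing that claim with a dedicated partition of that size is what actually enforces the cap. Names and sizes here are hypothetical:

```yaml
# A Pod requests a fixed slice of local disk through a PVC.
# Backing the claim with a partition of matching size prevents
# any one Pod from consuming more than its share of the disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pod-data                  # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage # hypothetical class name
  resources:
    requests:
      storage: 50Gi               # one partition's worth of space
```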
Our next challenge was scheduling Pods. Since Pods are directly linked to host resources, we’re unable to easily migrate workloads across nodes. Once a Pod is scheduled on a node, it’s there for good. However, this doesn’t mean we don’t have any replication. Instead of relying on Kubernetes or a network storage solution to replicate our data, we handle replication within the application itself. This is similar to how applications like Elasticsearch handle clustering and replication internally.
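The node-pinning behavior described above is what the beta local volume type expresses through nodeAffinity: a PersistentVolume is bound to one node, so any Pod that claims it is scheduled there and stays there. A minimal sketch, with hypothetical volume and node names:

```yaml
# Sketch of a local PersistentVolume. nodeAffinity pins the volume --
# and any Pod that claims it -- to a single node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-0                # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage # hypothetical class name
  local:
    path: /mnt/disks/part0        # a partition prepared on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]  # hypothetical node name
```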
This leads into our final challenge, which was tracking changes to Pods and their associated partitions. More specifically, handling Pod or node failures. Since partitions are linked to specific Pod instances, we can’t easily reuse a partition in the event of a Pod failure. This is less of a problem for us since our containerized application already handles replication, but it does mean an unused partition is left on the node. If we can restart the failed Pod, then it will mount the partition and continue working as before. But if we can’t restart it, then we need a way to identify and clean up the unused partition so another Pod can take its place.
At the heart of our solution is a DaemonSet running a service that monitors and partitions disk space for incoming Pods. Deploying our solution as a DaemonSet ensures that it runs on each node in our cluster, even if we add new nodes. The DaemonSet has two responsibilities:
- Creating a partition for each incoming Pod and mounting it into the Pod.
- Monitoring partitions and cleaning up any left behind by Pods that can’t be restarted, so another Pod can take their place.
Instead of mounting a file or directory, the DaemonSet mounts the newly created partition to the Pod. Using disk partitions instead of directories lets us set aside a specific amount of disk space for each Pod and separate Pod storage for easier isolation and cleanup.
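A minimal sketch of what such a DaemonSet might look like; the name and image are hypothetical, since the actual partitioning service is internal to our platform:

```yaml
# Sketch of a node-local partition-manager DaemonSet.
# It needs privileged access to the node's block devices
# in order to create, format, and mount partitions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-partitioner          # hypothetical name
spec:
  selector:
    matchLabels:
      app: disk-partitioner
  template:
    metadata:
      labels:
        app: disk-partitioner
    spec:
      containers:
        - name: partitioner
          image: example.com/disk-partitioner:latest  # hypothetical image
          securityContext:
            privileged: true      # required to partition and mount disks
          volumeMounts:
            - name: dev
              mountPath: /dev     # the node's block devices
      volumes:
        - name: dev
          hostPath:
            path: /dev
```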
Local storage is a growing topic in Kubernetes, especially for applications with high I/O requirements. Although local volumes are available in Kubernetes today, they’re still in beta and don’t yet support dynamic provisioning.
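Because there is no dynamic provisioner for local volumes, a StorageClass for them uses the no-provisioner placeholder and delays binding until a Pod is scheduled, so the scheduler can pick a node with a matching volume. The class name below is hypothetical:

```yaml
# Local volumes must be provisioned out of band, so the StorageClass
# uses the "no-provisioner" placeholder. WaitForFirstConsumer delays
# binding until Pod scheduling, keeping Pod and volume on the same node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage             # hypothetical name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```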
We’re interested in seeing how local storage develops in Kubernetes, and how it affects the performance of disk-heavy workloads in the future. Get in touch through the Kubernetes Slack if you’re working on or have use cases for dynamic provisioning in Kubernetes.