This is a continuation of a series where we share tidbits of our experience scaling our log management platform. Read last week’s article about how to prevent losing log lines when using Elasticsearch in production.
Organizations that handle logging at scale eventually run into the same problem: too many events are being generated, and logging components can’t keep up. Even with persistent queues and other mitigating features enabled, there’s simply not enough of a buffer between log generators and log ingesters to handle the volume of log lines coming in. Services become unresponsive, messages are dropped, and valuable data is permanently lost, which is why you end up choosing a messaging system like Apache Kafka.
What is Apache Kafka?
Apache Kafka is an open source message streaming platform commonly used as a log broker. Its popularity stems from its high throughput, resiliency, and low resource consumption. Although Kafka generally handles production log data well, scaling it becomes expensive due to its operational constraints. In this post, we’ll explore Kafka’s pros and cons, why it’s commonly used as a log streaming platform, and why there’s a need for a better solution.
Kafka’s Use Cases
There are plenty of valid reasons why organizations use Kafka to broker log data. Organizations that perform logging at scale need to deliver, parse, and index millions of log messages from hundreds of nodes. Kafka is specifically designed for this kind of distributed, high volume message stream. Using Kafka ensures your logging solution can process each event without being overwhelmed and dropping messages or placing back pressure on log producers.
This is made possible through Kafka’s publish–subscribe (pub/sub) model. Log producers (publishers) send events to Kafka, which evenly distributes events into partitions. Log consumers (subscribers) are assigned to each partition and pull events on a first in, first out basis. Consumers track their current progress in the partition, which reduces the load on Kafka while allowing consumers to freely move forward or backward through the queue.
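The mechanics above can be sketched in a few lines of plain Python. This is a toy illustration of the pub/sub model, not the Kafka client or wire protocol; the `MiniBroker` and `MiniConsumer` names are invented for this example. Events with the same key land in the same partition, and each consumer tracks its own offset, which is why it can rewind freely.

```python
import zlib

class MiniBroker:
    """Toy Kafka-style broker: a fixed set of partitions, each an ordered log."""
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def publish(self, key, event):
        # Events with the same key always hash to the same partition,
        # preserving per-key ordering.
        p = zlib.crc32(key.encode()) % len(self.partitions)
        self.partitions[p].append(event)
        return p

class MiniConsumer:
    """Each consumer owns a partition and tracks its own progress (offset)."""
    def __init__(self, broker, partition):
        self.broker = broker
        self.partition = partition
        self.offset = 0  # the consumer, not the broker, remembers its position

    def poll(self):
        log = self.broker.partitions[self.partition]
        if self.offset < len(log):
            event = log[self.offset]
            self.offset += 1  # first in, first out
            return event
        return None

    def seek(self, offset):
        # Consumers can freely move backward or forward through the queue.
        self.offset = offset
```

Because the broker never tracks per-consumer state, adding readers costs the broker almost nothing, which is a large part of why Kafka sustains such high throughput.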
Replication is built into Kafka. Each partition can have multiple replicas that mirror the main partition as closely as possible. In the event of a failure, Kafka automatically fails over to a replica, adding another layer of safety.
Kafka’s key strength is also its greatest weakness: scalability. As the number of publishers and subscribers grows, so does the complexity and overhead of deploying and maintaining a Kafka instance.
As with any application running at scale, Kafka requires a significant amount of preparation and customization at the network, hardware, OS, and application levels. Kafka’s performance and stability depend heavily on RAM capacity, disk throughput, file system tuning, and network latency. Kafka also relies on Apache ZooKeeper, which you will need to deploy separately from Kafka to avoid a single point of failure.
While it’s possible to deploy Kafka and ZooKeeper over Kubernetes, the process is fairly new and untested in the community. Kafka must be deployed as a StatefulSet in order to preserve the uniqueness of each Pod and storage volume. StatefulSets only reached general availability in Kubernetes 1.9 and introduce additional limitations impacting their scalability and persistence.
Kafka organizes messages into topics, which are further divided into partitions. Each partition is an ordered queue of messages assigned to a specific consumer. In general, more partitions lead to higher throughput at the cost of availability, latency, and memory.
However, this caps the number of active consumers in any one consumer group at the number of partitions. Once every partition has a consumer, the only way to add another consumer is to increase the partition count and assign the new consumer to one of the new partitions. Adding a member to a consumer group also triggers an automatic rebalancing of partitions across consumers.
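A small sketch makes the cap and the rebalance visible. This is a simplified round-robin assignment in plain Python, not Kafka’s actual partition assignor; the `assign_partitions` function is invented for illustration.

```python
def assign_partitions(partitions, consumers):
    """Assign partitions to consumers round-robin (simplified sketch,
    not Kafka's real assignor). Extra consumers beyond the partition
    count receive nothing and sit idle."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = [0, 1, 2, 3]

# Two consumers split four partitions.
before = assign_partitions(partitions, ["c1", "c2"])

# Adding a third consumer forces a rebalance: existing assignments change.
after = assign_partitions(partitions, ["c1", "c2", "c3"])

# A fifth consumer against four partitions is assigned nothing at all.
idle = assign_partitions(partitions, ["c1", "c2", "c3", "c4", "c5"])
```

Note that every consumer’s assignment can change during the rebalance, which is why adding capacity mid-incident carries a performance hit: consumers pause consumption while partitions are reassigned.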
This limitation is an operational nightmare, especially when Elasticsearch becomes a bottleneck and you want to add consumers to clear the queue, but don’t want to pay for additional partitions or absorb the performance hit of rebalancing.
This hard requirement of scaling up the number of partitions in order to add consumers is an expensive proposition that doesn’t scale with the log volumes LogDNA faces daily. It prompted us to design our own solution, one that lets us spin up as many consumers as needed.
Kafka aims to deliver each message only once, but this is easier said than done. As we mentioned earlier, the consumers are the ones responsible for tracking their progress through a partition. Consumers request the next available message based on their last successfully processed message, and in the event of a delivery error, a consumer can simply re-download the failed message before moving on to the next one.
The problem is that Kafka only retains messages for a set amount of time, or until the log reaches a set size on disk. If either limit is reached before a consumer pulls a message, that message is deleted.
Alternatively, if the consumer pulls a message but fails to update its tracking state, it will download duplicate messages. This might not be an issue for smaller deployments, but consider a deployment handling ten thousand messages per second. If only 1% of those messages are duplicates, that’s still 8.64 million duplicate messages being parsed, indexed, alerted on, and stored daily.
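The duplicate scenario comes down to ordering: the consumer processes a message first and commits its offset second, so a crash between the two steps causes redelivery on restart. A minimal sketch of that failure mode, in plain Python rather than the Kafka client API (the `consume` function is invented for illustration):

```python
def consume(log, committed_offset, commit_succeeds):
    """Illustrates at-least-once delivery: messages are processed first,
    and the offset is committed afterward. If the commit fails, a restart
    re-reads everything since the last committed offset."""
    processed = []
    offset = committed_offset
    while offset < len(log):
        processed.append(log[offset])   # side effect happens first...
        offset += 1
        if commit_succeeds:
            committed_offset = offset   # ...then progress is recorded
    return processed, committed_offset

log = ["msg-0", "msg-1", "msg-2"]

# First run: every message is processed, but the offset commit fails.
done, committed = consume(log, 0, commit_succeeds=False)

# Restart resumes from the stale committed offset and reprocesses everything.
redone, _ = consume(log, committed, commit_succeeds=True)
```

In a real deployment the window between processing and committing is small, but at millions of messages per second even a small window produces the duplicate volumes described above, unless every downstream step is made idempotent.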
Maintaining a large Kafka deployment requires constant monitoring and maintenance. Even the most stable and fault tolerant Kafka deployments are susceptible to sudden unexpected failures. In one event called the “Kafkapocalypse,” the team at New Relic found that their entire Kafka cluster had gone down after over a year of steady performance.
In a more recent incident, consumers were processing data far slower than producers were sending data, leading to 8-minute delays between events being created and events being processed. Resolving both situations involved creating detailed retention policies, carefully balancing throughput and replication, and using multiple tiers of alerts and escalations.
Although Kafka appears to provide a capable log brokering solution, running it at scale introduces a host of performance and stability problems.
In order to process millions of log lines per second and more than 20TB of log volume per day for our customers, the engineering team at LogDNA has faced every imaginable production issue with Elasticsearch.
In order to support the exponential growth in customer log volume, we realized we’d have to come up with our own innovative solutions. After running into limitations with scaling Kafka, we wrote our own persistent message broker that lets us scale to an effectively unlimited number of consumers and process millions of log messages per second in a more efficient, resource-conscious way.
If you are growing out of your ELK instance and hitting limits with Kafka, sign up for a 14-day free trial at LogDNA.