By: Thu Nguyen
Read Time: 6 min
Containers provide a nifty way to package an application together with its dependencies so that the whole encapsulated bundle can run on a host system. The technology is popular for good reason: it lets developers build flexible, scalable, reliable solutions in less time, gives teams more freedom in choosing the technologies behind their applications, and brings development and production environments closer to parity.
A containerized application is managed by a container engine, which has the important task of abstracting the application away and bundling it up in a self-contained manner; it runs on an environment layer situated between the application and the hosting server (which can be a virtual machine or a bare-metal server). A traditional application, by contrast, runs directly on VMs or bare-metal servers.
This paradigm shift in development necessitates a new way of monitoring our applications. Traditional monitoring tools and strategies built for physical and virtual hosting environments are generally insufficient in the world of containers. This article will explore what has changed and what has remained relatively stable in application and systems monitoring.
Monitoring from a qualitative perspective can be assessed at a high level, in a black-box manner: is the service up and running? For example, does this HTTP server return a 200 OK for this specific URI? It can also be assessed at a lower level: is this machine or this process running? Is this log file updated regularly? From the high-level point of view, traditional and containerized apps are assessed the same way.
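The high-level, black-box check described above can be sketched in a few lines of Python using only the standard library. This is a minimal illustration, not a production probe; the URL you pass in is whatever endpoint your service exposes.

```python
import urllib.request


def is_service_up(url: str, timeout: float = 5.0) -> bool:
    """Black-box check: does this URI return a 200 OK?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # Connection refused, DNS failure, timeout, non-2xx raising, etc.
        return False
```

A monitoring loop would call this periodically and alert when it flips to `False`; note that the same check works identically whether the service runs on bare metal, in a VM, or in a container.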
Monitoring through a quantitative lens means concerning yourself with things like how many resources are being used and how quickly responses arrive. From a low-level perspective, monitoring becomes starkly different for containerized applications: many traditional mechanisms don’t map well to containers. For instance, a container has no simple on/off state to assess, and we often cannot ping a container directly.
To compensate for this, container platforms offer health check mechanisms that are more useful than simply pinging a server to see whether it is up. Docker and Kubernetes both have health check tools built into their systems that report on whether everything is working as it should. Leveraging these health check features with a good feedback loop reduces risk by enabling early diagnosis, allowing problems to be caught and solved before they get out of hand.
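In Kubernetes, these health checks take the form of liveness and readiness probes declared in the pod spec. The fragment below is an illustrative sketch (the image name, port, and `/healthz` path are placeholders for whatever your application actually exposes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example/web:latest   # placeholder image
    livenessProbe:              # restart the container if this fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:             # stop routing traffic until this passes
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 5
```

The distinction matters for the feedback loop: a failing liveness probe triggers a restart, while a failing readiness probe merely takes the container out of the service's load-balancing rotation.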
Metrics such as response time and error rate carry over unchanged from traditional to containerized applications, but resource metrics differ between containers and virtual machines: memory usage, for instance, does not mean the same thing in both worlds and often cannot be compared directly. Container resource utilization is interesting because containers introduce multiple isolation boundaries. This is a crucial matter because you need to consider both the CPU utilization of an individual container instance and the aggregate usage of the host it is running on.
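The per-container versus host-aggregate distinction can be made concrete with a small sketch. Here the sample readings are invented (in practice you would parse them from something like `docker stats` or the kubelet metrics endpoint), and each value is assumed to be a percentage of the host's total CPU capacity:

```python
def host_cpu_utilization(per_container: dict) -> float:
    """Aggregate per-container CPU percentages into a host-level figure.

    Assumes each value is a share of the host's total CPU capacity,
    so the aggregate is a simple sum, capped at 100%.
    """
    return min(sum(per_container.values()), 100.0)


# Hypothetical per-container readings for one host.
samples = {"api": 12.5, "worker": 30.0, "cache": 4.5}
print(host_cpu_utilization(samples))  # 47.0
```

A real monitoring pipeline tracks both views: the per-container numbers tell you which workload is hungry, while the host aggregate tells you when the machine itself is running out of headroom.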
Containerized applications most likely require highly dynamic time-series management systems. This need is not specific to containers, but it becomes nearly impossible to avoid when working with them, since adding and removing instances all the time is normal with containers and rare in traditional application infrastructures.
To keep track of all of these moving parts and the fleeting nature of the data they generate, it is vital to embed monitoring into containers right from the beginning and to keep tabs on the large volume of logs and time-series metrics, which in turn require speedy analysis. Monitoring tools of a more traditional variety tend to fall short in dynamic environments like these.
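One consequence of this churn is that a time-series store for containers must expect series to appear and vanish constantly. The toy sketch below (all names and the pruning policy are invented for illustration) keys each series by metric name plus labels such as a container ID, and prunes series that have stopped reporting rather than keeping them forever:

```python
import time


class MetricStore:
    """Toy time-series store keyed by metric name plus labels.

    Containers come and go, so series belonging to instances that
    have stopped reporting are pruned after `stale_after` seconds.
    """

    def __init__(self, stale_after: float = 60.0):
        self.stale_after = stale_after
        # (metric, sorted label items) -> (last_seen_timestamp, [values])
        self.series = {}

    def record(self, metric, value, now=None, **labels):
        now = time.time() if now is None else now
        key = (metric, tuple(sorted(labels.items())))
        _, values = self.series.get(key, (now, []))
        values.append(value)
        self.series[key] = (now, values)

    def prune(self, now=None):
        now = time.time() if now is None else now
        self.series = {
            k: v for k, v in self.series.items()
            if now - v[0] <= self.stale_after
        }
```

Real systems such as Prometheus handle this with label-based series identity and staleness handling; the point of the sketch is simply that series lifetime, not just series value, becomes a first-class concern.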
The advent of containerized applications has brought a slew of trade-offs. It has streamlined application development and deployment, but it has also posed new and unique challenges for effective monitoring. As more and more applications shift to the cloud and make use of containers, we need a new way of thinking about lifecycle management and an adoption of tools and strategies tailored to this dynamic climate.
Coming up with a centralized logging strategy is critical for containerized applications. For Kubernetes, check this post to see the top metrics to log. LogDNA is a centralized log management platform that offers the simplest integration for Kubernetes, with just two kubectl lines.