SecOps

Optimize data flowing to high-cost destinations and route to storage more intelligently with Mezmo Observability Pipeline

Meet Security Demands

Security is your organization’s highest priority, so you can’t afford to miss a thing. Your SIEM needs access to every event that could be notable. In a recent ESG survey, 52% of respondents cited data capture and analysis as their top observability challenge related to adopting DevSecOps. A telemetry pipeline ensures that your SIEM platform can access all relevant events, logs, and metrics from devices showing trouble.

Improve Security Posture

Get your security information and event manager the data it needs. By routing telemetry data from all sources and transforming that data into a format suitable for security analytics, you can make your SIEM more effective. Security-related metrics can be extracted from event data, providing insights into your security posture that may have previously been missed.

Benefit 01

Increase Data Utility

Mezmo Telemetry Pipeline expands the usability of your telemetry data: it gets data into the right format, routes it to every team that needs it, and integrates with your current analytics tools. This boosts your team's ability to efficiently manage application performance, service reliability, and security, leading to increased effectiveness across the board.

Benefit 02

Accelerate Resolution Times

Improve observability by boosting the "signal to noise ratio" of your telemetry data and directing it to the appropriate team. Aligning data formats across platforms enhances collaboration and root cause identification, while data enrichment deepens problem comprehension. With a collaborative team armed with superior data, issue resolution becomes more effective.

Benefit 03

What Is Mezmo Telemetry Pipeline

Control Data

Control data volume and costs by identifying unstructured telemetry data, removing low-value and repetitive data, and using sampling to reduce chatter. Employ intelligent routing rules to send certain data types to low-cost storage.

  • Filter: Use the Filter Processor to drop events that may not be meaningful or to reduce the total amount of data forwarded to a subsequent processor or destination.
  • Reduce: Combine multiple log input events into a single event, based on specified criteria, over a specified window of time.
  • Sample: Send only the events required to understand the data.
  • Dedupe: Reduce “chatter” in logs. The Dedupe Processor works best when data overlaps across the fields being compared; it emits the first matching record from each set of compared records.
  • Route: Intelligently route data to any observability, analytics, or visualization platform.
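The processors above can be sketched conceptually in Python. This is an illustrative model of the behaviors described, not Mezmo's actual configuration syntax or API; the function names and event fields (`level`, `msg`) are hypothetical:

```python
import random

def filter_events(events, predicate):
    """Filter: drop events that don't match, reducing data forwarded downstream."""
    return [e for e in events if predicate(e)]

def sample_events(events, rate):
    """Sample: keep roughly 1 in `rate` events."""
    return [e for e in events if random.random() < 1.0 / rate]

def dedupe_events(events, fields):
    """Dedupe: emit the first matching record from each set of compared records."""
    seen, out = set(), []
    for e in events:
        key = tuple(e.get(f) for f in fields)
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out

def route_event(event):
    """Route: send certain data types to low-cost storage, the rest to the SIEM."""
    if event.get("level") == "DEBUG":
        return "low_cost_storage"
    return "siem"
```

A low-value DEBUG event would be routed to cheap storage while everything else continues to the SIEM, and deduplication collapses repeated records before they incur ingest costs.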

Transform Data

Increase your data value and optimize data flows by transforming and enriching data. Modify data as needed for compatibility with various end destinations. Enrich and augment data for better context. Scrub sensitive data, or encrypt it to maintain compliance standards.

  • Event to Metric: Create a new metric within the pipeline from existing events and log messages.
  • Parse: Various parsing options are available for operations such as converting strings to integers or parsing timestamps.
  • Aggregate metrics: Metric data often has more data points than needed to understand a system's behavior. Remove excess data points to reduce storage costs without sacrificing value.
  • Encrypt: Use the Encrypt Processor when sending sensitive log data to storage, for example, when retaining log data containing account names and passwords.
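As with the control processors, the transform steps can be sketched conceptually in Python. This is an illustrative model only, not Mezmo's processor API; the `duration=...ms` log pattern and field names are hypothetical:

```python
import re

def event_to_metric(event):
    """Event to Metric: derive a numeric data point from an existing log message."""
    m = re.search(r"duration=(\d+)ms", event["message"])
    if m:
        return {"name": "request_duration_ms", "value": int(m.group(1))}
    return None

def parse_status(event):
    """Parse: convert a string field to an integer for downstream analytics."""
    event["status"] = int(event["status"])
    return event

def aggregate_metrics(points):
    """Aggregate Metrics: collapse many data points into one summary value."""
    values = [p["value"] for p in points]
    return {
        "name": points[0]["name"],
        "avg": sum(values) / len(values),
        "count": len(values),
    }
```

For example, a stream of request logs becomes a single averaged `request_duration_ms` metric, cutting the volume sent to storage while preserving the signal.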