Understanding the Three Pillars of Observability: Logs, Metrics and Traces

Many people wonder what the difference is between monitoring and observability. While monitoring is simply watching a system, observability means truly understanding a system’s state. DevOps teams leverage observability to debug their applications and troubleshoot the root cause of system issues. Peak visibility is achieved by analyzing the three pillars of observability: logs, metrics and traces. Depending on who you ask, some prefer MELT, the four pillars of essential telemetry data (metrics, events, logs and traces), but we’ll stick with the three core pillars for this piece.

Metrics

Metrics are numerical representations of data measured over a period of time, often stored in a time-series database. DevOps teams can apply mathematical modeling and forecasting to their metrics to understand how their systems behaved in the past, how they are behaving now, and how they are likely to behave in the future.

Because metrics are just numbers, they are optimized for long-term storage and can be queried easily and efficiently. Many teams build dashboards from their metrics to visualize what is happening with their systems, or use them to trigger real-time alerts when something goes wrong.
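
To make this concrete, here is a minimal sketch of an application exposing a request counter and a latency histogram with the Python prometheus_client library. The metric names, port, and simulated work are illustrative assumptions, not details from this article.

```python
# Minimal sketch: exposing application metrics with the Python
# prometheus_client library. Metric names and the port are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    """Simulate handling one request while recording metrics."""
    REQUESTS.inc()                      # count every request
    with LATENCY.time():                # record how long the work took
        time.sleep(random.uniform(0.01, 0.1))

if __name__ == "__main__":
    start_http_server(8000)             # scrape endpoint at :8000/metrics
    while True:
        handle_request()
```

A monitoring system can then scrape the /metrics endpoint on a fixed interval and evaluate dashboard queries or alert rules against the resulting time series.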


Traces

Traces help DevOps teams get a picture of how applications are interacting with the resources they consume. Many teams that use microservices-based architectures rely heavily on distributed tracing to understand when failures or performance issues occur.

Software engineers sometimes set up request tracing by adding instrumentation code that tracks and troubleshoots specific behaviors within an application. In distributed software architectures like microservices-based environments, distributed tracing can follow a request through each isolated module or service.
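
As a rough illustration, the sketch below instruments a single request path with the OpenTelemetry Python API. The service, span, and attribute names are hypothetical, and the console exporter stands in for whatever tracing backend a team actually uses.

```python
# Minimal sketch: instrumenting a request path with OpenTelemetry's Python API.
# Service, span, and attribute names are illustrative assumptions.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to the console; in production you would export to a collector
# or tracing backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def process_order(order_id: str):
    # The parent span covers the whole request; child spans cover each dependency.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # call the payment service here
        with tracer.start_as_current_span("update_inventory"):
            pass  # call the inventory service here

process_order("1234")
```

Because every child span carries the parent’s trace ID, the tracing backend can stitch spans emitted by separate services back into one end-to-end picture of the request.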


Logs

Logs are perhaps the most critical, and the most difficult to manage, piece of the observability puzzle when you’re using traditional, one-size-fits-all observability tools. They are machine-generated records produced by the applications, cloud services, endpoint devices, and network infrastructure that make up modern enterprise IT environments.
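
For illustration, the following sketch emits structured (JSON) log lines using only Python’s standard library. The logger name and fields are hypothetical, but JSON-per-line output like this is what makes downstream aggregation and indexing straightforward.

```python
# Minimal sketch: emitting structured (JSON) log lines with Python's standard
# library. Logger name and field names are illustrative.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payment-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each call emits one machine-parsable JSON line.
logger.info("payment authorized")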

While logs are simple to aggregate, storing and analyzing them with traditional tools like application performance monitoring (APM) platforms can be a real challenge.

Most APM and observability tools retain log data only for a set period (e.g., 30 days or less), because keeping it longer within these systems comes at an extremely high cost. In a distributed, cloud-based environment, however, most issues span multiple system and event logs and can persist for much longer than 30 days. Retrieving older logs also tends to rely on complex ETL or data movement processes, such as the rehydration process in Datadog.

To achieve unified observability across all three pillars, it’s important to do more than just reduce cost. You need tools for data-driven monitoring and response that can handle the sheer volume of data in modern cloud environments.


Building a Cloud-native Observability Process

Fortunately, there is a variety of cloud-native, API-first and open source tools designed to simplify observability for anyone who needs access to system and event data, whether that is CloudOps, SecOps or business analysts trying to understand how application and infrastructure performance impacts the customer experience.

Leveraging a best-of-breed observability architecture can help these teams maximize the strengths of each tool while scaling necessary processes like log analytics and keeping their costs down.

Some telemetry data, like metrics and traces, can be retained for shorter periods of time and therefore works well as a real-time alerting mechanism within tools such as Splunk, Datadog, New Relic or Dynatrace. Using these tools for logs, however, can drive ingestion and retention costs to unsustainable levels, since logs are typically retained much longer for investigation purposes. On top of that, retrieving logs from archives, such as Datadog’s rehydration process, can be unpredictably costly and time-consuming.

That’s when it makes the most sense to bring in a centralized log management system that uses low-cost cloud object storage (e.g., Amazon S3) as a data lake to retain this data for longer. This is particularly important as systems become more complex and advanced persistent security threats linger undetected within applications and infrastructure for longer periods of time.
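
As a simple sketch of that pattern, the snippet below uploads a rotated log file to an S3 bucket using boto3, partitioning keys by date. The bucket name, prefix, and file path are hypothetical placeholders.

```python
# Minimal sketch: shipping a rotated log file to an S3 bucket used as a
# long-retention data lake. Bucket, prefix, and path are hypothetical.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def ship_log_file(path: str, bucket: str = "my-log-archive"):
    # Partition keys by date so downstream indexing and queries can prune by time.
    today = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    key = f"app-logs/{today}/{path.rsplit('/', 1)[-1]}"
    s3.upload_file(path, bucket, key)
    return key

ship_log_file("/var/log/app/app.log.1")
```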


Integrating ChaosSearch with Popular Observability Tools

Log analytics software solutions like ChaosSearch work by collecting and aggregating log data, parsing and normalizing the data to prepare it for processing, and indexing the data to make it more searchable. Once log data is indexed, you can create customized queries to extract insights from your log data and drive business decision-making.
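
The parsing and normalization step can be as simple as turning a raw access-log line into named fields that are ready for indexing. The sketch below is a generic illustration of that step; the regex, sample line, and field names are assumptions, not ChaosSearch internals.

```python
# Minimal sketch of a parse/normalize step: turning a raw access-log line
# into structured fields ready for indexing. Pattern and fields are illustrative.
import re

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    match = LOG_PATTERN.match(line)
    if not match:
        return None                     # route unparsable lines elsewhere
    fields = match.groupdict()
    fields["status"] = int(fields["status"])
    fields["bytes"] = int(fields["bytes"])
    return fields

print(parse_line('10.0.0.1 - - [01/Jan/2024:12:00:00 +0000] "GET /health HTTP/1.1" 200 512'))
```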

Built for scale, ChaosSearch lets you centralize large volumes of logs and analyze them via Elastic or SQL APIs at a fraction of the cost of a solution such as Datadog. While the metrics and traces Datadog monitors are priced per host and scale only with the number of new services, logs are priced by volume, so costs scale directly with usage and add up quickly, especially in microservices architectures. The opportunity cost of managing a complex log rehydration process can also pull dedicated resources away from more critical software engineering tasks.

For teams that don’t want to disrupt their metric and trace observability platforms when they’re working well, ChaosSearch can integrate easily using open APIs for unified observability. These teams can:

  • Send logs directly to Amazon S3 or Google Cloud Storage (GCS): Send log data directly from the source, or ingest it into Datadog (or another observability tool) and use S3/GCS as the destination.
  • Connect to ChaosSearch: Grant ChaosSearch read-only access to the raw log buckets. From there, teams can create a new bucket for Chaos Index® to make their data fully searchable, or create a few object groups and views.
  • Analyze logs via Elastic or SQL APIs: Investigate issues with infrastructure and applications in the ChaosSearch console via Kibana (for troubleshooting), Superset (for relational analytics), or the Elastic and SQL APIs, as in the sketch below.
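
As a rough sketch of that last step, the snippet below runs a standard Elasticsearch-style query against a search endpoint with Python’s requests library. The URL, view name, credentials, and field names are hypothetical placeholders rather than documented ChaosSearch values.

```python
# Minimal sketch: querying indexed logs through an Elasticsearch-compatible
# search endpoint. URL, view name, credentials, and fields are hypothetical.
import requests

SEARCH_URL = "https://example.chaossearch.io/elastic/app-logs-view/_search"

query = {
    "size": 20,
    "query": {
        "bool": {
            "must": [
                {"match": {"status": 500}},                      # server errors only
                {"range": {"timestamp": {"gte": "now-7d"}}},     # last seven days
            ]
        }
    },
}

resp = requests.post(SEARCH_URL, json=query, auth=("ACCESS_KEY", "SECRET_KEY"), timeout=30)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```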

In the end, complementing your existing observability tools with a solution like ChaosSearch can end up saving your team time, money and frustration when it comes to troubleshooting and root cause analysis.

Want to learn more about how ChaosSearch works?

Get a Demo on Demand


Additional Resources

Read the Blog: Tutorial: How to Use ChaosSearch with Grafana for Observability

Listen to the Podcast: Trends and Emerging Technologies in Data Analytics

Check out the Report: Top Strategic Technology Trends for 2023: Applied Observability

About the Author, Sandro Lima

Sandro Lima is an Alliances Solutions Architect at ChaosSearch. In this role, he works closely with hyperscaler cloud service providers and ISV partners to build joint solutions and help customers solve their main challenges around data analytics. Experienced in a wide range of IT technologies, he has a particular focus on cloud computing and data analytics. Whenever he’s away from the keyboard, Sandro is having fun with his family or training for triathlon races.