ObservabilityCON Day 3 recap: What's new in Loki 2.0, tracing made easy with Tempo, observability at the Financial Times, and a Minecraft NOC

Today is the last day of ObservabilityCON 2020! We hope you’ve had the chance to catch the talks so far, and will tune in live for today’s sessions. View the full schedule on the event page, and for additional information on viewing, participating in Q&As, and more, check out our quick guide to getting the most out of ObservabilityCON. If you aren’t up to date on the presentations so far, here’s a recap of day three of the conference.


Decision making between Jaeger and Zipkin

There’s always something we can do to repair an old car, right? We all learn what can go wrong, and the signals we get are pretty straightforward about pointing to the problem (smoke coming from the engine tells you to stop). Newer, more digital cars, on the other hand, are so complicated that you can’t understand what’s going wrong when they signal that they need to be repaired.


Announcing Grafana Tempo, a massively scalable distributed tracing system

Grafana Labs is proud to announce an easy-to-operate, high-scale, and cost-effective distributed tracing system: Tempo. Tempo is designed to be a robust trace ID lookup store whose only dependency is object storage (GCS/S3). Join us in the Grafana Slack #tempo channel or the tempo-users Google group to get involved today!
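Because object storage is Tempo's only dependency, pointing it at a bucket is the heart of its configuration. The fragment below is an illustrative sketch only; the bucket name and endpoint are placeholders, not values from the Tempo documentation:

```yaml
storage:
  trace:
    backend: s3              # or "gcs" for Google Cloud Storage
    s3:
      bucket: my-tempo-traces     # placeholder bucket name
      endpoint: s3.amazonaws.com  # placeholder endpoint
```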

Leveraging Tracing in Kubernetes and Containerized Environments | Kubernetes Virtual Summit

Presented by: Ron Yishai, Director of Cloud Engineering at Epsagon. Microservices systems running on Kubernetes and containerized environments are complex and hard to monitor and troubleshoot. Join us as we discuss the growth in adoption of K8s and containers and the challenges they have presented to us all, focusing on why standard metrics by themselves leave gaps in your observability strategy.

Honeycomb Learn Ep. 3: See The Trace? Discover Errors, Latency & More across Distributed Systems

Distributed systems bring complexity for developer and ops teams. When incidents occur in production, expected or unexpected, you want to pinpoint which part of the service is causing problems. Distributed tracing illuminates distributed systems, making your logs easier to navigate. Quickly identify errors or latency in your code or service, even within third-party services you use. Instrumentation is the key to the best tracing experience possible.
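To make the idea concrete, here is a toy sketch in plain Python (not Honeycomb's instrumentation or any real tracing library) of the core mechanism: every span in one request shares a trace ID, so you can group spans by trace and pinpoint where latency concentrates. All names and numbers here are invented for illustration:

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    trace_id: str            # shared by every span in one request
    name: str                # operation, e.g. "db.query"
    parent: Optional[str]    # name of the parent span, None for the root
    self_time_ms: float      # time spent in this span itself (excluding children)
    error: bool = False

def slowest_span(spans, trace_id):
    """Return the span contributing the most latency within one trace."""
    mine = [s for s in spans if s.trace_id == trace_id]
    return max(mine, key=lambda s: s.self_time_ms)

# One simulated request flowing through three operations
tid = uuid.uuid4().hex
spans = [
    Span(tid, "api.handler", None, 23.0),
    Span(tid, "auth.check", "api.handler", 12.0),
    Span(tid, "db.query", "api.handler", 95.0),  # the hot spot
]
print(slowest_span(spans, tid).name)  # db.query
```

Real tracing systems do the same grouping at scale, and additionally propagate the trace ID across process boundaries in request headers so spans from different services join the same trace.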

Going Beyond the Three Pillars of Observability

The three pillars of observability have proven themselves a crucial way to trend, tune, and troubleshoot systems large and small. These are critical day-to-day operations in millions of organizations, as attested by years of collective infrastructure management experience. But as globally-distributed infrastructures become more common, are they enough? Just to make sure we are all on the same page, the three pillars are: logging, metrics, and tracing.


AWS Distro for OpenTelemetry will send metrics and traces to Datadog

Datadog has a long-standing commitment to open standards. Our integrations with OpenMetrics, JMX, and WMI, as well as our implementation of the tried-and-true StatsD protocol, enable you to collect data with the tools and libraries that fit best into your workflows.
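As a flavor of how lightweight the StatsD protocol is, here is a minimal sketch of its plain-text wire format in Python. The metric names are invented, and the localhost address is illustrative (a StatsD agent conventionally listens on UDP port 8125):

```python
import socket

def statsd_line(name: str, value, metric_type: str, sample_rate: float = 1.0) -> str:
    """Format one metric in the plain-text StatsD protocol.

    metric_type: "c" (counter), "g" (gauge), "ms" (timer), "s" (set).
    """
    line = f"{name}:{value}|{metric_type}"
    if sample_rate < 1.0:
        line += f"|@{sample_rate}"  # tell the server this metric was sampled
    return line

def send_metric(line: str, host: str = "127.0.0.1", port: int = 8125) -> None:
    # StatsD is fire-and-forget: one UDP datagram per metric, no reply expected.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("ascii"), (host, port))

send_metric(statsd_line("page.views", 1, "c"))            # counter increment
send_metric(statsd_line("request.time", 320, "ms", 0.5))  # sampled timer
```

The fire-and-forget UDP design is what makes StatsD cheap to emit from hot code paths: if the agent is down, metrics are silently dropped rather than slowing the application.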