Dynamic Observability with OpenTelemetry and Elastic: The Case of Invisible Components

Receivers spawned dynamically by receiver_creator in the OpenTelemetry Collector often go missing from health checks and zPages, causing blind spots in Elastic Observability. Learn why this happens, how it affects logs and metrics, and what the proposed fix means for your observability pipeline.

1. Introduction

"If my OpenTelemetry collector is dynamically launching receivers, why aren’t they showing up in my healthcheck or zPages view?"

If you've asked this, you're not alone—and if you're evaluating Elastic Observability while working with dynamic receivers like receiver_creator, you may have already run into this exact issue.

In this post, we’ll cover:

  • How receiver_creator works in OTel pipelines
  • Why dynamically spawned components go invisible in health checks
  • What breaks when status reporting is missing
  • A proposed fix to make dynamic pipelines visible and reliable
  • What this means for Elastic APM, Metrics, and Logs users

Whether you're an Elastic customer evaluating OpenTelemetry or an OTel-native team trying to integrate Elastic into your pipeline, this post will walk you through a real-world edge case, and why it matters.

2. What Is receiver_creator?

The receiver_creator is a dynamic receiver in the OpenTelemetry Collector. It watches for infrastructure changes (via observer extensions like docker_observer, k8s_observer, etc.) and spins up receivers on the fly based on matching rules.

Imagine you want to:

  • Start a redis receiver every time a new Redis container spins up
  • Start a hostmetrics receiver for any new EC2 node detected
  • Collect logs from ephemeral apps without hardcoding static receivers

That’s where receiver_creator shines.

Example config:

receivers:
  receiver_creator:
    watch_observers: [docker_observer]
    receivers:
      redis/on_container:
        rule: type == "container" && port == 6379
        config:
          endpoint: "localhost:6379"
          collection_interval: 10s
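Note that this assumes a docker_observer extension is defined and enabled elsewhere in the collector config. A minimal sketch of that piece might look like the following (the socket endpoint shown is the Docker default and is an assumption):

extensions:
  docker_observer:
    # Assumption: default Docker socket path; adjust for your host
    endpoint: unix:///var/run/docker.sock

service:
  extensions: [docker_observer]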

Sounds powerful, right?

It is—but there’s a catch.

3. The Visibility Problem

Let's say your receiver_creator detects a Redis container and spins up a redis receiver. Everything appears to work... until you check:

  • /health/status from the healthcheck extension
  • /debug/pipelinez from the zpages extension
  • Kibana dashboards or internal monitoring tools

…and the dynamically started redis receiver is nowhere to be found.
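For reference, the first two endpoints come from the health check and zPages extensions. A minimal way to enable them is sketched below; the ports are the defaults, and the exact status paths can vary by extension version, so treat the details as assumptions.

extensions:
  health_check:
    endpoint: 0.0.0.0:13133   # assumption: default health check port
  zpages:
    endpoint: localhost:55679 # assumption: default zPages port

service:
  extensions: [health_check, zpages]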

Here’s what you do see:

components:
  pipeline:metrics:
    healthy: true
    components:
      exporter:debug:
        status: "StatusOK"
      receiver:receiver_creator:
        status: "StatusOK"
But where’s redis?

This is not just cosmetic. When components don’t report status, it can cause:

  • False positives in healthchecks
  • Blind spots in metrics or logs
  • Misleading dashboards in Elastic
  • Support difficulties ("is Redis receiver even running?!")

4. Why It Happens

This behavior isn’t a bug—it’s a side effect of how receiver_creator works.

Normally, components in the OpenTelemetry Collector are registered during startup. They expose their status via the ReportStatus interface and show up in extensions like healthcheck and zpages.

But receiver_creator spawns receivers at runtime by calling the receiver factories directly:

factory.CreateDefaultConfig()
factory.CreateReceiver()

These components bypass internal service registration, so they don’t integrate into the collector’s telemetry lifecycle. The result?

They’re invisible to anything trying to introspect the system.

The Workaround Isn’t Enough

Could you write custom dashboards? Sure.
Could you query internal metrics? Maybe.
But none of that scales.

Especially if you're using Elastic Observability, where you rely on:

  • Dashboards and alerts based on component health
  • Service maps and distributed traces from multiple dynamic apps
  • Built-in machine learning to spot anomalies or degradation

When data pipelines become invisible, your visibility breaks.

The Proposed Fix: Dynamic Component Status Reporting

The GitHub issue from March 2025 proposes a clear enhancement:

Add support for component status reporting for any dynamically spawned component—including those created by receiver_creator.

This could be done by:

  • Extending the service core package
  • Automatically registering new components for lifecycle management
  • Skipping manual GetFactory() calls and using a more unified service approach

This would allow:

  • Healthcheck endpoints to reflect all active receivers
  • zpages to accurately list dynamic pipeline components
  • Observability platforms like Elastic to ingest complete pipeline state
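If the proposal lands, the status output from earlier could look something like this. This is purely illustrative; the exact naming of dynamically created sub-receivers is an assumption:

components:
  pipeline:metrics:
    healthy: true
    components:
      exporter:debug:
        status: "StatusOK"
      receiver:receiver_creator:
        status: "StatusOK"
      receiver:receiver_creator/redis/on_container:
        status: "StatusOK"   # hypothetical entry for the spawned receiver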

Sample Config Used (With Redis on Container)

Putting the pieces together, here's the kind of end-to-end configuration that reproduces the issue:
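(Sketch only: the debug exporter, extension ports, and observer endpoint are assumptions rather than a verbatim copy of any original file.)

extensions:
  docker_observer:
    endpoint: unix:///var/run/docker.sock  # assumption: default Docker socket
  health_check:
  zpages:

receivers:
  receiver_creator:
    watch_observers: [docker_observer]
    receivers:
      redis/on_container:
        rule: type == "container" && port == 6379
        config:
          endpoint: "localhost:6379"
          collection_interval: 10s

exporters:
  debug:
    verbosity: detailed

service:
  extensions: [docker_observer, health_check, zpages]
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [debug]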

Everything works—except introspection.

5. What This Means for Elastic Users

If you're running Elastic Observability and using OpenTelemetry for ingestion, this is more than a collector edge case—it’s a visibility risk.

Elastic users depend on:

  • Fleet and Elastic Agent for integrated config views
  • Kibana observability dashboards for pipeline status
  • Ingest pipelines that need to track component health

When receiver_creator hides receivers, Elastic may:

  • Miss a drop in Redis metrics
  • Show incomplete service topologies
  • Fail to detect data pipeline failures in dynamic environments

6. Best Practices Until It’s Fixed

While the issue is pending enhancement, here’s what you can do:

  1. Use static receivers in staging if possible, to validate health
  2. Add custom metrics in each dynamically spawned receiver
  3. Use debug exporters to confirm output (see the sketch after this list)
  4. Add Elastic alerts on pipeline gaps or sudden drops in logs/metrics
  5. Watch this GitHub issue for progress on dynamic status reporting
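As a concrete starting point, one interim option is to raise the collector's internal telemetry level and route the dynamic pipeline through a debug exporter, so per-component internal metrics and raw output are at least visible in the collector's own telemetry and logs. A minimal sketch, assuming a recent collector build where internal metrics are exposed on the default port 8888:

exporters:
  debug:
    verbosity: detailed        # print received telemetry to the collector log

service:
  telemetry:
    metrics:
      level: detailed          # assumption: emits more granular per-component internal metrics
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [debug]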

7. The Bigger Picture: Dynamic Pipelines Need Dynamic Observability

Whether you’re monitoring a Kubernetes cluster or orchestrating ephemeral containers, your telemetry stack has to evolve.

If your architecture is dynamic, your observability layer must be equally dynamic—and visibility must extend beyond startup-time configuration.

That’s the difference between observability and monitoring.

Elastic is moving fast to support these new patterns, and the OpenTelemetry community is listening.

8. Want to Try This with Elastic?

At Hyperflex, we help teams integrate OpenTelemetry with Elastic Observability—using best practices for logs, metrics, and traces. If you're hitting issues with dynamic pipelines, sidecar patterns, or receiver visibility, let’s talk.

Reach out at marketing@hyperflex.co

We’ll help you debug smarter, faster, and more confidently with full-stack visibility.