Elastic Observability in Action: How a French Distributor Replaced Dynatrace and Built a GenAI Future

How a major French distributor replaced Dynatrace with Elastic Observability, rebuilt APM and APDEX KPIs, and expanded into GenAI with RAG, hybrid search, and AI observability—guided by Elasticsearch consulting best practices.

1. Why this migration matters

A large building-distribution business in France runs hundreds of applications: ERP, stock, purchasing, store operations, mediation tools, VOIP, HR, and more.
One SAP-centric application alone serves tens of thousands of users across agencies and stores, and any slowdown is felt immediately at the checkout and in the warehouse.

For years, application performance monitoring (APM) was handled by Dynatrace on a subset of servers. It worked… but it was expensive, fragmented, and increasingly hard to justify as Elastic adoption grew elsewhere in the company.

The IT leadership decided to flip the model:

  • Make Elastic Observability the primary platform for logs, metrics, traces, and user experience.
  • Replace Dynatrace without losing the business KPIs the teams relied on (especially APDEX and user “felt performance”).
  • Use GenAI and Elastic Search AI to build new services on top of the same data platform (RAG, chatbots, image de-duplication, etc.).

Hyperflex-style Elasticsearch Consulting Services were brought in to help design the roadmap, structure the migration, and de-risk each step.

2. The starting point: Dynatrace, silos, and rising costs

Before Elastic became the backbone of observability, the situation looked like this:

  • Dynatrace monitored ~200 out of ~700 servers, focusing on the most critical SAP and ERP components.
  • A separate “group” initiative was pushing Datadog as a potential standard, creating political pressure and comparisons.
  • Logs were partially centralized in Elastic, but metrics, traces, and business KPIs were scattered across tools.
  • The main business platform (an SAP-based ERP for stores and agencies) depended heavily on APDEX-style user-experience scoring to report service health to executives.

Renewal discussions made one thing clear:
continuing with multiple observability tools would be too complex and too expensive.

The question shifted from “Which APM is nicer?” to “Which platform can unify everything, including our future GenAI ambitions?”

3. Choosing Elastic as the strategic observability platform

The client was already using Elastic for:

  • Centralized logging for several platforms (including a “building platform” with on-prem Elastic for search + logs)
  • Search use cases that were starting to look strategic, not just operational

Elastic offered a few advantages:

  • A unified data plane for logs, metrics, traces, and search
  • A pricing model based on Elastic Consumption Units (ECUs) that could be reallocated to new use cases like APM, Synthetic Monitoring, and GenAI
  • Native capabilities like Elastic APM, SLOs, ML/AIOps, and hybrid search for RAG and vector search use cases later on

At group level, Datadog eventually won a central RFP. But at the BU level, this customer deliberately chose to bet on Elastic as its observability and search/AI platform, with clear executive sponsorship (CIO and CTO) and local champions.

That’s the reality many enterprises live in: one group strategy, and then local platforms that actually get things done.

4. Designing a real “success plan” instead of a one-off project

Instead of “install Elastic APM and hope for the best”, the teams defined a multi-year success plan with clear milestones:

  • Scope definition
    • Cover ~30 business-critical applications with Elastic Observability
    • Migrate off Dynatrace by a fixed date (summer of the following year)
    • Eventually reach 500–600 servers monitored with Elastic, while decommissioning legacy platforms
  • Enablement and training
    • Private Elastic trainings for cross-functional teams (observability, SAP, mediation, HR, innovation)
    • Hands-on Kibana workshops so each team could own its dashboards and SLOs
    • Regular “Elastic Day” on-site events with customer speakers, live demos, and technical workshops
  • Consulting sprints
    • Short, focused engagements for APM rollout, Kubernetes design, SLOs, and later GenAI/RAG
    • Iterative tuning rather than a massive, one-shot delivery

This is exactly the type of engagement Elasticsearch Consulting Services excel at: not just “deploy the agents”, but align platform, people, and roadmap so adoption actually sticks.

5. Replacing Dynatrace with Elastic APM (and rebuilding APDEX)

The hardest part of any APM migration isn’t agents or dashboards.
It’s translating business KPIs so leaders still trust the numbers after the switch.

The flagship SAP-based application used Dynatrace to:

  • Trace end-to-end transactions
  • Score each user journey as satisfactory / tolerable / unsatisfactory
  • Roll this up into an APDEX-like index that the entire IT organization understood

The migration to Elastic looked roughly like this:

  1. Instrument key flows with Elastic APM
    • Use Elastic Agents and language-specific APM agents for SAP-related services and mediations (a minimal agent-setup sketch follows this list)
    • Capture spans and transactions that match existing Dynatrace “satellites”
  2. Recreate user journeys and APDEX-style KPIs
    • Model key paths (e.g., search stock, create order, validate invoice) as Elastic APM transactions
    • Convert latency + error conditions into “satisfied / tolerable / frustrated” categories (see the scoring sketch after this list)
    • Build Kibana dashboards that expose a single health score per application and per user journey
  3. Parallel run and calibration
    • Run Dynatrace and Elastic in parallel for a period
    • Compare APDEX curves and incident detection side by side
    • Adjust aggregations and thresholds until the business recognized the same story in Elastic
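
To make step 1 concrete, here is a minimal sketch of instrumenting one business flow, assuming a Python-based mediation service and the official elastic-apm agent. The service name, APM Server URL, and flow are placeholders; the customer’s real services may well use other language agents.

```python
# Minimal sketch (assumptions: Python service, official elastic-apm agent).
import elasticapm

client = elasticapm.Client(
    service_name="erp-mediation",                    # hypothetical service name
    server_url="https://apm.example.internal:8200",  # placeholder APM Server URL
    secret_token="<apm-token>",
    environment="production",
)
elasticapm.instrument()  # auto-instrument supported libraries (HTTP clients, DB drivers, ...)

def validate_invoice(invoice_id: str) -> None:
    """One business flow, recorded as a named APM transaction."""
    client.begin_transaction("business")
    try:
        with elasticapm.capture_span("fetch-invoice", span_type="db"):
            ...  # call the SAP / ERP backend here
        client.end_transaction("validate-invoice", result="success")
    except Exception:
        client.end_transaction("validate-invoice", result="failure")
        raise
```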

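For step 2, here is a minimal sketch of turning those transactions into an Apdex-style score with the elasticsearch Python client. The 500 ms threshold, service, and journey names are assumptions; traces-apm* and transaction.duration.us are standard Elastic APM fields, but verify both the fields and the thresholds against your own data streams before reporting the numbers.

```python
# Minimal sketch: Apdex-style scoring from Elastic APM transaction documents.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.internal:9200", api_key="<api-key>")
T_US = 500_000  # assumed Apdex threshold T = 500 ms, in microseconds

resp = es.search(
    index="traces-apm*",
    size=0,
    query={"bool": {"filter": [
        {"term": {"service.name": "erp-storefront"}},    # hypothetical service
        {"term": {"transaction.name": "create-order"}},  # one key user journey
        {"range": {"@timestamp": {"gte": "now-1h"}}},
    ]}},
    aggs={"apdex_buckets": {"range": {
        "field": "transaction.duration.us",
        "ranges": [
            {"key": "satisfied", "to": T_US},
            {"key": "tolerable", "from": T_US, "to": 4 * T_US},
            {"key": "frustrated", "from": 4 * T_US},
        ],
    }}},
)

buckets = {b["key"]: b["doc_count"] for b in resp["aggregations"]["apdex_buckets"]["buckets"]}
total = sum(buckets.values()) or 1
apdex = (buckets.get("satisfied", 0) + buckets.get("tolerable", 0) / 2) / total
print(f"Apdex(create-order, last hour) = {apdex:.2f}")
```
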
By the time the Dynatrace contract reached its end, Elastic APM was fully in production on the main platforms, and the teams were already using it to monitor new apps that never touched Dynatrace.

What about Kubernetes?

As the client modernized, APM had to follow workloads onto Kubernetes.

  • A first wave of consulting support focused on getting Elastic on K8s “good enough”.
  • Feedback from the customer was brutally honest: Elastic expertise was strong, but Kubernetes expertise needed to be deeper.
  • A second wave with the right combination of skills fixed the architecture and gave the client reliable guidance for production clusters.

Lesson learned: for complex rollouts, you don’t just need “Elastic experts”. You need Elastic + platform (K8s, SAP, Azure…) expertise in the same room.

6. From observability to GenAI: RAG chatbots and hybrid search

In parallel with observability, the innovation team built a RAG-based “ChatDoc” chatbot on Elastic:

  • Use case: help support teams navigate large volumes of internal documentation about the ERP and its ecosystem.
  • Architecture:
    • Documents indexed in Elasticsearch
    • Hybrid search combining BM25 (keyword search) and vector search to handle both precise terms and natural language questions
    • Iterative tuning on mappings, analyzers (French language), synonyms, and domain-specific vocabulary (internal product names, acronyms, etc.); a mapping sketch follows this list
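
As a rough illustration of that index design, here is a minimal sketch using the elasticsearch Python client: a custom French analyzer, a small synonym list for internal jargon, and a dense_vector field for embeddings. The index name, synonyms, and vector dimensions are illustrative, not the customer’s actual configuration.

```python
# Minimal sketch: index design for French text + embeddings (illustrative values).
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.internal:9200", api_key="<api-key>")

es.indices.create(
    index="chatdoc-erp",  # hypothetical index name
    settings={"analysis": {
        "filter": {
            "erp_synonyms": {"type": "synonym",
                             "synonyms": ["bl, bon de livraison", "cde, commande"]},  # sample jargon
            "french_light_stemmer": {"type": "stemmer", "language": "light_french"},
        },
        "analyzer": {"french_erp": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["lowercase", "erp_synonyms", "french_light_stemmer"],
        }},
    }},
    mappings={"properties": {
        "content":        {"type": "text", "analyzer": "french_erp"},
        "content_vector": {"type": "dense_vector", "dims": 384,  # dims depend on your embedding model
                           "index": True, "similarity": "cosine"},
        "tags":           {"type": "keyword"},
    }},
)
```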

This started as a “sandbox” project led almost entirely by the customer’s innovation lead, who experimented quickly but needed:

  • A solid understanding of Elastic hybrid search patterns
  • Help diagnosing relevance failures when internal jargon broke naive semantic search
  • Guidance on scaling from prototype to something robust enough for production

Consulting workshops focused on:

  • Proper index design (French analyzers, custom tokenization, tags)
  • Query patterns (RRF and linear retrievers, BM25 + vector scoring); see the query sketch after this list
  • Good practices for chunking, A/B testing, and feedback loops
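
Here is a minimal sketch of that hybrid query pattern: a BM25 match and an approximate kNN search fused with an RRF retriever (available in recent 8.x releases). It assumes the index from the earlier mapping sketch and a hypothetical embed() helper that calls your embedding model; with an Elastic inference endpoint or ELSER the query would look different.

```python
# Minimal sketch: hybrid retrieval (BM25 + kNN) combined with reciprocal rank fusion.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.internal:9200", api_key="<api-key>")

def embed(text: str) -> list[float]:
    """Hypothetical helper: call your embedding model here (must match dims=384)."""
    raise NotImplementedError

question = "Comment annuler un bon de livraison déjà validé ?"
question_vector = embed(question)

resp = es.search(
    index="chatdoc-erp",
    retriever={"rrf": {
        "retrievers": [
            # Lexical leg: BM25 over the French-analyzed text field
            {"standard": {"query": {"match": {"content": question}}}},
            # Semantic leg: approximate kNN over the dense_vector field
            {"knn": {"field": "content_vector", "query_vector": question_vector,
                     "k": 20, "num_candidates": 100}},
        ],
        "rank_window_size": 50,
    }},
    size=5,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("tags"))
```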

The result: a production chatbot used daily by support teams, with high user satisfaction and a clear roadmap to extend the same pattern to other business domains.

7. AI observability: watching the LLM that watches your apps

Once you put GenAI in front of users, a new question appears:

Who is watching the chatbot?

The customer wanted AI observability for their LLM workloads:

  • Track latency, error rates, and usage per team
  • Watch for cost anomalies (token usage spikes)
  • Get visibility into prompts and responses to detect hallucinations and risky patterns

Elastic has been investing heavily here with integrations like Azure OpenAI monitoring, which allows you to ingest Azure OpenAI logs and metrics into Elastic Observability, then build dashboards, alerts, and SLOs around LLM performance and cost.
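
As a sketch of what that enables, the query below looks for token-usage spikes per Azure OpenAI deployment with the elasticsearch Python client. The data stream pattern and field names are assumptions about what the integration ships and should be checked against your own indices before wiring alerts to them.

```python
# Minimal sketch: crude token-usage spike detection per LLM deployment.
# Index pattern and field names below are ASSUMED; verify them against the
# Azure OpenAI integration's actual data streams.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.internal:9200", api_key="<api-key>")

resp = es.search(
    index="logs-azure_openai.*",  # assumed data stream pattern
    size=0,
    query={"range": {"@timestamp": {"gte": "now-24h"}}},
    aggs={"per_deployment": {
        "terms": {"field": "azure.open_ai.properties.deployment_name"},  # assumed field
        "aggs": {"tokens_per_hour": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1h"},
            "aggs": {"total_tokens": {
                "sum": {"field": "azure.open_ai.properties.total_tokens"}  # assumed field
            }},
        }},
    }},
)

for dep in resp["aggregations"]["per_deployment"]["buckets"]:
    hourly = [b["total_tokens"]["value"] for b in dep["tokens_per_hour"]["buckets"]]
    if hourly and max(hourly) > 3 * (sum(hourly) / len(hourly)):  # naive spike heuristic
        print(f"Possible token spike on deployment {dep['key']}")
```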

This fits perfectly into their strategy:

  • Same Elastic platform
  • Same observability stack
  • New layer of LLM observability and AI safety on top

At this point, Elastic isn’t just replacing Dynatrace.
It’s becoming the operational nervous system for both classic applications and GenAI services.

8. Results after two years

Summarizing a long story into a few bullet points, the customer achieved:

  • Full Dynatrace replacement on their key domains, with APDEX-style KPIs rebuilt in Elastic and recognized by the business
  • Hundreds of servers monitored via Elastic Observability (APM, logs, metrics, functional monitoring) across on-prem and Kubernetes
  • A growing roster of internal champions (architects, innovation leads, app owners) trained on Elastic and driving adoption from the inside
  • A production RAG chatbot on Elastic with strong user satisfaction and an expansion roadmap to multiple business areas
  • Regular Elastic Days, workshops, and trainings that keep executives, architects, and engineers aligned on the platform’s value
  • A clear path toward LLM/AI observability and GenAI governance using Elastic’s evolving capabilities

Commercially, the engagement translated into:

  • Larger Elastic commitments (more ECUs, more credits for consulting & training)
  • A stable, predictable roadmap rather than reactive “firefighting” purchases
  • A partnership mindset between the customer, Elastic, and the consulting partner instead of a simple vendor relationship

9. What this means for your Elastic journey

If you’re still running Dynatrace (or any other APM tool) on a shrinking subset of your estate while Elastic quietly powers your logs and search, you’re in the same position this customer was in two years ago.

Their experience suggests a few concrete moves:

  1. Choose a single strategic platform
    Decide whether Elastic Observability will be “just logs” or the platform for logs, metrics, traces, and GenAI-driven search. Half-measures are where cost and complexity explode.
  2. Rebuild business KPIs first, not last
    Migrating APDEX-style KPIs early ensures executives stay on board. If they trust the dashboards, everything else is easier.
  3. Invest in enablement, not just licenses
    Private trainings, on-site Elastic Days, and focused workshops created genuine internal ownership. That’s where Elastic really outpaced its competitors.
  4. Think beyond observability: Search + GenAI + AI observability
    The same Elasticsearch cluster that holds your logs can power RAG chatbots, semantic search, and AI telemetry. That’s where the long-term ROI lives.

Hyperflex and similar Elasticsearch Consulting Services specialize in exactly this kind of journey:

  • Designing the success plan
  • Executing the Dynatrace (or other APM) migration safely
  • Building GenAI and hybrid search use cases that actually get used
  • Adding AI observability so you can trust your LLMs in production

Hyperflex helps teams scale Elastic fast—with confidence.
If you’re planning your own Dynatrace-to-Elastic migration or want to bring GenAI onto your Elastic stack, contact us to explore how we can support your Elastic journey.