Pump to Plug — Part VII: The Data Behind the Dashboard


Brad Juhasz

Why EV Charging Data Is More Fragmented Than the Chargers

At nearly every conference booth, vendor presentation, and network demo, the same visual appears: a dashboard showing the health of a charging network. Chargers are represented as green or red dots. Color-coded maps show network ubiquity. Uptime percentages are prominently displayed. Session counts, utilization curves, and fault indicators suggest a system that is visible, measurable, and under control.

The visual language is reassuring. It implies that the system can be seen, and therefore managed.

But that implication deserves closer scrutiny.

A dashboard reflects the signals available to the system presenting it. It does not necessarily reflect the full operational state of the infrastructure it claims to monitor. Over the course of conversations at EVCS, a consistent theme emerged across manufacturers, network providers, and service organizations: the data that underpins those dashboards is more fragmented than the infrastructure itself.

What appears as a coherent operational view is, in practice, a stitched-together representation of partial information.


Charging infrastructure does not produce a single stream of data. It produces many.

  • There is firmware-level telemetry inside the charger, often rich with component-level detail. 
  • There is the OCPP communication stream, which carries status updates, in-session detail, and commands between the charger and the network. 
  • There are CSMS-level interpretations of those signals, which translate raw messages into operational states. 
  • There are manufacturer-specific diagnostic tools that access deeper layers of the system. 
  • There is field service knowledge—often tribal in nature—technicians who have seen similar failures before and recognize patterns that are not obvious in logs. 

And beyond the charger itself, there are grid conditions and vehicle behaviors that influence how the system performs.
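
To make the layering concrete, here is a hypothetical sketch of how a single over-temperature event might surface at three of those layers. Every field name and value is invented for illustration; real firmware logs, OCPP payloads, and CSMS states vary by vendor.

```python
# One hypothetical over-temperature event, as seen at three layers.
# All names and values are illustrative, not taken from any real vendor.

firmware_log = {
    # Component-level telemetry: visible only to the manufacturer's tools.
    "timestamp": "2025-03-14T10:02:11Z",
    "board": "power-stage-2",
    "sensor": "igbt_temp_c",
    "reading": 96.4,
    "threshold": 90.0,
    "action": "derate_then_trip",
}

ocpp_view = {
    # What crosses the wire to the network: a state plus a coarse code.
    "connectorId": 1,
    "status": "Faulted",
    "errorCode": "HighTemperature",
    "timestamp": "2025-03-14T10:02:12Z",
}

csms_view = {
    # What the operator's dashboard ultimately renders: a red dot.
    "station": "SITE-042/CHG-03",
    "state": "FAULTED",
    "dot_color": "red",
}
```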

These layers do not collapse into a single, unified system of record. Each participant in the ecosystem sees a slice of the whole. The network operator sees one version of reality. The manufacturer sees another. The technician on-site sees something different still.

The system is distributed. Visibility is not.


OCPP has been an essential part of making this ecosystem viable. Without it, multi-vendor charging networks would struggle to function at all. It provides a common language for communication between chargers and network management systems, enabling session management, status reporting, and basic control. However, it is important to understand what that language was designed to do—and what it was not.

In discussions with EVSE manufacturers at EVCS, a recurring estimate emerged: somewhere between fifty and seventy-five percent of the information required for effective triage is either not exposed through the OCPP data stream or is only available in an abstracted form. In many cases, that abstraction takes the form of alphanumeric error codes that signal a failure without explaining its cause. Even when documentation exists, interpreting those codes often requires manufacturer-specific knowledge or escalation to manufacturer engineering teams.
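
As an illustration of that abstraction, here is roughly what an OCPP 1.6-J StatusNotification can look like on the wire when a charger reports a fault the standard enum cannot describe. The frame shape and field names follow the OCPP 1.6 JSON transport; the vendor name and error code are invented.

```python
import json

# OCPP-J frames each message as [MessageTypeId, UniqueId, Action, Payload];
# MessageTypeId 2 is a CALL from the charge point to the CSMS.
status_notification = [
    2,
    "8f1c2a",  # arbitrary message id
    "StatusNotification",
    {
        "connectorId": 1,
        "status": "Faulted",
        # The standard errorCode enum is short; anything unusual
        # collapses into generic values like "OtherError".
        "errorCode": "OtherError",
        # The actionable signal hides here, and decoding it requires
        # the manufacturer's documentation. This value is invented.
        "vendorId": "AcmeEVSE",
        "vendorErrorCode": "F-31-0x8C",
        "timestamp": "2025-03-14T10:02:12Z",
    },
]

print(json.dumps(status_notification, indent=2))
```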

Across vendors, there is no consistent structure for how failures are represented. Some systems provide relatively descriptive messages. Others rely on opaque identifiers. Similar failure modes are not expressed in a common way, and there is no shared taxonomy that allows operators to translate meaning across platforms.
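
Operators who want cross-vendor comparability are therefore left to build translation tables by hand, one per vendor, along the lines of the hypothetical sketch below. All vendor names and codes are invented; the point is the shape of the workaround, not the entries.

```python
# A hand-maintained, per-vendor translation into one internal taxonomy.
# No shared taxonomy exists today; each operator invents its own.
NORMALIZED = {"POWER_MODULE", "THERMAL", "COMMS", "CONNECTOR", "UNKNOWN"}

VENDOR_CODE_MAP = {
    "AcmeEVSE": {
        "F-31-0x8C": "POWER_MODULE",
        "F-12-0x02": "CONNECTOR",
    },
    "VoltWorks": {
        "ERR_2201": "POWER_MODULE",  # same failure mode, different code
        "ERR_0405": "THERMAL",
    },
}

def normalize(vendor: str, vendor_error_code: str) -> str:
    """Translate a vendor-specific code into the internal taxonomy."""
    return VENDOR_CODE_MAP.get(vendor, {}).get(vendor_error_code, "UNKNOWN")

# Sanity checks: every mapped value is in the taxonomy, and two vendors'
# codes for the same failure mode normalize to the same category.
assert all(v in NORMALIZED for m in VENDOR_CODE_MAP.values() for v in m.values())
assert normalize("AcmeEVSE", "F-31-0x8C") == normalize("VoltWorks", "ERR_2201")
```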

The result is that visibility does not translate cleanly into understanding. The system can indicate that something is wrong without making clear what is wrong or how to fix it.

OCPP enables communication. It does not guarantee diagnosis.


At the same time, it would be incorrect to assume that the underlying data does not exist.

Every EVSE manufacturer I spoke with described access to far richer diagnostic information within the charger itself. Control board–level telemetry, direct sensor access, detailed fault trees, and in some cases visibility down to individual Modbus registers are all available internally. This information is actively used by manufacturer engineering teams to diagnose and resolve issues.
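
For a sense of what register-level access can look like, here is a minimal sketch using the open-source pymodbus library (3.x client API assumed). The host address, register map, units, and device id are entirely hypothetical, since real register maps are proprietary and vendor-specific.

```python
from pymodbus.client import ModbusTcpClient  # assumes pymodbus 3.x

# Hypothetical register map; real maps are proprietary to each manufacturer.
REGISTERS = {
    "dc_bus_voltage_dV": 0x0102,  # tenths of a volt
    "igbt_temp_dC":      0x0110,  # tenths of a degree Celsius
    "fault_word":        0x0120,  # vendor-defined bitfield
}

client = ModbusTcpClient("192.0.2.10")  # documentation-range IP, not a real charger
if client.connect():
    for name, addr in REGISTERS.items():
        # The keyword for the device address has varied across pymodbus versions.
        result = client.read_holding_registers(addr, count=1, slave=1)
        if not result.isError():
            print(f"{name}: {result.registers[0]}")
    client.close()
```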

Access to this layer is tightly controlled.

Manufacturers typically maintain proprietary tools and internal interfaces that allow them to interrogate the charger at a depth that is not exposed to network operators or third-party service providers. In some cases, APIs exist that could expose portions of this data externally, but those interfaces are not broadly available.

The limitation, then, is not that the system lacks data. It is that the data is not accessible where operational decisions are being made.


That boundary is not purely technical. It is also strategic.

Some manufacturers indicated a willingness to expose deeper telemetry under controlled conditions, particularly to certified service providers. In that model, expanding access is a way to build a scalable ecosystem of qualified operators who can support deployed infrastructure without requiring the manufacturer to internalize all service capacity.

Others were more cautious. Two concerns surfaced repeatedly: maintaining control over the customer relationship and maintaining control over how product performance is represented in the market. Providing third parties with deeper access to operational data introduces the possibility that reliability comparisons could emerge outside the manufacturer’s control.

These are not unreasonable concerns; however, they do shape how data flows, or does not flow, through the system.

Data access, in this context, is not simply an engineering decision. It is a business model decision.


From the perspective of network operators and service providers, the practical consequences of these boundaries are not evenly distributed.

Most failures are straightforward. CSMS providers described a consistent pattern: roughly half of all issues are communications-related and can be diagnosed directly from network data. Another portion consists of authorization failures, timeouts, or clearly expressed faults that are visible and interpretable through existing systems.

But the remaining fifteen to twenty percent of failures are different. These are the cases that resist easy classification. They are either buried in deeper layers of telemetry that are not exposed, or they span system boundaries that no single platform fully observes. A voltage sag on the grid, a thermal constraint in the vehicle, or an interaction between subsystems can trigger a failure that the charger can only partially describe.

Not all of these issues are cross-system in origin, but all of them share a common characteristic: they cannot be resolved confidently with the information available at the point of decision.
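
A triage pass along these lines makes the split concrete. The buckets mirror the rough proportions described above, but the classification rules are invented for illustration; a real CSMS would apply far richer logic.

```python
from enum import Enum

class Triage(Enum):
    NETWORK_RESOLVABLE = "diagnosable from network data alone (~half of issues)"
    INTERPRETABLE_FAULT = "clearly expressed fault, handled with existing tools"
    OEM_ESCALATION = "needs telemetry or context no single platform observes"

def triage(fault: dict) -> Triage:
    """Illustrative-only classification of an incoming fault record."""
    if fault.get("category") == "comms":  # offline, heartbeat loss, etc.
        return Triage.NETWORK_RESOLVABLE
    if fault.get("error_code") not in (None, "OtherError", "InternalError"):
        return Triage.INTERPRETABLE_FAULT  # descriptive standard code
    return Triage.OEM_ESCALATION  # opaque code or cross-system interaction

print(triage({"category": "comms"}))          # NETWORK_RESOLVABLE
print(triage({"error_code": "OverVoltage"}))  # INTERPRETABLE_FAULT
print(triage({"error_code": "OtherError"}))   # OEM_ESCALATION
```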

This is the portion of the system where diagnosis becomes uncertain.


In those cases, escalation to the manufacturer is not an exception. It is the norm.

Service providers indicated that nearly all of these “hard” failures require OEM involvement to reach a definitive diagnosis. And here, another constraint becomes visible: time.

From ChargerHelp’s experience, response times for manufacturer support can range from a couple of days to several weeks, depending on workload and issue complexity. That cadence does not align with the operational reality of network management, where decisions are made in minutes or hours.

The result is a structural mismatch. The system capable of providing the deepest insight is not operating on the same time scale as the decisions that depend on it.

In the meantime, the problem does not wait.


Compounding this is the structure of the ecosystem itself.

A single issue may involve a site host, an installer, a distributor, a network operator, and a manufacturer, each with partial visibility and distinct responsibilities. No single entity owns the full system, and no single entity has the authority to resolve every aspect of a failure independently.

Diagnosis becomes a coordination exercise. Resolution becomes a sequence of handoffs.

Even when each participant performs well within their domain, the system moves more slowly than any of its parts.


These dynamics concentrate in the same place.

The fifteen to twenty percent of failures that are hardest to diagnose are also the ones that drive the majority of operational effort. They require escalation, coordination, and time. They are the cases that lead to repeat truck rolls, delayed resolution, and missed expectations.

And they are precisely the cases where data is least accessible, least complete, and least consistent.


When asked what would improve if deeper telemetry were available, the answers were strikingly consistent: faster triage and reduced dependence on OEM escalation.

Not better dashboards. Not more reporting.

Faster decisions, made closer to the point of action, with less reliance on external interpretation.

The value of better data, in other words, is not informational. It is operational.


Taken together, these observations point to a broader conclusion.

The challenge facing EV charging is not a lack of data. It is fragmentation—across access, visibility, semantics, system boundaries, incentives, coordination, and time.

Each layer on its own is manageable. Together, they create a system in which no single participant sees the full picture, no single participant controls the full outcome, and the most important decisions must be made before the system capable of resolving them responds.

That is why dashboards can appear complete while operations feel uncertain.

And it raises a more fundamental question.

If no one sees the full system, and no one controls the full system, then who actually owns reliability?

That is where we turn next.
