Pump to Plug — Part IX: Who Owns Reliability?

Brad Juhasz

The Accountability Gap Nobody Wants to Talk About

The previous article in this series ended with a question: if no one sees the full system, and no one controls the full system, then who actually owns reliability?

The answer, stated plainly, is that no single entity does. And that is not a gap that will close on its own.

This is not an accusation. The accountability structure of EV charging did not emerge from negligence or bad faith. It emerged from the sequence in which the ecosystem assembled itself—hardware first, installation practices second, network software third, operational service layers last. Each layer was built by different actors with different incentives, different data access rights, and different contractual relationships to the infrastructure they touch. The result is a system in which accountability for reliability outcomes is distributed across parties in ways that no single party fully controls, and that no contract fully resolves.

Distributed accountability is not the same as shared accountability. In a well-designed shared accountability model, roles are explicit, handoffs are defined, and a named party is ultimately responsible for the outcome. In a distributed model without that architecture, accountability becomes diffuse—present everywhere in principle, enforceable nowhere in practice.

That is where EV charging sits today.


How the Ecosystem Assembled—And Why Accountability Got Distributed

A functioning DC fast charging site involves, at minimum, four distinct parties: the site host who owns the property and holds the utility relationship; the EVSE manufacturer who designed and warranted the hardware; the network operator who manages the software layer, session handling, and customer-facing systems; and the field service provider who deploys the infrastructure and executes physical intervention when remote resolution fails.

Each of these parties has a legitimate and necessary role. None of them were designed to own reliability end-to-end. The site host’s obligation is typically defined by a hosting agreement that specifies space and power, not uptime. The manufacturer’s obligation is bound by warranty terms that expire. The network operator’s obligation is shaped by a service agreement that may or may not include enforceable uptime commitments. The field service provider’s obligation is scoped to the work order issued, not to the reliability of the broader system.

Individually, each relationship is coherent. Collectively, they do not add up to a system with a clear owner of outcomes. When a station fails, it is entirely possible for each party to have fulfilled its individual obligation while the failure persists—because no contract assigns responsibility for the space between obligations.


The Petroleum Contrast: What Concentrated Accountability Looks Like

Although it is not a direct analog, I regularly hark back to the gas station as the model of what has worked reasonably well for the better part of a hundred years. The gasoline fueling industry did not arrive at operational reliability through goodwill or voluntary coordination. It arrived there through structure—specifically, through a set of institutional arrangements that made accountability impossible to diffuse.

Petroleum equipment technicians operate through manufacturer-authorized service networks — programs such as Gilbarco Veeder-Root’s Authorized Service Contractor designation — in which the right to perform warranted service is explicitly credentialed and gated. Regulatory frameworks layer compliance accountability on top: inspections, calibration certifications, and environmental standards attach legal consequence to failures that persist. The combined effect is that when a pump fails, there is a named, certified party responsible for its service — and clear consequences for inaction.

This did not happen because petroleum equipment is simpler or because the industry is better managed. It happened because decades of regulatory pressure, brand enforcement, and market consequence gradually forced accountability to concentrate at identifiable points. No one designed the outcome; the structure emerged under pressure.

EV charging is earlier in that process. The NEVI 97% uptime requirement is a meaningful signal that regulatory pressure is beginning to build. But the institutional infrastructure required to enforce accountability—defined responsible parties, contractual consequences, audit mechanisms—has not yet caught up with the regulatory intent, despite the urging of those who are actually spending the money to deploy the hardware.


The Four Accountability Gaps

The accountability structure of EV charging contains four distinct gaps. Each is the product of how the ecosystem assembled, not a deliberate design choice. Together, they describe the space in which reliability failures accumulate without a clear owner.


The Telemetry Gap

The manufacturer holds the data. The network operator receives a meaningful subset of it and may make the service decision (or may partner with a reliability-focused organization like ChargerHelp). Accountability for diagnostic accuracy sits clearly with no one.

As established in the prior article, the richest diagnostic information available for a DC fast charger resides inside the charger itself, accessible through manufacturer-controlled interfaces that are not broadly exposed to network operators or third-party service providers. What reaches the CSMS layer is an abstracted subset—sufficient for session management, insufficient for confident triage of the failures that resist easy classification.

The accountability consequence is structural: the party best positioned to diagnose a failure is not the party contractually responsible for resolving it. When diagnosis requires escalation to the manufacturer, the service timeline is governed by the manufacturer’s internal workload, not by the operator’s uptime obligation. The operator is accountable for the outcome but does not control the information required to produce it.


The Warranty Boundary Gap

As stations increasingly age out of warranty, accountability for hardware failure becomes genuinely contested.

Manufacturer warranties for DC fast chargers typically run two to three years. Extended service agreements exist but are not universally purchased. As the installed base ages, a growing portion of the national fleet operates in a gray zone: hardware failures occur, but no party has a clear contractual obligation to remediate them at speed. The manufacturer’s obligation has expired. The operator’s service agreement may not cover the cost of replacement components. The site host is rarely a party to the conversation at all.

The result is a predictable failure mode: hardware failures in post-warranty equipment generate extended outages not because the physical repair is technically complex, but because the accountability chain for authorizing and funding it has dissolved. No single party is on the hook. Resolution waits on negotiation. The fact that many of the “pioneers” who originally deployed the stations have moved on to bigger and better jobs only exacerbates the problem.


The Dispatch Authorization Gap

Field service is authorized by the network operator but executed by a third party, often without the diagnostic data required to scope it correctly.

This article series has explored the dispatch stack in prior installments. The accountability dimension of dispatch failure is worth naming separately here. When a truck roll fails to resolve the issue on the first visit—because the scope was incorrect, the parts were wrong, the technician lacked the necessary knowledge, or the diagnosis was incomplete—accountability for that failure is diffuse. The network operator authorized dispatch based on available information. The service provider executed the scope as defined. The manufacturer’s telemetry was not accessible to either party at the time of the decision.

No single party made an error. The system produced a failed outcome. And because no contract assigns accountability for first-time fix failure across the full chain, the cost is absorbed without consequence—which means the underlying condition that produced the failure is not systematically addressed.


The Site Host Gap

Site hosts control access, utility coordination, and the physical environment. They bear no contractual accountability for uptime.

Site access windows, utility contact availability, breaker access, and physical site conditions are all factors that materially affect how quickly an outage can be resolved. In many cases, a station remains down not because the technical issue is unresolved, but because the site host is unavailable to provide access, or because utility coordination has not been established in advance.

The site host is typically the least operationally engaged party in the reliability chain. Hosting agreements define terms for site use and power provision; they rarely define uptime obligations, response windows for access requests, or consequences for delays that extend outage duration. The site host influences reliability outcomes without being accountable for them.


Why Diffuse Accountability Is an Economic Problem

In a prior article in this series I described reliability as having an economic governor—a rising marginal cost curve that makes the final increments of uptime disproportionately expensive to achieve. Diffuse accountability is a primary mechanism through which that governor tightens.

When accountability is concentrated—when a named party is clearly responsible for the outcome, has the data access required to diagnose the problem, and bears the contractual consequence of failure—resolution is an execution problem. The responsible party has every incentive to move quickly and every tool required to do so.

When accountability is diffuse, resolution becomes a coordination problem. Each cross-boundary failure must be negotiated across parties who do not share a single operational truth, who have different incentives for speed, and who operate under different contractual time horizons. The technical intervention itself may be straightforward. The coordination overhead required to authorize and execute it is not.

This is precisely the dynamic that concentrates at the hard end of the reliability distribution. The failures that are easiest to resolve—misconfigurations, obvious resets, procedural gaps—rarely require cross-boundary coordination. They can be resolved within a single party’s authority and increasingly with AI-automated workflows. The failures that resist easy classification are, almost by definition, the ones that span organizational boundaries: they require telemetry the manufacturer holds, authorization the operator must grant, access the site host controls, and execution the field service provider must deliver. Each gap in the accountability chain adds friction. Friction raises cost. Cost raises the marginal expense of the next uptime point.

The governor, in other words, is not simply technological. It is contractual.


What Mature Accountability Architecture Requires

The conditions required for accountability to function in a multi-party system are not complicated to state, even when they are difficult to implement.

First, there must be a named responsible party for uptime outcomes at the site level. Not a party that bears some portion of responsibility, but a party that is contractually on the hook for the aggregate outcome. In petroleum retail, the branded operator or franchise holder typically serves as the named accountability point—a role enforced through franchise agreements and brand standards rather than regulation alone. In EV charging, the network operator or the service and reliability provider are the most natural candidates—but only if contractual obligations are structured to match that responsibility, which at present they frequently are not.

Second, data access rights must follow service authority. A party that is accountable for a reliability outcome must have access to the diagnostic information required to produce it. The current arrangement—in which the richest diagnostic data is held by a party that is not accountable for the operational outcome—is architecturally incoherent. Either data access expands to match service accountability, or accountability contracts to match data access. Sustaining both in their current misaligned state ensures that the accountability gap persists.

Third, escalation paths must carry defined authority, not merely defined sequence. A process that routes a failure from Tier 2 to the manufacturer and back does not resolve the accountability gap if no party along that path has the authority—and the contractual obligation—to drive the issue to resolution within a defined window.

Fourth, site host obligations must be operationalized. Hosting agreements that are silent on access response windows and utility coordination timelines are not neutral—they passively extend outage duration by removing any consequence for delay. Bringing site hosts into the accountability architecture does not require converting them into operational partners. It requires aligning the terms of their obligations with the operational realities that their decisions affect.

None of this will happen through voluntary coordination alone. The petroleum industry’s accountability architecture did not emerge from a round table. It emerged from regulatory pressure, brand enforcement, and the accumulated market consequence of operators who could not maintain their equipment. EV charging is following the same arc. The NEVI 97% uptime requirement—and state-level programs such as NYSERDA’s that incorporate it by reference as a condition of funding—are early checkpoints. Enforcement mechanisms, audit frameworks, and contractual standards will follow—as they always have when infrastructure reaches the scale at which diffuse accountability becomes publicly visible.


The Question Behind the Question

This series has traced a consistent argument across nine articles: that EV charging reliability is a systems problem, not a hardware problem; that it depends on operational layers that the industry undervalues; that AI and data infrastructure improve outcomes only when they are placed where consequential decisions are made; and that the marginal cost of uptime rises as fragmentation prevents the system from operating coherently.

The accountability question is the substrate beneath all of those arguments. Better dispatch stacks, more capable Tier 2 functions, deeper telemetry access, a scaled and trained workforce—none of these operate at full effectiveness inside a system where no single party is clearly on the hook for the aggregate outcome. The technical and operational improvements the industry is working toward are necessary. They are not sufficient unless the governance architecture that assigns accountability for outcomes is also resolved.

Reliability is not simply a technical system. It is a governance system. And governance requires that someone, somewhere, owns the answer to the question when the station is dark.

*From pump to plug, that lesson has not changed.*
