A thousand times. That was the margin of error sitting inside one of the most-cited environmental indictments of the AI industry. In her widely praised book Empire of AI, journalist Karen Hao wrote that a proposed Google data center near Santiago, Chile, could require more water than the entire local population consumed — by a factor of one thousand. The figure traveled. It appeared in briefings, op-eds, and regulatory submissions. Then, last month, Hao posted a public correction: a unit misunderstanding had inflated the number by roughly 1,000x. The data center’s projected water use, properly converted, was not catastrophic. It was, by regional standards, unremarkable.
That correction is not a vindication of the AI industry. But it is an indictment of the information environment surrounding it — and a signal that the AI water consumption myth has accumulated enough mass to distort policy, investment, and public understanding in ways that will take years to untangle. The question worth sitting with is not whether AI uses too much water. It is whether we have any reliable mechanism for knowing how much it actually uses at all.
The Nuclear Power Plant Playbook, Running Again
In the 1970s, nuclear energy’s opponents circulated thermal pollution estimates suggesting that reactor cooling systems would raise river temperatures enough to kill fish populations across entire watersheds. Some projections were accurate. Many were not. The methodology behind them varied wildly — some measured water withdrawal, others measured water consumption, and the two figures can differ by an order of magnitude depending on whether the water is returned to its source after use. Regulators took a decade to standardize the terminology. By then, the public narrative had calcified around the worst-case numbers, and it never fully recovered. The industry’s actual thermal footprint, which varied enormously by reactor design and geography, was largely beside the point politically.
The parallel is uncomfortable for anyone who wants the AI industry to face genuine accountability. Because the mechanism is identical: a measurement methodology gap, exploited — not always deliberately — by both critics seeking leverage and companies seeking cover. TechPolicy.Press documented in November 2024 that there is still no standardized framework for calculating AI’s water footprint, with companies choosing between water withdrawal metrics and water consumption metrics based largely on which makes their disclosures look better. That choice produces numbers that are technically accurate and practically incomparable.
What “25% Less” Actually Means, and Why the Units Matter
The headline figure — that AI data centers use roughly 25% less water than utilities commonly report — emerges from the distinction between withdrawal and consumption. Withdrawal counts every liter that enters a cooling system. Consumption counts only what is lost to evaporation and cannot be returned. Most utility-facing disclosures use withdrawal. Most environmental impact assessments, implicitly, assume consumption. The gap between the two can reach 4:1 in facilities using recirculating cooling towers, which describes the majority of large-scale data centers built since 2018.
So when a facility reports withdrawing 1 million gallons per day, the consumed figure — the water genuinely removed from the local water cycle — may be closer to 250,000 gallons. That is still a large number. In a water-stressed region, it matters enormously. But it is not the same number, and treating it as such is how the AI water consumption myth compounds. Wired’s account of the Hao correction makes clear that the Santiago error was specifically a unit conversion failure — the kind that becomes invisible when editors, fact-checkers, and even primary researchers are not fluent in the difference between liters per second, cubic meters per day, and annual megagallon totals.
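The arithmetic behind that kind of failure is worth making concrete. Below is a minimal sketch, using illustrative numbers rather than the actual Santiago filing values, of how reading liters as cubic meters inflates a flow figure by exactly three orders of magnitude:

```python
# Minimal sketch: how a liters-vs-cubic-meters mix-up inflates a water
# figure by 1,000x. Numbers are illustrative, not the real filing values.

LITERS_PER_CUBIC_METER = 1_000
SECONDS_PER_DAY = 86_400

def cubic_meters_per_day(flow_liters_per_second: float) -> float:
    """Convert a flow rate in L/s to m3/day."""
    return flow_liters_per_second * SECONDS_PER_DAY / LITERS_PER_CUBIC_METER

# A hypothetical filing states a withdrawal of 20 L/s.
flow_ls = 20.0
correct_m3_day = cubic_meters_per_day(flow_ls)

# Misreading the filing as 20 m3/s (treating liters as cubic meters)
# inflates the result by the liters-per-cubic-meter factor.
mistaken_m3_day = 20.0 * SECONDS_PER_DAY

print(correct_m3_day)                    # 1728.0
print(mistaken_m3_day / correct_m3_day)  # 1000.0
```

The error survives review precisely because both numbers are plausible-looking flows; only a reader who re-derives the units catches it.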
“We have withdrawal numbers from most major operators. We have almost no consumption numbers verified independently. What we are regulating, in practice, is a proxy for the actual impact.”
— Senior water policy researcher, Pacific Northwest utilities commission
Where the Numbers Actually Live
| Reporting Metric | Typical Source | What It Measures | Common Error |
|---|---|---|---|
| Water Withdrawal | Utility filings, ESG reports | Total intake before return | Treated as equivalent to consumption |
| Water Consumption | Engineering audits, academic studies | Net loss to evaporation/discharge | Rarely verified by third parties |
| Water Usage Effectiveness (WUE) | Voluntary company disclosure | Liters per kilowatt-hour of IT load | Denominator definitions vary by operator |
| Per-Query Estimates | Academic papers, media reports | Water cost per ChatGPT query, etc. | Training vs. inference conflated; hardware generations ignored |
The per-query estimates deserve particular scrutiny. A figure that circulated widely in 2023 — that a single ChatGPT conversation consumes roughly 500 milliliters of water — came from a University of California study that the authors themselves noted was based on 2021-era hardware and cooling infrastructure. Newer data center designs, including Microsoft’s closed-loop systems and Google’s facilities using reclaimed wastewater, operate at materially lower water intensities. The 500ml figure remained in circulation anyway, because corrections travel slowly and alarming numbers travel fast.
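To see why hardware generation and facility design dominate these estimates, consider a back-of-envelope sketch of how per-query figures are typically built: energy per query multiplied by the facility's water intensity. The inputs below are illustrative assumptions, not figures from the study, and real methodologies also add off-site water consumed in electricity generation:

```python
# Back-of-envelope sketch of a per-query water estimate.
# All inputs are hypothetical; real studies also count off-site
# (power-plant) water, which this sketch omits.

def water_per_query_ml(energy_per_query_kwh: float, wue_l_per_kwh: float) -> float:
    """On-site cooling water per query, in milliliters (L -> mL)."""
    return energy_per_query_kwh * wue_l_per_kwh * 1_000

# Older-generation assumptions vs. a newer, lower-WUE facility.
old = water_per_query_ml(energy_per_query_kwh=0.004, wue_l_per_kwh=1.8)
new = water_per_query_ml(energy_per_query_kwh=0.0003, wue_l_per_kwh=0.2)

print(round(old, 2))  # 7.2
print(round(new, 3))  # 0.06
```

With both inputs improving at once, the per-query figure moves by two orders of magnitude, which is why a number anchored to 2021-era infrastructure cannot simply be carried forward.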
The Companies Are Not Innocent. Neither Are Their Critics.
There is a version of this story where the industry is the villain and the math errors are incidental. That version is incomplete. Microsoft, Google, and Meta all publish Water Usage Effectiveness (WUE) metrics — but each defines the denominator differently, making cross-company comparison largely meaningless without normalization work most journalists and policymakers do not perform. A study highlighted by the SF Examiner found that the volume of water described in certain AI impact assessments — when converted to plastic bottle equivalents for public consumption — generated headlines far more alarming than the underlying cubic-meter figures warranted. The bottle conversion was not wrong. It was chosen because it communicated scale vividly, and scale is what gets coverage.
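The denominator problem is easy to demonstrate. The Green Grid's WUE definition divides annual site water use by IT equipment energy alone; an operator that instead divides by total facility energy reports a flatter number from the same physical reality. The figures below are hypothetical:

```python
# Illustrative sketch (hypothetical numbers): the same facility's WUE
# shifts depending on which energy denominator is chosen.

annual_water_liters = 500_000_000   # 500 million liters consumed on-site
it_load_kwh = 400_000_000           # energy used by IT equipment alone
total_facility_kwh = 520_000_000    # IT load plus cooling, lighting, losses

wue_it_load = annual_water_liters / it_load_kwh       # L/kWh, strict definition
wue_total = annual_water_liters / total_facility_kwh  # L/kWh, looser denominator

print(round(wue_it_load, 3))  # 1.25
print(round(wue_total, 3))    # 0.962
```

Neither number is false. But ranking operators across the two definitions rewards whoever picked the larger denominator, not whoever used less water.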
The companies, meanwhile, have every structural incentive to report withdrawal rather than consumption, to exclude Scope 3 water embedded in chip manufacturing, and to benchmark their current facilities against their oldest infrastructure rather than against best-in-class competitors. The result is a market for AI water consumption myth-making that is genuinely bipartisan: advocates who need the numbers large, and operators who need them small, both working from the same opaque well.
The investor community is beginning to notice. Not because of the environmental argument — that case has been made repeatedly without changing capital allocation — but because regulatory standardization is now a credible near-term risk. The EU’s Corporate Sustainability Reporting Directive requires water consumption disclosure starting in 2025 for large companies. When consumption figures become mandatory rather than voluntary, the gap between what has been reported and what is actually happening may become a liability event rather than a reputational one.
The Correction That Reveals the Ecosystem
Hao’s correction — which she made publicly and with specificity — is, by the standards of the current information environment, unusually responsible. She named the error, quantified it, and credited the person who caught it. What it reveals, though, is not a flaw in her reporting specifically. It reveals that the entire ecosystem producing claims about AI water consumption operates without the verification infrastructure that would catch unit errors before publication. The peer review process in hydrology would have flagged a 1,000x discrepancy. The publication process for narrative nonfiction, under deadline and competitive pressure, did not.
That is not primarily a media criticism. It is a data infrastructure criticism. Academic hydrologists have the tools to verify these claims — and some are doing so, belatedly. But the absence of mandatory, standardized, third-party-verified water reporting from AI operators means that even rigorous researchers are working from incomplete inputs. The question of whether any given data center consumes 100,000 gallons or 1,000,000 gallons per day is currently, in most jurisdictions, unanswerable from public data alone.
There is one complication worth stating plainly: even if every measurement error were corrected tomorrow, the aggregate trajectory of AI infrastructure buildout — hundreds of new facilities planned through 2030, many sited in water-stressed regions across the American Southwest, the Middle East, and southern Europe — may still represent a meaningful stress on local hydrology. The math may be wrong and the concern may still be right. Precision does not equal absolution.
FetchLogic Take
By the end of 2026, at least one major AI operator will face a material regulatory penalty — not for water use itself, but for the gap between previously disclosed withdrawal figures and newly mandated consumption disclosures under EU CSRD reporting rules. The penalty will be the moment the AI water consumption myth is replaced by something more damaging: documented evidence that the real numbers were knowable and withheld. At that point, the conversation shifts from science communication failure to corporate governance failure, and the valuation implications for the operator in question will be measurable in basis points, not reputation cycles. That transition will happen faster than the industry currently expects.