I like these insights from an exec at Schneider Electric (a key player in data center buildouts).

"AI hardware is rewriting the data‑center energy equation. One rack of NVIDIA’s Blackwell 2 now draws roughly 180‑200 kW, driving whole‑site demand from ~300 MW to 1.2‑1.5 GW—a 4‑to‑5× jump in a single architecture cycle. The power spikes (“300 MW → 1.5 GW → 500 MW”) stress 30‑ to 50‑year‑old transformers and distribution assets that were never designed for such volatility, forcing utilities and operators to rethink grid reinforcement and on‑site generation strategies.

Latency, not land, decides geography—hence the rise of “metro micro‑data‑centers.” Hyperscalers still prefer desert “mega” sites, but real‑time workloads (trading, high‑speed manufacturing QC, robotic assembly) cannot tolerate the round‑trip to remote super‑clusters. Result: data‑center footprints are pushing back into dense regions (e.g., the Pennsylvania AI Hub) and even onto factory floors, where edge or “micro‑cloud” rooms house mixed racks of legacy PLCs, Xeon/EPYC CPUs and top‑tier GPUs to keep inference inside the 20–50 ms envelope.

A nuclear “mini‑renaissance” is being financed by the cloud giants. Microsoft has already inked a 20‑year offtake to restart Three Mile Island Unit 1; SMR vendors (GE, Westinghouse, Rolls‑Royce) are positioning small reactors as drop‑in baseload for hyperscale campuses. Operators admit the engineering and regulatory path is novel and capital‑intensive, but see nuclear as the only carbon‑neutral, 24×7 supply that scales with AI demand.

Cooling and on‑site auxiliaries are the next investment hotspot. Traditional single‑phase cooling is already inadequate; multiphase liquid systems are becoming mandatory for Blackwell‑class thermal loads. Operators are evaluating dedicated diesel or gas gensets—and ultimately SMRs—to run “utility loads” (cooling, ventilation, security) locally, cutting grid draw by ≈40% and easing interconnection bottlenecks. These niches create a fresh capex cycle for thermal‑management OEMs and distributed‑generation providers.

No pause in sight—chip roadmaps and software volatility keep capex on the accelerator. With each GPU generation delivering higher density, data centers require continuous electrical and mechanical retrofits; hyperscalers blend constant‑load owned sites with burst capacity from Tier‑1 colos (Equinix, NTT, Compass, etc.). Software demand is even less predictable: emerging Chinese LLMs (DeepSeek, Alibaba, Tencent) and still‑unknown academic breakthroughs could further spike compute intensity, making any two‑to‑five‑year forecast highly uncertain. Investors should track four lenses simultaneously—academic, start‑up, hyperscaler, and capital‑allocation trends—to avoid blind spots."
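A few quick back-of-envelope checks on the numbers in that quote. First, the rack-to-site power arithmetic: the sketch below replays the exec's round figures (180-200 kW per rack, 300 MW growing to 1.5 GW per site); the IT-share factor, i.e. how much of site power actually reaches the racks, is my own assumption for illustration.

```python
# Back-of-envelope check of the rack-to-site power figures quoted above.
# Per-rack draw and site demand are the exec's round numbers; the IT-share
# factor (how much of site power actually reaches the racks) is an assumption.

rack_kw_low, rack_kw_high = 180, 200      # quoted per-rack draw (kW)
site_mw_old, site_mw_new = 300, 1_500     # quoted site demand before/after (MW)
it_share = 0.7                            # assumed fraction of site power feeding IT load

growth = site_mw_new / site_mw_old
racks_min = site_mw_new * 1_000 * it_share / rack_kw_high
racks_max = site_mw_new * 1_000 * it_share / rack_kw_low

print(f"site demand multiplier: {growth:.1f}x")                        # -> 5.0x
print(f"racks supported at 1.5 GW: {racks_min:,.0f} to {racks_max:,.0f}")
```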
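Second, the "20-50 ms envelope" for real-time workloads is mostly a distance budget. The sketch below uses the physical limit of light in fiber (roughly 200 km per millisecond); the path-stretch factor and the inference time are illustrative assumptions, not figures from the post.

```python
# Rough round-trip budget behind the "20-50 ms envelope" quoted above.
# Light in fiber covers ~200 km per millisecond (physics); the path-stretch
# factor and the inference time are illustrative assumptions.

FIBER_KM_PER_MS = 200     # ~2/3 of c in glass
ROUTE_OVERHEAD = 1.4      # assumed path stretch plus switching/queueing
INFERENCE_MS = 15         # assumed model compute time per request

def network_rtt_ms(distance_km: float) -> float:
    """Propagation delay there and back, inflated by routing overhead."""
    return 2 * distance_km * ROUTE_OVERHEAD / FIBER_KM_PER_MS

for site, km in [("factory-floor micro-cloud", 1),
                 ("metro micro-data-center", 50),
                 ("remote desert super-cluster", 2_000)]:
    rtt = network_rtt_ms(km)
    print(f"{site:<28} network {rtt:5.1f} ms, total {rtt + INFERENCE_MS:5.1f} ms")
```

Under these assumptions a site a couple of thousand kilometers away spends roughly 28 ms on the network before any compute happens, which already blows the tight end of the envelope and explains the pull toward metro and factory-floor deployments.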
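Third, the "≈40% lower grid draw" from running utility loads locally lines up with simple PUE arithmetic: if on-site generation carries everything except the IT load, the grid relief equals the non-IT overhead share of total site power. The PUE values below are my own assumptions for illustration, not figures from the post.

```python
# One way to arrive at the quoted "~40% lower grid draw": if on-site generation
# carries all non-IT loads (cooling, ventilation, security), the grid relief
# equals the overhead share implied by the facility's PUE.

def grid_relief_fraction(pue: float) -> float:
    """Share of total site power that is non-IT overhead."""
    return (pue - 1) / pue

for pue in (1.4, 1.6, 1.7):
    print(f"PUE {pue:.1f} -> utility loads are {grid_relief_fraction(pue):.0%} of site draw")
# A PUE around 1.6-1.7 puts the overhead near the quoted ~40%
```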