TE_MANUAL.1.5 // ANALYSIS · APPENDIX [CLASSIFICATION: PUBLIC]
// APPENDIX · MEMORY BOTTLENECK · DRAM IS THE NEW GOLD.

The bottleneck of the AI economy migrated again. After chips and power, the binding constraint of 2025–2026 is high-bandwidth memory. Samsung, Micron, and SK hynix are the new central banks of compute.

HBM PRICE / GB · ↑ 4.8× · vs. 2023 baseline
HBM CAPEX 2026 · ~$60B · across 3 vendors
GDDR6 LEAD TIME · 52w+ · consumer GPU stack
SUPPLY SHARE · 3 / 3 vendors · global supply
01 · WATCH · LISTEN · EXPLORE

Three formats. One thesis.

Video, audio, and a slide deck — for whichever way you metabolise an idea.

02 · THE SITUATION · AI-DRIVEN MEMORY SHORTAGE

The bottleneck moved again.

From chips (2023) to power (2024) to memory (2025–2026). Each constraint priced. Each repriced. The market keeps moving.

In late 2023, the binding constraint on AI was silicon. NVIDIA could not ship Hopper fast enough; lead times stretched past a year; the value of every H100 in inventory tripled. By the end of 2024, NVIDIA had ramped Blackwell production fast enough to clear that bottleneck. The new constraint became power: a 500 MW data centre in Loudoun County, Virginia, could not be energised before 2032 because the grid interconnection queue had backed up. The brownfield repower thesis priced.

By the time The NVIDIA Innovator's Dilemma went to press in April 2026, the bottleneck had migrated again. Chips were available. The grid was being repowered. The new binding constraint was memory, specifically high-bandwidth memory, and behind it, the entire DRAM stack.

"A modern AI data centre is, increasingly, a vault of high-bandwidth memory with some silicon attached."

The fight over Samsung, Micron, and SK hynix capacity in 2025 and 2026, for HBM3, HBM4, GDDR7, and the long tail of LPDDR, is the next chapter of the same story. The vendors got religion in 2024 and are now the most strategically priced semiconductor companies on earth, more so than NVIDIA itself for some classes of workload.

This appendix exists to track the memory layer of the argument the book makes about silicon. The two are not separate stories. They are the same story with the bottleneck migrating along the value chain. Three bottlenecks in three years. The interesting question is what gets bottlenecked next, and the answer, the moment you write it down, is unembarrassed by ambiguity: the token. The unit of useful thought, delivered on time. That is the subject of the companion essay.

03 · RESEARCH · CURATED RESOURCES

Three places to dig in.

Long-form analysis, investor perspective, and the running press digest on chip-and-memory geopolitics.

04 · DISCUSS WITH AI · IN YOUR LANGUAGE

Eight languages. Eight assistants.

A test of how the same argument lands across linguistic frames, measured against the assistants that already index the literature.

05 · ESSENTIAL READING · DEEPER CONTEXT

Five books that frame the conversation.

For the upstream literature on chip geopolitics, capital, and the structural rise of computing as a strategic resource.

// COMPANION VOLUME

The NVIDIA Innovator's Dilemma

This appendix is the memory layer of the silicon argument made in The NVIDIA Innovator's Dilemma. The book covers the chip; this page covers the bottleneck behind the chip; the companion essay covers what comes after both.

→ Read the book