On January 12, 2026, Meta Platforms established Meta Compute, a top-level corporate initiative designed to centralize the engineering and financing of massive AI infrastructure. The new unit targets the construction of tens of gigawatts of capacity this decade, with a long-term roadmap reaching hundreds of gigawatts. The move elevates infrastructure from a backend cost center to a primary strategic moat.

Leadership reflects a dual focus on technical execution and geopolitical financing. Santosh Janardhan, Meta’s head of global infrastructure, oversees technical architecture and silicon development, while Daniel Gross, recently of Safe Superintelligence, manages capacity strategy and supplier partnerships. Crucially, Meta appointed former Goldman Sachs executive and White House advisor Dina Powell McCormick as President and Vice Chair to lead sovereign partnerships and infrastructure financing.

The initiative formalizes Meta’s commitment to spend $600 billion on U.S. infrastructure by 2028. To secure the necessary baseload power, Meta recently signed agreements for 6.6 gigawatts of nuclear energy from partners including Vistra Corp., Oklo Inc., and TerraPower LLC.

Why It Matters

Meta Compute represents a transition from “hyperscale tenant” to “sovereign-scale owner.” By owning the full stack—from custom silicon and 100-plus-megawatt data centers to long-term nuclear power purchase agreements—Meta effectively bypasses the volatility of the public utility grid and the wait times of traditional cloud providers.

For competitors, the barrier to entry for frontier models is no longer just talent or algorithms; it is the ability to finance and permit gigawatt-scale energy projects. The inclusion of McCormick suggests that future AI scaling will require nation-state levels of coordination, treating compute as critical national infrastructure rather than a private corporate asset.

Founder Action

Founders should audit their reliance on generic cloud capacity. As compute becomes a sovereign resource, “spot” availability for training runs will likely tighten. Startups building at the application layer must prioritize “efficiency-first” architectures to hedge against rising inference costs as energy-heavy compute becomes the new oil.
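To make that cost sensitivity concrete, a toy model can show how grid energy prices flow through to per-token serving cost. Every figure below (accelerator power draw, throughput, PUE, energy prices) is a hypothetical placeholder chosen for illustration, not Meta or market data:

```python
def cost_per_million_tokens(energy_price_kwh, gpu_power_kw=0.7,
                            tokens_per_second=1000, pue=1.3):
    """Energy cost of serving one million tokens on one accelerator.

    energy_price_kwh:  grid price in $/kWh
    gpu_power_kw:      accelerator draw in kW (hypothetical)
    tokens_per_second: sustained serving throughput (hypothetical)
    pue:               data-center power usage effectiveness overhead
    """
    seconds = 1_000_000 / tokens_per_second      # wall-clock time to serve 1M tokens
    kwh = gpu_power_kw * pue * (seconds / 3600)  # total energy drawn, incl. overhead
    return energy_price_kwh * kwh

base = cost_per_million_tokens(0.08)   # calm grid pricing
spike = cost_per_million_tokens(0.16)  # energy price doubles
print(f"${base:.4f} -> ${spike:.4f} per 1M tokens")
```

Because the model is linear in energy price, a doubled grid price doubles the energy component of serving cost; efficiency work (higher tokens per second, lower draw, better PUE) is the only lever an application-layer startup controls.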

For infrastructure operators, the mandate is clear: build deep relationships with regional energy authorities and sovereign wealth funds now, as power—not just chips—will be the primary bottleneck through 2030.