Satya Nadella Discusses AI as “Heavy Industry” and the Return of Scaffolding

Microsoft CEO Satya Nadella frames AI not as magic but as “heavy industry,” arguing that the strategic advantage lies in the “scaffolding” (the infrastructure and tools around the models) rather than in the models themselves. The conversation, recorded during a tour of Microsoft’s Fairwater 2 data center, also covers the future of AI business models and geopolitics.
LLM
AI
Podcast
Author

Junichiro Iwasawa

Published

November 27, 2025

An interview with Microsoft CEO Satya Nadella, conducted by Dwarkesh Patel and Dylan Patel of SemiAnalysis, has been released.

The setting was Microsoft’s state-of-the-art data center, “Fairwater 2”. Amid more than 5 million fiber optic cables, in what is considered the world’s most powerful AI training cluster, the conversation was not about a glittering generative AI roadmap but about the starkly physical, industrial reality of deploying and recouping massive capital in “the heavy industry known as AI.”

The interview covers a wide range of topics, but what stands out is Nadella’s reframing of AI not as “magic” but as an “industry,” and his unwavering conviction that the winning opportunity lies not in the AI models themselves but in the Scaffolding that supports them.

In this post, we delve into Microsoft’s “AI Industrialization” strategy and the business model for the post-AGI era envisioned by the King of SaaS, as revealed in this dialogue.

From Software Company to “Heavy Industry”

“Welcome to the software company.”

Nadella said this with a touch of irony amid the roar of the data center, looking up at a massive block of concrete not yet filled with server racks. It was a moment that acknowledged Microsoft’s transformation from a company that simply writes code into a heavy industry managing power, heat, and silicon.

According to Scott Guthrie (EVP of Cloud & AI), Microsoft plans to increase training capacity tenfold every 18 to 24 months; the next step alone means roughly ten times the computational resources used to train GPT-5. This explosive growth in CAPEX (capital expenditure), as Dylan Patel points out, outpaces the speed at which railways and power grids were once laid.
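
The compounding implied by that cadence is easy to underestimate. A minimal sketch, assuming a clean tenfold step every 24 months from a normalized baseline (the interview gives a range, not an exact schedule):

```python
# Illustrative arithmetic for "10x training capacity every 18-24 months".
# The 24-month period and the baseline of 1.0 are assumptions for the
# sketch, not figures stated in the interview.

def capacity_multiplier(months: float, period_months: float = 24.0) -> float:
    """Training capacity relative to today, growing 10x per period."""
    return 10.0 ** (months / period_months)

if __name__ == "__main__":
    for years in (2, 4, 6):
        # At the slower 24-month cadence: 10x, 100x, 1000x.
        print(f"after {years} years: {capacity_multiplier(years * 12):,.0f}x")
```

At the faster 18-month cadence the same six years would imply closer to a 10,000x multiplier, which is why Dylan Patel’s railway comparison is not hyperbole.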

What is important here is that Nadella justifies this huge investment through “Knowledge Intensity.” It is not simply a matter of throwing money at rows of GPUs. By injecting software expertise (cooling efficiency, network topology, workload optimization) into the hardware, Microsoft aims to suppress COGS (cost of goods sold) and maximize return on investment. He calls this the “Hyperscale Business,” defining it as a highly industrialized process distinct from mere hosting.

Scaffolding vs Model: Where Does the Value Reside?

The highlight of the discussion is the question: “Does the value of AI reside in the model, or does it reside in the surroundings (Scaffolding)?”

Dylan Patel and Dwarkesh pressed Nadella on whether, as model performance improves and AI agents become capable of operating PCs autonomously, existing UIs and tools (like Excel or VS Code) would become mere waypoints, with the majority of value flowing to companies with the “smartest models” (such as OpenAI or Anthropic). In response, Nadella consistently maintained the “superiority of Scaffolding.”

Nadella’s logic runs as follows: models themselves will commoditize. With the rise of open-source models and multiple frontier models (GPT, Claude, Gemini) coexisting, a winner-take-all outcome in which a single model dominates everything is unlikely. The differentiator will instead be the “Scaffolding” that integrates these models into business workflows, links them with data, and applies governance.
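
The argument can be made concrete with a small sketch: the model is a swappable backend, while the scaffolding owns the durable assets of context, access control, and auditability. Every name and structure below is illustrative, not any real Microsoft API:

```python
# Sketch of the "scaffolding" thesis: models are interchangeable
# functions, while the integration layer (data graph, permissions,
# audit trail) is where the stickiness lives. All names are made up.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A "model" is just prompt -> text; any frontier or open-source
# model could be slotted in without changing the scaffold.
Model = Callable[[str], str]

@dataclass
class Scaffold:
    model: Model                                      # commoditized part
    context: Dict[str, dict] = field(default_factory=dict)   # data graph
    audit_log: List[Tuple[str, str]] = field(default_factory=list)

    def run(self, user: str, task: str) -> str:
        # Governance: only documents the user may read are injected.
        allowed = sorted(k for k, v in self.context.items()
                         if user in v.get("readers", []))
        self.audit_log.append((user, task))           # every call auditable
        prompt = f"{task} [context: {', '.join(allowed) or 'none'}]"
        return self.model(prompt)

# Swapping the model leaves the scaffold's value untouched.
echo_model: Model = lambda prompt: f"answer based on: {prompt}"
scaffold = Scaffold(
    model=echo_model,
    context={"q3_sales.xlsx": {"readers": ["alice"]}},
)
```

The point of the sketch is that replacing `echo_model` with a smarter model makes the `context` and `audit_log` layers more valuable, not less, which is exactly Nadella’s claim.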

Take the “Excel Agent” as an example: he explains that it is not just a wrapper but the embedding of an “analyst” that understands Excel’s native logic and tools. The smarter the model becomes, the more it needs context and access rights to data (security and identity) to unlock its capabilities. In other words, the bet is that the smarter AI becomes, the deeper its dependence on Microsoft’s infrastructure (the Office 365 data graph and GitHub repositories).

This is a characteristic idea from Microsoft, which once seized hegemony by controlling the platform known as the OS, but it is also a risky bet. If AI truly becomes human-like and moves freely across the boundaries of OS and tools, it might leap over the Scaffolding as well. Nadella is attempting to hedge this risk by controlling the meta-layer that manages and monitors all AI agents, through initiatives such as the “Agent HQ” concept.

Copilot vs The World: The Fortress of GitHub

Nadella remains calm about the intensifying competition in the coding domain. Confronted with emerging players like Cursor and Claude Code threatening GitHub Copilot’s share, he declared, “I love that chart” (referring to a diagram showing the rise of competitors). In his view, having entirely new, AI-native companies as competitors, rather than past rivals like Borland, is proof that the market is expanding healthily.

Microsoft’s strength here lies, in the end, in the overwhelming data-accumulation machine that is GitHub. Whatever AI agent writes the code, the result ends up saved and managed in a GitHub repository. Nadella is attempting to evolve GitHub from mere code storage into a “platform” where diverse AI agents operate in coordination.

Whether a user relies on Claude or on OpenAI, as long as the resulting pull requests land on GitHub, Microsoft remains at the center of the ecosystem. This goes beyond “selling pickaxes in a gold rush”: it is a strategy of renting out the safe where the spoils of the pickaxes are stored.

Geopolitics and Sovereign AI: Satya as a Diplomat

In the latter half of the interview, the topic shifted to geopolitical risks. How will Microsoft maneuver in a bipolar world defined by US-China friction?

Here, Nadella emphasized the concept of Sovereign AI. Governments and companies everywhere will increasingly want to run AI on their own data, with their own computational resources, under their own regulations. Microsoft’s strategy is not to impose American technology wholesale but to win trust by providing “local AI infrastructure” tailored to each country’s circumstances.

Just as the concentration of TSMC’s fabs in Taiwan is a risk, the concentration of AI computational resources in specific regions becomes a national security risk. Nadella aims to productize this “trust” through a distributed data center network and guarantees of data residency (the physical storage location of data). This is a technical challenge, but at the same time an extremely sophisticated diplomatic move.

MAI and OpenAI: Not Two-Timing, but a “Portfolio”

Finally, we must touch upon the relationship between Microsoft’s internal AI development unit, “Microsoft AI (MAI)”, and OpenAI. Although there is speculation that “Microsoft might be giving up on OpenAI and shifting to in-house development,” Nadella’s answer is strictly a “portfolio strategy.”

He emphasized a division of labor: maximize the use of OpenAI’s models as the “frontier,” while developing models optimized for cost efficiency, latency, and specific product requirements in-house at MAI. This is a natural conclusion, since routing every workload to the most expensive frontier model lacks economic rationality.
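
The portfolio logic reduces to a routing decision: send each workload to the cheapest model that clears its quality bar. A minimal sketch, with made-up model names, tiers, and prices standing in for figures the interview does not give:

```python
# Sketch of the MAI/OpenAI "portfolio" idea as a cost-aware router.
# Names, quality tiers, and per-token prices are all illustrative
# assumptions, not Microsoft's actual models or routing policy.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class ModelOption:
    name: str
    quality: int        # rough capability tier; higher is more capable
    cost_per_1k: float  # illustrative dollars per 1k tokens

PORTFOLIO: List[ModelOption] = [
    ModelOption("mai-small", quality=1, cost_per_1k=0.0002),
    ModelOption("mai-large", quality=2, cost_per_1k=0.002),
    ModelOption("openai-frontier", quality=3, cost_per_1k=0.03),
]

def route(required_quality: int) -> ModelOption:
    """Pick the cheapest model that clears the workload's quality bar."""
    eligible = [m for m in PORTFOLIO if m.quality >= required_quality]
    return min(eligible, key=lambda m: m.cost_per_1k)
```

Under these made-up prices, a two-order-of-magnitude cost gap between the small in-house tier and the frontier tier is what makes running everything on the frontier model economically irrational.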

Reading between the lines, however, an intention to reduce dependence on OpenAI and secure autonomy is also visible. Microsoft has secured rights to use OpenAI’s models for the next seven years while simultaneously gathering top talent, such as Mustafa Suleyman (DeepMind co-founder), to form a proprietary “Superintelligence Team” in-house. Microsoft is steadily building a structure in which it enjoys the fruits if OpenAI succeeds and falls back on its own technology should OpenAI stumble.

Conclusion: Towards the “General Trading Company” of the AI Era

What emerges from the dialogue with Dwarkesh Patel is a Microsoft that, far from being swept up in the generative AI boom, is cool-headedly solidifying its industrial foundation.

While participating in the AI model development race (the gold rush), Microsoft is simultaneously moving to seize the infrastructure (the railroads), the tools (the pickaxes), and the storage (the banks). The future Nadella describes is less the arrival of a sci-fi singularity than a vision, in some ways more ambitious and formidable, in which AI spreads to every corner of the world like electricity or gas and Microsoft manages all of the faucets.

Will Scaffolding dominate the Model, or will the Model render Scaffolding irrelevant? It will take time for the answer to emerge, but at the very least Microsoft continues to pour enormous capital into every layer, from physical data centers to the application layer, so that it can succeed whichever way things fall. This is truly a heavyweight’s game.