How OpenAI’s ‘Stargate’ Could Supercharge Nvidia: The Realistic Upside

Article · September 1, 2025 · By Zakariae BEN ALLAL

Nvidia has emerged as one of the biggest winners of the AI boom thus far. However, a new wave of spending from OpenAI and Microsoft could potentially take demand to new heights. Reports of a mega-scale AI data center project, code-named ‘Stargate,’ have investors pondering a crucial question: What could this mean for Nvidia?

What is ‘Stargate’ and Why It Matters

In 2024, multiple sources reported that OpenAI and Microsoft were exploring the construction of a next-generation AI supercomputer and data center campus known as ‘Stargate.’ The projected cost is substantial, potentially around $100 billion over several years, contingent on design choices and timelines. The Information first reported the plan, and Reuters quickly summarized its scope and implications (Reuters).

The rationale is straightforward: Training and operating advanced AI models demand immense computing power, energy, and robust networking. As the size and usage of these models grow, so does the need for specialized accelerators, high-bandwidth memory, and ultra-fast connectivity. This is precisely the arena where Nvidia currently excels.

Where Nvidia Fits In

Nvidia has a commanding presence in the AI accelerator market, primarily through its data center GPUs and networking solutions. In 2024, the company introduced its Blackwell platform, which includes the GB200 and the rack-scale NVL72 system, specifically engineered for generative AI on a massive scale (Nvidia). Beyond chips, Nvidia provides the networking fabric that connects clusters, ranging from InfiniBand to its advanced Spectrum-X Ethernet for AI applications (Nvidia).

As OpenAI’s key partner, Microsoft is a leading buyer of AI computing power. While Microsoft is in the process of developing its own AI accelerator (Maia) and actively evaluating AMD’s MI300 series, it still relies significantly on Nvidia for training and inference at scale (Microsoft). In essence, even amidst efforts to diversify, Nvidia remains a cornerstone of contemporary AI infrastructure.

How Much Could ‘Stargate’ Add for Nvidia? A Scenario View

While it’s impossible to pinpoint the precise mix of chips, servers, networking, power, and construction costs that ‘Stargate’ will entail, we can outline some reasonable estimates based on publicly available figures and industry standards:

  • Project scale: Reports suggest a potential multi-year budget nearing $100 billion if the project is fully realized (Reuters).
  • Accelerator share: In extensive AI clusters, hardware (like GPUs or dedicated AI chips) often represents the bulk of system costs. A typical planning assumption in hyperscale AI projects is that 50% to 70% of costs are for accelerators and tightly connected server hardware.
  • Nvidia share: Though Microsoft is advancing its Maia accelerator and testing AMD’s MI300X, Nvidia remains the established player. Therefore, a reasonable scenario might expect Nvidia to capture around 40% to 60% of the accelerator spending given the competitive landscape.

Combining these estimates leads to several possible outcomes:

  • Conservative case: $100B total x 50% on accelerators x 40% Nvidia share = approximately $20B in Nvidia hardware revenue spread over several years.
  • Mid-case scenario: $100B x 60% x 50% = roughly $30B to Nvidia throughout the build-out.
  • Upside case: $100B x 70% x 60% = around $42B to Nvidia over multiple years.
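The scenario math above can be sketched in a few lines. Note that the accelerator and Nvidia shares are illustrative planning assumptions from this article, not forecasts:

```python
# Scenario math: total budget x accelerator share x Nvidia share.
def nvidia_revenue_b(total_b, accel_share, nvidia_share):
    """Estimated multi-year Nvidia hardware revenue, in billions of USD."""
    return total_b * accel_share * nvidia_share

# Illustrative assumptions, matching the three cases above.
scenarios = {
    "conservative": (100, 0.50, 0.40),  # ~$20B
    "mid":          (100, 0.60, 0.50),  # ~$30B
    "upside":       (100, 0.70, 0.60),  # ~$42B
}

for name, (total_b, accel, share) in scenarios.items():
    print(f"{name}: ~${nvidia_revenue_b(total_b, accel, share):.0f}B to Nvidia")
```

Changing either share shows how sensitive the outcome is: a ten-point shift in Nvidia's assumed share moves the total by several billion dollars.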

These estimates do not assume Nvidia captures all spending, and they exclude Nvidia’s high-margin networking, software, and services, which could add further revenue. They are multi-year totals rather than immediate quarterly impacts. For context, Nvidia reported $22.6B in data center revenue in its most recent reported quarter (Nvidia Q1 FY2025).

Regarding unit economics, AI accelerators like Nvidia’s H100 have been widely reported to cost tens of thousands of dollars each, depending on configurations and contracts, with full systems costing significantly more (CNBC). The Blackwell-class systems are designed for even larger-scale setups and may command premium pricing.
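To give a rough sense of what those unit prices imply, the sketch below converts the conservative-case budget into an accelerator count. The $30,000 per-unit price is a hypothetical mid-point of the reported "tens of thousands" range, not a quoted figure:

```python
# Rough unit count implied by the conservative case.
budget_usd = 20e9              # conservative-case Nvidia hardware spend
price_per_accelerator = 30_000 # hypothetical mid-point price, not a quote
units = budget_usd / price_per_accelerator
print(f"~{units:,.0f} accelerators")  # on the order of hundreds of thousands
```

Even at this coarse level of approximation, the implied volumes run into the hundreds of thousands of accelerators, which is why supply and lead times matter so much in the timeline discussion below.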

Timeline and Dependencies

If ‘Stargate’ proceeds as planned, financial outlays would likely be phased over several years, aligning with the rollout of new AI models and services as grid power and cooling solutions are secured and supply chains are established. Key points to monitor include:

  • Microsoft’s capex guidance and commentary on AI infrastructure, which have been rising significantly as Azure AI demand grows (Microsoft earnings).
  • Nvidia’s product rollout and supply—especially regarding the availability of Blackwell systems and networking capacity—will influence the pace of cluster construction (Nvidia).
  • OpenAI’s model roadmaps and growth in usage will determine the compute demand. Sam Altman has indicated the need for significantly more compute and a broader chip supply to achieve next-generation AI goals (Reuters).

Key Wild Cards

  • Alternative accelerators: Should Microsoft transition significantly towards its Maia chips or enhance AMD GPU usage, Nvidia’s portion of accelerator spending could diminish. AMD’s MI300X platform is already shipping for extensive inference and training operations (AMD).
  • Power and facilities: Securing grid power, water supplies, and land for large data centers can slow timelines and affect when hardware revenues can be realized.
  • Software efficiency: Breakthroughs in model architectures, sparsity, and compilers can lower the compute required per unit of work, reducing hardware demand.
  • Macro and regulation: Export controls, supply chain disruptions, or permit delays could impact deployment strategies.

Bottom Line

The projected figure surrounding ‘Stargate’ is astronomical, yet the takeaway for investors is simpler: OpenAI’s and Microsoft’s upcoming infrastructure buildout for AI indicates sustained, multi-year demand for advanced AI systems. Even under conservative projections, Nvidia stands to gain tens of billions in cumulative revenue from a project of this magnitude, with further upside potential if Blackwell adoption remains robust and Nvidia’s networking and software stack expands its market share.

FAQs

What is ‘Stargate’ in simple terms?

A proposed mega-scale AI supercomputer and data center campus being considered by OpenAI and Microsoft to power future models and services. It’s a multi-year infrastructure plan rather than a single location (Reuters).

When could Nvidia see revenue from this?

Incrementally, as different phases are approved and developed. Due to lead times for chips, networking, and facilities, the financial impacts are likely to extend over several years rather than manifesting all at once.

Will Microsoft utilize its own Maia chips instead of Nvidia’s?

Microsoft is working on Maia to reduce dependence on third-party suppliers, but at scale today, Nvidia remains a crucial supplier. A realistic expectation is a blend of Nvidia, Microsoft, and AMD hardware (Microsoft).

Could AMD gain more than Nvidia?

AMD has favorable positioning with its MI300X and an improving software ecosystem. It may capture significant market share, especially in inference-heavy setups. However, Nvidia’s ecosystem breadth and maturity still provide an advantage in the most complex, large-scale training tasks (AMD).

How substantial are these AI chips and systems in financial terms?

Individual accelerators have been reported to cost tens of thousands of dollars each, with full server nodes and racks reaching into the hundreds of thousands or even millions, depending on configuration (CNBC).

Sources

  1. Reuters – OpenAI, Microsoft discuss $100 billion ‘Stargate’ AI supercomputer
  2. Nvidia – Blackwell platform announcement
  3. Nvidia – Spectrum-X Ethernet for AI
  4. Nvidia – Q1 FY2025 data center revenue
  5. Microsoft – Introducing Maia and Cobalt accelerators
  6. Microsoft – Earnings materials and capex commentary
  7. CNBC – Nvidia H100 explainer and pricing context
  8. AMD – Instinct MI300X product page
  9. Reuters – Sam Altman chip funding ambitions
