
Can Google Outspend Microsoft on AI? What DeepMind’s CEO Really Means
The AI race is increasingly a capital race. In recent comments covered by The Next Web, DeepMind CEO Demis Hassabis signaled that Google aims to outpace Microsoft in artificial intelligence investment over the long run. That's a bold statement in a market where both companies are already spending tens of billions of dollars a year on chips, data centers, and model development. What does this claim really mean, and what should entrepreneurs and professionals watch next?
The big idea: investment will shape who leads in AI
Hassabis's message, as reported by The Next Web, is straightforward: Alphabet (Google's parent) plans to invest aggressively in the compute infrastructure and research needed to build frontier AI. Google has already rolled out its Gemini family of models across products like Search, Workspace, and Android, and it's building custom chips and massive data centers to support the next generation of AI.
Microsoft, meanwhile, is funding AI via its deep partnership with OpenAI and by expanding Azure's global infrastructure. Both companies are scaling at unprecedented speed, and the outcome will hinge on who can secure compute, talent, and customers fastest.
Follow the money: capex, chips, and cloud capacity
Alphabet is ramping capex for AI
Alphabet told investors its capital expenditures would be meaningfully higher to meet AI demand, primarily driven by servers and data centers. The company's CFO reiterated throughout 2024 that AI infrastructure would remain a top investment priority as Google expands availability of Gemini across products and Google Cloud. Source.
Microsoft is scaling Azure for AI workloads
Microsoft guided that its AI-related capital spending would keep increasing sequentially to meet surging demand for Azure AI services and OpenAI workloads. In short: the spend curve is still up and to the right. Source.
The chip angle: Nvidia, TPUs, and custom silicon
- Google is expanding its in-house Tensor Processing Units (TPUs) and AI supercomputers. At Cloud Next, the company highlighted its AI Hypercomputer architecture and broader availability of TPU v5p, alongside new infrastructure options for training and inference. Source.
- Microsoft is diversifying beyond Nvidia with its own Azure Maia AI accelerator and Cobalt CPU while continuing to deploy massive volumes of Nvidia GPUs. Source.
Custom chips can lower cost-per-token and reduce dependence on constrained GPU supply. Whoever delivers the best price-performance at scale gains a decisive advantage for both training and serving large models.
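To make the price-performance point concrete, here is a back-of-the-envelope sketch of how an accelerator's hourly cost and throughput translate into serving cost per million tokens. All figures below are hypothetical illustrations, not published pricing or benchmark numbers.

```python
# Hypothetical illustration: how accelerator price-performance translates
# into serving cost per million tokens. All numbers are made up for the
# sake of the arithmetic, not real GPU/TPU pricing or throughput.

def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Serving cost for one million tokens, assuming full utilization."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Two hypothetical accelerators: a GPU instance vs. a custom chip that is
# cheaper per hour even though each device is somewhat slower.
gpu_cost = cost_per_million_tokens(hourly_cost_usd=4.00, tokens_per_second=1500)
custom_cost = cost_per_million_tokens(hourly_cost_usd=2.50, tokens_per_second=1200)

print(f"GPU:    ${gpu_cost:.2f} per 1M tokens")     # ~$0.74
print(f"Custom: ${custom_cost:.2f} per 1M tokens")  # ~$0.58
```

Even with a lower per-device throughput, the cheaper chip wins on cost-per-token here, which is exactly the lever both companies are pulling with in-house silicon.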
Why the spend matters: models, margins, and moats
Better models and faster iteration
More compute enables larger training runs, longer context windows, and more frequent refreshes. Google's Gemini 1.5 signaled the company's push into long-context reasoning and multimodality across text, images, audio, and code. Source. Microsoft benefits from OpenAI's roadmap (e.g., GPT-4-class models and beyond) integrated across Copilot and Azure.
Cloud and productivity monetization
- Cloud: AI workloads drive demand for premium compute instances, managed model endpoints, vector databases, and orchestration tools. Margins can be attractive once data centers are utilized at scale.
- Productivity: Both firms layer AI assistants into Office/Workspace, Search/Ads, and developer tools. Upgrades like Copilot and Gemini for Workspace create recurring revenue.
- Platform effects: The more third-party developers build on Azure OpenAI Service or Google Cloud's Vertex AI, the stronger each ecosystem's moat becomes.
The constraint: power and energy
AI demands enormous electricity and cooling. The International Energy Agency estimates that by 2026, electricity consumption from data centers, AI, and crypto could roughly double compared with 2022 levels, underscoring the need for efficient chips and renewable energy. Source.
Google vs. Microsoft: where each stands now
- Strategy: Google pursues end-to-end integration (models, TPU hardware, cloud, apps). Microsoft amplifies via OpenAI's research engine and Azure's global distribution.
- Supply chain: Both race to secure Nvidia GPUs while bringing their own chips online to control costs and capacity.
- Distribution: Microsoft leverages enterprise standardization on Microsoft 365 and Windows. Google leans on Search, Android, YouTube, and strong developer adoption in Google Cloud.
- Safety and governance: Both companies participate in emerging rules, from EU AI Act obligations to voluntary safety commitments in the U.S., shaping how frontier systems are trained and deployed. EU AI Act.
How Hassabis's claim fits the bigger picture
When the DeepMind chief suggests Google will outpace Microsoft in AI investment, it's less about any single quarter and more about trajectory and intent. Practically, it means:
- Google may prioritize larger and more frequent training runs for frontier models, plus aggressive deployment into consumer and cloud products.
- Expect faster rollouts of TPU-based infrastructure and AI-optimized data centers.
- R&D and acquisitions that shore up multimodal reasoning, agentic capabilities, and safety evaluation.
Microsoft won't sit still. Its long-term partnership with OpenAI gives it a steady flow of cutting-edge models, and Azure's scale helps convert that research into enterprise revenue. The company's 2023 announcement that it would extend its partnership with OpenAI, followed by rapid Copilot integration across the Microsoft stack, shows how quickly it can commercialize advances. Source.
What this means for entrepreneurs and teams
Opportunities
- Falling inference costs: As custom accelerators ramp, cost-per-token should trend down, enabling richer AI features at similar budgets.
- More managed services: Both clouds will keep shipping tools that shrink time-to-value (fine-tuning, RAG, vector stores, evaluation, observability).
- Better multi-modal: Expect improved handling of long documents, images, video, and audio, and tighter integrations with office suites and developer platforms.
Risks
- Vendor lock-in: Deep integration with a single cloud can slow portability. Favor open formats, abstraction layers, and multi-cloud when possible.
- Model drift and reliability: Build evaluation and monitoring into your stack from day one.
- Compliance load: The EU AI Act and sector rules (health, finance, consumer protection) will require documentation, testing, and risk controls for higher-risk use cases. Source.
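The "evaluation and monitoring from day one" advice can start very small. Below is a minimal sketch of a golden-set regression check, assuming you wrap your provider's API behind a single `generate` function; the test cases, threshold, and stubbed model here are all illustrative placeholders, not a real endpoint.

```python
# Minimal model-regression check: run a fixed "golden set" of prompts and
# verify the pass rate before shipping a model or prompt change. The
# `generate` function below is a stub standing in for a real endpoint
# (Azure OpenAI, Vertex AI, etc.); the cases and threshold are examples.

GOLDEN_SET = [
    {"prompt": "What is 2 + 2?", "must_contain": "4"},
    {"prompt": "Name the capital of France.", "must_contain": "Paris"},
]

def generate(prompt: str) -> str:
    # Stubbed responses; replace with a call to your provider's API.
    canned = {
        "What is 2 + 2?": "2 + 2 equals 4.",
        "Name the capital of France.": "The capital of France is Paris.",
    }
    return canned.get(prompt, "")

def pass_rate(cases) -> float:
    """Fraction of golden-set cases whose output contains the expected text."""
    passed = sum(1 for c in cases if c["must_contain"] in generate(c["prompt"]))
    return passed / len(cases)

rate = pass_rate(GOLDEN_SET)
assert rate >= 0.9, f"Model regression: pass rate {rate:.0%} below threshold"
print(f"Eval pass rate: {rate:.0%}")
```

Running a check like this in CI whenever you swap models or edit prompts catches drift before your users do.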
Practical next steps
- Budget on a glidepath: Assume AI unit economics improve quarterly, but also budget buffers for bursty usage and new features.
- Benchmark across providers: Compare price-performance on your workloads (context length, modalities, latency) across Azure and Google Cloud.
- Use startup programs: Apply for cloud credits and technical support while you validate product-market fit. See Microsoft for Startups Founders Hub and Google for Startups Cloud Program.
- Design for safety early: Adopt internal model cards, red-teaming routines, and human-in-the-loop controls for sensitive flows.
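The "benchmark across providers" step can be sketched as a tiny harness that runs the same workload against each endpoint and compares mean latency. The two provider functions below are stubs with simulated delays; in practice you would swap in real Azure and Google Cloud client calls and also record token throughput and cost.

```python
# Sketch of a cross-provider benchmark harness. The provider calls are
# stubs with simulated latency (time.sleep); replace them with real client
# calls to the endpoints you are evaluating.
import time

def call_provider_a(prompt: str) -> str:
    time.sleep(0.01)  # stand-in for network + inference latency
    return "response from A"

def call_provider_b(prompt: str) -> str:
    time.sleep(0.02)  # a hypothetically slower endpoint
    return "response from B"

def benchmark(fn, prompts) -> float:
    """Mean seconds per request for `fn` over the given prompts."""
    start = time.perf_counter()
    for p in prompts:
        fn(p)
    return (time.perf_counter() - start) / len(prompts)

prompts = ["Summarize this contract."] * 5
for name, fn in [("provider A", call_provider_a), ("provider B", call_provider_b)]:
    print(f"{name}: {benchmark(fn, prompts) * 1000:.1f} ms/request (stubbed)")
```

Run the same harness with your real prompts, context lengths, and modalities; synthetic benchmarks rarely match the price-performance you see on your own workload.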
What to watch next
- Next-gen chips: The speed of Nvidias and custom silicon rollouts will shape cost and capacity curves.
- Frontier model updates: Look for longer context, lower latency, and stronger tool/agent performance from both ecosystems.
- Regulatory clarity: Implementation timelines for the EU AI Act and emerging U.S. guidance will influence rollout strategies.
- Ecosystem momentum: Which platform wins developers and enterprise pilots often predicts who wins the next budget cycle.
Bottom line
Hassabis's statement is a signal that Google intends to push hard on the AI frontier, from model research to data center buildouts. But Microsoft is equally aggressive, with OpenAI-powered products and a rapidly scaling Azure. For customers and builders, that's good news: more competition, faster innovation, and likely lower costs over time. The winners won't be decided by spending alone, but by how efficiently that capital turns into reliable, affordable, and useful AI.
FAQs
Is Google already spending more than Microsoft on AI?
Not consistently. Both companies are investing at massive, rising levels, and quarterly capex can vary. The key takeaway is intent: both are committed to spending tens of billions of dollars per year on AI infrastructure over the long term.
Why does AI require so much capital?
Training and running large models require specialized chips, high-performance networking, vast amounts of storage, and energy-efficient data centers. Building and operating that stack is extremely capital-intensive.
Does more spending guarantee better AI?
No. Spending accelerates iteration and scale, but research quality, data curation, evaluation, safety practices, and product integration determine real-world impact.
Will AI costs go down for businesses?
Over time, yes. As supply increases (more chips, more data centers) and models become more efficient, unit costs typically fall. In the short run, pricing can be volatile.
How do regulations factor into AI investment?
Rules like the EU AI Act and U.S. safety commitments influence how frontier models are trained, evaluated, and deployed. Compliance adds costbut also trust and market access.
Sources
- The Next Web: Google will outpace Microsoft in AI investment, DeepMind CEO says
- CNBC: Alphabet Q1 2024 earnings and capex outlook
- CNBC: Microsoft Q4 2024 earnings and AI capex guidance
- Google Cloud Blog: Cloud Next 24 AI updates (AI Hypercomputer, TPU v5p)
- Google: Gemini 1.5 and long-context updates
- Microsoft: OpenAI partnership announcement (2023)
- International Energy Agency: Electricity 2024 (data centers, AI, energy)
- European Parliament: EU AI Act