
Are We Ready For AGI? DeepMind Warns No, OpenAI Sees A Gentler Landing
Are We Ready For AGI? DeepMind Warns No, OpenAI Sees A Gentler Landing
Artificial general intelligence, or AGI, is the AI milestone that could perform a wide range of tasks at or beyond human levels. Depending on whom you ask, it is either a near-term upheaval we are not ready for or a technology that will integrate more gradually than many headlines suggest. Recent comments from Google DeepMind and OpenAI highlight that split-screen view.
The headline debate, in two sentences
- Google DeepMind CEO Demis Hassabis and other safety leaders say society is not prepared for AGI-scale systems and call for strong guardrails and institutions to manage risks and misuse.
- OpenAI leaders, including CEO Sam Altman, have argued that the near-term impact on jobs and daily life is likely to be more gradual than catastrophic, giving society time to adapt.
Both views can be true at once: the technology might arrive on a smooth economic curve while still introducing concentrated risks that need planning. Below, we unpack what each side is saying and what the evidence shows.
First, a quick primer: what counts as AGI?
There is no single, universally accepted definition. A common working idea is that AGI would be a system capable of performing a wide range of economically useful tasks at or above human performance, including reasoning, planning, and learning across domains. This is broader than today’s models, which are powerful but still narrow in reliability, tool use, and autonomy.
DeepMind’s warning: powerful AI needs stronger institutions
Demis Hassabis has repeatedly cautioned that AGI-level systems could introduce risks that exceed existing governance and safety practices. His concern is not just about long-run science fiction, but practical issues like misuse, model autonomy, and the concentration of compute and capabilities.
- In 2023, dozens of AI leaders, including Hassabis, signed a one-sentence statement that mitigating AI extinction risk should be a global priority, underscoring the need for serious preparation (BBC).
- Governments have echoed these concerns. The 2023 UK AI Safety Summit produced the Bletchley Declaration, recognizing that frontier AI poses serious risks that require international cooperation (UK Government).
- Regulators are moving: the EU AI Act establishes risk-based rules for high-risk and general-purpose models (European Commission), and the US issued an executive order directing safety testing, reporting, and secure model development for powerful systems (White House).
Under this view, being unprepared means we lack robust safety evaluations, incident reporting, and accountability for the most capable systems. It also means our social systems (education, labor policy, cybersecurity, media literacy) are not yet sized for the speed and scale at which these systems can be deployed.
OpenAI’s take: change may be significant, but more gradual than doomsayers expect
OpenAI leaders have often emphasized the transformative potential of advanced AI while also suggesting that the near-term disruption could be more incremental than catastrophic. At the 2024 World Economic Forum, OpenAI CEO Sam Altman argued that AI will change work, but broad job losses are not inevitable and society will have time to adapt as capabilities improve in steps rather than all at once (CNBC; Reuters).
Several strands of research support a tempered view of short-run disruption:
- Exposure does not equal replacement. A 2023 paper by OpenAI and collaborators found that many jobs have tasks that are exposed to large language models, but exposure largely implies task-level augmentation, not full automation (Eloundou et al., 2023).
- Early productivity results are positive but bounded. Controlled studies show meaningful productivity gains for certain knowledge tasks (for example, customer support and writing), typically in the 10-40% range, with the biggest improvements for less-experienced workers (Noy & Zhang, 2023).
- Macro effects take time. The IMF estimates that while AI will touch most jobs in advanced economies, the net impact depends on adoption, complementary skills, and policy responses, suggesting a ramp rather than a cliff (IMF).
This perspective does not dismiss risks; it argues the timeline of disruption may be more manageable than sudden-shock narratives imply.
So, who is right? Probably both, depending on the lens
These positions emphasize different slices of the same reality:
- Risk lens: A small number of highly capable systems, widely deployed, can create outsized safety risks even if the average workplace impact is gradual. That justifies safety investments, evaluations, and governance scaled to worst-case failures.
- Economic lens: Displacement and productivity gains diffuse over years. Most sectors adopt in waves, constrained by integration costs, regulation, and complementary skills, which favors a gradual adjustment.
Evidence so far fits the gradualism story on productivity and jobs but supports the cautionary view on safety and governance readiness. For example, national and independent labs are only now building standardized test harnesses for dangerous capabilities, jailbreak resistance, and model autonomy (UK AI Safety Institute). Meanwhile, leading labs describe plausible pathways to systemic risks from frontier models if unmanaged (Anthropic).
What being prepared for AGI actually looks like
1) Safety and evaluations
- Independent testing for dangerous capabilities (e.g., autonomous replication, biothreat assistance, critical infrastructure access).
- Standardized red-teaming, incident reporting, and post-incident learning, aligned to frameworks like NIST’s AI Risk Management Framework (NIST AI RMF).
2) Governance and accountability
- Risk-tiered rules for general-purpose and frontier systems (e.g., EU AI Act) with clear responsibilities for developers and deployers.
- Compute transparency and safety reporting for training runs above certain capability thresholds (as outlined in the US AI Executive Order).
3) Social readiness
- Education and workforce programs that focus on AI-complementary skills (reasoning, domain expertise, tool use, oversight).
- Upgraded cybersecurity and content authentication to handle scaled misinformation and automated intrusion attempts.
What you can do now
- For leaders: Pilot safely, measure productivity, and pair deployments with training and change management. Define red lines for use and incident response before rollout.
- For policymakers: Fund independent evaluations, align with emerging international standards, and create rapid feedback loops with labs and infrastructure providers.
- For professionals: Learn to delegate tasks to AI tools, but keep a human-in-the-loop for verification. Track domain-specific guidance from regulators and professional bodies.
Bottom line
DeepMind’s warning and OpenAI’s gradualist view are not mutually exclusive. The economy may adapt in steps, even as the tail risks of a few highly capable systems demand serious, coordinated preparation. If we take safety, governance, and social readiness seriously now, we can capture the upside of advanced AI while reducing the downside.
FAQs
What is AGI, exactly?
AGI refers to AI systems that can perform a broad range of tasks at or above human level, including reasoning, planning, and learning across domains. It is not a single benchmark but a capabilities threshold that would change how we use and govern AI.
Is AGI close?
Timelines vary widely among experts. Some believe major leaps could arrive this decade; others expect longer. Regardless of timing, work on safety evaluations and governance is worth doing now because capability progress has been steady.
Will AGI eliminate most jobs?
Unlikely in the short run. Research suggests many jobs have tasks that AI can assist or accelerate, but full job automation is uncommon. Net effects depend on adoption rates, complementary skills, and policy support.
What are the biggest risks?
Misuse at scale (fraud, disinformation), model autonomy without adequate guardrails, and assistance with dangerous activities. At higher capability levels, systemic risks could include destabilizing cyber or bio threats if not carefully managed.
What policies matter most right now?
Risk-tiered rules for frontier systems, independent safety testing, incident reporting, compute transparency for large training runs, and investment in workforce transition and education.
Sources
- BBC – AI could be as bad as pandemics or nuclear war, say industry leaders (2023)
- UK Government – Bletchley Declaration on AI Safety (2023)
- European Commission – The EU AI Act (2024)
- White House – Executive Order on Safe, Secure, and Trustworthy AI (2023)
- CNBC – Sam Altman says AI won’t eliminate most jobs (2024)
- Reuters – OpenAI’s Sam Altman says AI will not wipe out jobs (2024)
- Eloundou et al. (2023) – GPTs are GPTs: An Early Look at the Labor Market Impact of LLMs
- Noy & Zhang (2023) – Experimental Evidence on the Productivity Effects of Generative AI (Science)
- IMF – GenAI and Jobs: Implications for the Future of Work (2024)
- UK AI Safety Institute – Introducing our evaluations platform (2024)
- Anthropic – Managing AI Risks of Frontier Models (2023)
- NIST – AI Risk Management Framework 1.0 (2023)
Thank You for Reading this Blog and See You Soon! 🙏 👋
Let's connect 🚀
Latest Insights
Deep dives into AI, Engineering, and the Future of Tech.

I Tried 5 AI Browsers So You Don’t Have To: Here’s What Actually Works in 2025
I explored 5 AI browsers—Chrome Gemini, Edge Copilot, ChatGPT Atlas, Comet, and Dia—to find out what works. Here are insights, advantages, and safety recommendations.
Read Article


