[Image: Collage of a GPU data center, a classroom using a chatbot, and a spiral galaxy representing AI-powered simulations]
Article · November 23, 2025

AI Weekender: Nvidia’s Record Earnings, Regulatory Showdowns, Classroom Chatbots, and a Milky Way Created by AI

By Zakariae Ben Allal · Created on Sun Nov 23, 2025

Note: This news roundup reflects events reported on November 22, 2025. Where helpful, we link to primary releases and high-quality coverage for verification.

Introduction

AI didn’t take the weekend off. In recent days, Nvidia posted yet another blockbuster quarter, Washington and Brussels faced off over who controls AI regulation, Greece approved a national pilot for classroom chatbots, and researchers used AI to simulate the Milky Way at a star-by-star level. Here’s what these developments mean and why they matter.

1) Money and Machines: Nvidia’s Record Quarter and a Shifting Supply Chain

Nvidia’s Earnings Keep the AI Momentum Going

  • Nvidia reported $57.0 billion in revenue for Q3 FY2026, marking a 22% increase from Q2 and a staggering 62% year-over-year growth. Data center sales alone reached $51.2 billion, with guidance suggesting $65.0 billion for the upcoming quarter. Demand for Blackwell-class chips remains robust.
  • Coverage in both tech and business media echoed this sentiment, dismissing “AI bubble” concerns even as investors debate whether such growth is sustainable.
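As a quick sanity check, the $57.0 billion headline and the reported growth rates let us back out the prior periods' revenue. The percentages in the release are rounded, so the implied figures below are approximate:

```python
# Back out prior-period revenue from Nvidia's reported Q3 FY2026 figures.
# Growth percentages are rounded in the release, so results are approximate.
q3_revenue_b = 57.0          # $B, quarter ending October 26, 2025
qoq_growth = 0.22            # vs. Q2 FY2026
yoy_growth = 0.62            # vs. Q3 FY2025

implied_q2_b = q3_revenue_b / (1 + qoq_growth)          # ~ $46.7B
implied_prior_year_b = q3_revenue_b / (1 + yoy_growth)  # ~ $35.2B

print(f"Implied Q2 FY2026 revenue: ${implied_q2_b:.1f}B")
print(f"Implied Q3 FY2025 revenue: ${implied_prior_year_b:.1f}B")
```

Both implied figures line up with Nvidia's previously reported quarters, which is a useful cross-check when growth rates and absolute numbers come from different parts of a release.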

Why It Matters: The AI stack is becoming critical infrastructure for cloud services, enterprises, and research institutions. Nvidia’s results send a clear message that training and inference at scale continue to experience hyper-growth.

OpenAI + Foxconn: Partnering on US-Made AI Data Center Gear

  • OpenAI and Hon Hai (Foxconn) announced a collaboration to design multiple generations of AI data-center racks and manufacture key components in US facilities, including cabling, networking, cooling, and power. This agreement provides OpenAI with early evaluation options without any purchase commitment.
  • Production will utilize facilities in Wisconsin, Ohio, and Texas as part of an effort to localize some parts of the AI supply chain.

Why It Matters: If US-based manufacturing of racks and cooling gear scales up, hyperscalers and AI labs could add capacity faster while reducing cross-border supply-chain risk.

Huawei’s Flex:ai Optimizes Existing Accelerators

  • Huawei introduced Flex:ai, a Kubernetes-based orchestration layer that divides a single GPU/NPU into numerous virtual compute units and pools idle accelerators across various nodes. The company claims an average utilization improvement of around 30%, pending independent validation.
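Flex:ai's internals are not public, but the core idea, slicing each accelerator into virtual compute units and scheduling jobs across a pooled cluster so idle capacity anywhere is usable, can be sketched in a few lines. The class and field names below are invented for illustration and are not Flex:ai's actual API:

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    """One physical GPU/NPU, sliced into virtual compute units."""
    node: str
    slices_total: int = 4
    slices_free: int = 4

class PooledScheduler:
    """Toy first-fit scheduler over a cluster-wide accelerator pool."""
    def __init__(self, accelerators):
        self.pool = list(accelerators)

    def allocate(self, job, slices_needed):
        # Search the whole pool, not just the local node, so idle
        # slices on any machine can absorb the job.
        for acc in self.pool:
            if acc.slices_free >= slices_needed:
                acc.slices_free -= slices_needed
                return (job, acc.node)
        return None  # no capacity anywhere: job queues

    def utilization(self):
        total = sum(a.slices_total for a in self.pool)
        used = total - sum(a.slices_free for a in self.pool)
        return used / total

sched = PooledScheduler([Accelerator("node-a"), Accelerator("node-b")])
sched.allocate("small-inference-job", 2)   # lands on node-a
sched.allocate("medium-training-job", 3)   # node-a is too full; lands on node-b
print(f"Pool utilization: {sched.utilization():.0%}")
```

The claimed ~30% utilization gain would come from exactly this effect: fractional jobs that would otherwise strand a whole card now pack into shared slices, and idle cards on other nodes join the pool.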

Why It Matters: With high-end GPUs in short supply, improved scheduling and resource allocation can stretch existing capacity and reduce costs for multi-model research and inference.

The Energy and Materials Reality Behind AI’s Scaling Trends

  • The IEA forecasts that global data-center electricity consumption will more than double by 2030, reaching approximately 945 TWh, driven primarily by AI demand. Sourcing materials such as gallium and copper will become an increasingly critical constraint as the build-out accelerates.
  • Analysts warn that AI data centers are amplifying the demand for copper, needed for power distribution, busbars, and cooling systems.

Takeaway: It’s not just about more chips in 2026. Companies will also need to consider energy requirements, metal sourcing, domestic manufacturing, and software innovations to maximize the efficiency of existing accelerators.

2) Rulemaking Firefights: US Preemption Talks, OECD Reporting, EU Delays

Federal Preemption Faces Resistance

  • Reports indicate the White House has drafted an executive order aimed at directing the Department of Justice to sue states that enact their own AI regulations, arguing for a unified federal standard. This initiative has sparked significant legal and political challenges.
  • Earlier in 2025, the administration framed its AI policies around removing perceived barriers to US leadership while signaling a preference for federal oversight that is less burdensome.

What to Watch: If preemption is successfully enacted, state laws on issues like deepfakes, automated decision-making, and protections for youth could face legal challenges, shaping the future landscape of AI governance.

OECD’s G7 Hiroshima AI Reporting Framework Gains Momentum

  • The OECD has launched a global reporting framework to track the adoption of the G7 Hiroshima AI Process Code of Conduct, with major players such as Amazon, Anthropic, Google, Microsoft, and OpenAI committing to submit inaugural reports in 2025. A recent update highlighted increasing transparency, although the adoption of provenance tools remains inconsistent.

EU Delays Enforcement of High-Risk AI Regulations to 2027

  • The European Commission has proposed postponing the enforcement of significant “high-risk” provisions of the AI Act from August 2026 to December 2027 as part of a “Digital Omnibus.” While officials frame it as a simplification, critics view it as a retreat under pressure from tech companies and allies.

Bottom Line: 2025 ends with a fragmented governance landscape. The US is weighing preemption, the OECD is pushing voluntary transparency, and the EU is delaying enforcement. Companies should prepare for friction in cross-border operations involving foundation models.

3) Classrooms Test Chatbots: Pilots, Promise, and Caution

Greece Trains Teachers and Pilots ChatGPT Edu in Schools

  • Greece is set to train staff at 20 secondary schools on a customized version of ChatGPT Edu, with a nationwide rollout slated for January and monitored student access planned for Spring 2026. Proponents see this as practical preparation for an AI-first economy, while unions and student groups express concerns regarding creativity, screen time, and infrastructure gaps.

Research on Education Technology: Guardrails and Evidence

  • UNESCO’s guidance recommends that nations regulate generative AI in educational settings, establish a minimum age, and focus on teacher training and human-centered design.
  • According to OECD analyses, AI systems' performance on reading and science tasks has improved rapidly, strengthening the case for rethinking assessment methods rather than banning such tools outright.
  • Initial studies of AI tutors reveal potential benefits in student engagement and workflow support, though mixed results on actual learning gains and ethical implications persist, especially without strong oversight.

Practical Takeaway for Schools in 2026: Focus on establishing clear usage policies, prioritizing teacher professional development, ensuring robust age-appropriate safeguards, and effectively measuring educational outcomes instead of imposing blanket bans.

4) Healthcare’s AI Tug-of-War: Insurers and Patients Automate

Regulatory Scrutiny and Automation in Healthcare

  • Courts and regulators are examining the automation of algorithm-driven denials (e.g., Cigna’s PxDx), prompting providers and vendors to create their own tools for quicker, evidence-based appeals, such as Waystar’s automated appeal letter drafting.
  • Reports have documented how automation in prior-authorization and post-acute care decisions can inflate denial rates; many of these denials are later reversed on appeal, prompting state-level proposals to limit AI use or require transparency.
  • Grassroots initiatives are emerging, with startups and nonprofits offering free AI assistants to help patients understand policy language and craft customized appeals.

Striving for Safer AI Assistants

  • Brown University has launched a five-year, $20 million NSF institute (ARIA) to research trustworthy, context-aware AI assistants for mental and behavioral health, with a kickoff event held on November 20-21. Researchers cite findings that one in eight US teens and young adults seek mental health guidance from chatbots, highlighting a pressing need for protective measures.
  • Another Brown review of recent studies found recurrent ethical issues in mental health chatbots, such as poor crisis management, misleading empathy, and bias, underscoring the necessity for human oversight.

5) At the Frontier: AI Constructs a Galaxy, Accelerates Cosmology, and Develops Physical AI

A Star-by-Star Simulation of the Milky Way

  • A RIKEN-led team simulated the Milky Way with over 100 billion individually modeled stars by integrating a physics code with a deep-learning surrogate trained on supernova dynamics. This approach cut the projected simulation time from decades to months.
  • Independent reports highlight that this surrogate-physics model could also expedite multi-scale research in areas such as climate science and oceanography.
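The RIKEN code itself is far more sophisticated, but the general surrogate-physics pattern, train a cheap approximation of an expensive subgrid model once and then call it inside the simulation loop, can be sketched as follows. The `tanh` "physics" and the interpolation-table emulator are stand-ins for illustration, not the actual supernova model:

```python
import bisect
import math

def expensive_feedback(x):
    """Stand-in for a costly fine-scale physics solve (e.g. supernova feedback)."""
    return 0.1 * math.tanh(x)

class Surrogate:
    """Tiny 1-D emulator: tabulate the expensive model once, then interpolate."""
    def __init__(self, lo=-5.0, hi=5.0, n=1001):
        step = (hi - lo) / (n - 1)
        self.xs = [lo + i * step for i in range(n)]
        self.ys = [expensive_feedback(x) for x in self.xs]  # one-time "training"

    def __call__(self, x):
        # Clamp out-of-range inputs, then linearly interpolate between samples.
        if x <= self.xs[0]:
            return self.ys[0]
        if x >= self.xs[-1]:
            return self.ys[-1]
        i = bisect.bisect_right(self.xs, x)
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# In the simulation loop, the cheap surrogate replaces the expensive call:
surrogate = Surrogate()
state = [0.5 * (i % 7) - 1.5 for i in range(1000)]  # toy per-"star" states
for _ in range(10):
    state = [s + surrogate(s) for s in state]
```

Real surrogates use neural networks trained on high-resolution runs rather than lookup tables, but the payoff is the same: the per-step cost drops by orders of magnitude, which is what makes star-by-star galaxy scales reachable.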

Advancements in Cosmology

  • Researchers are developing AI emulators that accelerate parameter inference, in applications ranging from void statistics to power-spectrum evaluation, while maintaining sub-percent accuracy and avoiding black-box behavior.

Physical AI and Robotics Transition from Prototype to Practicality

  • Nvidia’s Omniverse libraries and Cosmos world models are intended to generate synthetic datasets and create high-fidelity digital twins for robotic training; early adopters span robotics and autonomous vehicles.
  • New hardware roadmaps and supercomputing partnerships suggest a future wherein high-precision scientific workloads and AI training share common infrastructures, applicable from quantum device simulations to next-generation national lab systems.

6) The Big Picture: Key Signals from This Week

  • AI has become critical infrastructure. Nvidia’s financial results and new US-based hardware manufacturing initiatives underscore the importance of compute power in digital competitiveness.
  • Governance is complex. The US is exploring preemption while the EU faces delays in enforcement. The OECD is working to establish a connective framework through transparency reporting.
  • Classroom and healthcare AI adoption will depend on evidence-based practices and strong safety measures. As research expands, necessary policies and practices must evolve quickly.
  • In the scientific realm, AI is already redefining computational capabilities, with simulations and emulators reshaping what is achievable.

Practical Takeaways for Leaders

  • Plan capacity comprehensively. Address needs across the entire stack: chips, energy, cooling, and copper. Consider utilization software as a central focus, not a secondary concern.
  • Establish a governance framework. Monitor state legislation, EU timelines, and OECD reporting to prepare for disclosures and provenance expectations in 2026.
  • For educational and healthcare systems: Pair pilot projects with clear key performance indicators and safeguards, ensuring that humans remain involved in critical decisions.

FAQs

Q1: Did Nvidia’s latest results surpass expectations?
A1: Yes, for the quarter ending October 26, 2025, Nvidia reported $57.0 billion in revenue ($51.2 billion from data centers) and projected $65 billion for the next quarter, indicating continued strength in AI infrastructure.

Q2: Is the EU’s AI Act being rolled back?
A2: No rollback has occurred, but enforcement for certain high-risk provisions will be delayed until December 2027. Officials present this as simplification, while critics fear erosion of regulatory strength.

Q3: Do classroom chatbots help students learn?
A3: Evidence remains mixed. While they may enhance engagement and relieve teacher burdens, substantial long-term learning gains are still under evaluation. UNESCO advocates for age limits and proper teacher training; OECD research highlights ongoing assessment challenges.

Q4: What can patients do if insurers use AI to deny care?
A4: Providers and startups are rolling out automated tools for appeals that cite policy text and clinical evidence. Legal actions and state initiatives are demanding greater oversight and transparency.

Q5: What’s innovative about the Milky Way simulation?
A5: Researchers utilized a deep-learning surrogate for supernova feedback, allowing the modeling of over 100 billion stars efficiently. This approach could serve as a model for accelerating multi-scale simulations in other fields.

Conclusion

From financial reports to classroom initiatives and cosmic explorations, AI’s impact is growing. The week underscored an essential pattern: computational resources are expanding, policy frameworks are evolving, educational institutions are experimenting, and scientific practices are accelerating. Leaders looking to thrive in 2026 must not only invest in GPUs; they should prioritize securing power and materials, enhancing utilization and safety measures, and navigating the regulatory currents that determine where and how AI can be deployed.

Thank you for reading, and see you soon! 🙏 👋
