
Who Gets to Regulate AI? The Federal vs. State Showdown in America
The conversation around artificial intelligence (AI) has evolved beyond advancements in technology; it is now heavily intertwined with questions of power. Who gets to write the rules that shape markets, elections, jobs, media, and safety? As of late 2025, the United States is embroiled in a critical debate between federal authorities and state governments over who should regulate AI and how far those regulations should reach. The outcome will determine whether businesses adhere to a single national framework or navigate a diverse array of state laws, and it will affect whether consumers receive timely protections.
This guide breaks down this pivotal confrontation, clarifying the stakes for both businesses and individuals, while providing a practical roadmap to navigate the unfolding situation.
TL;DR
- Federal Moves: Congress and the White House are considering actions to limit or override state AI regulations. Several proposals are being discussed, including those tied to defense and budget legislation, alongside a leaked draft executive order. States and various lawmakers are fiercely responding to these attempts.
- State Initiatives: States have taken the initiative; California has enacted SB 53, Colorado passed a comprehensive AI act, Texas established a more focused framework, and New York’s RAISE Act has made it through the legislature and awaits final approval.
- Bipartisan Resistance: A proposal to freeze state AI laws for a decade was overwhelmingly rejected in the Senate (99-1), signifying strong bipartisan opposition to overriding state laws without an established federal standard.
- Industry Concerns: Major tech companies and pro-AI political groups argue that having 50 different regulations could hinder innovation and reduce U.S. global competitiveness. Their advocacy efforts are substantial and increasing.
How We Arrived Here
In 2023-2024, federal actions regarding AI mostly revolved around hearings, frameworks, and voluntary commitments. Meanwhile, state legislators began passing focused measures aimed at issues like deepfakes, job discrimination, and transparency. Notably, Colorado enacted the first comprehensive AI law in the U.S. concentrating on high-risk systems. After a veto in 2024, California returned in 2025 with SB 53, which emphasizes transparency, safety incident reporting, and whistleblower protections for companies developing cutting-edge AI.
State Laws in Action
- Colorado AI Act (SB24-205): This law mandates developers of high-risk AI systems to proactively address algorithmic discrimination, conduct impact assessments, and disclose AI usage in consumer interactions. Its effective date has been moved to June 30, 2026, allowing for both industry and regulatory preparation.
- California SB 53: This regulation requires large AI developers to make public safety disclosures, implement mechanisms for reporting vital safety incidents, and offers whistleblower protection. It also supports CalCompute, a public computing initiative.
- Texas Responsible AI Governance Act (TRAIGA): This focused law prohibits certain harmful AI applications, establishes a statewide sandbox for testing, consolidates enforcement under the Attorney General’s office, and overrides local AI regulations. It is set to take effect on January 1, 2026.
- New York’s RAISE Act: This act mandates that major AI developers maintain safety plans for significant risks and report serious incidents, with enforcement led by the Attorney General. Its passage awaits the governor’s approval.
These measures, while varied, collectively lay the groundwork for a regulatory framework around AI in America focusing on risk management, transparency, incident reporting, and targeted restrictions on disinformation and bias.
Washington’s Response
As of late 2025, two developments are shaping the federal landscape:
- Defense and Budget Legislation: Congressional leaders are contemplating integrating AI preemption measures into must-pass bills, reigniting discussions on restricting state laws. Following the Senate's July vote (99-1) to eliminate a proposed moratorium on state AI regulations, it became clear that Congress is cautious about limiting state action without proposing a significant federal alternative.
- Leaked Executive Order: Reports indicate that the White House is considering an executive order aimed at overriding state regulations, which may involve establishing an AI Litigation Task Force at the Department of Justice and nudging agencies like the FCC and FTC to develop standardized federal guidelines. Such actions could trigger legal battles over states' rights.
In tandem, the federal government has launched the Genesis Mission, a prominent initiative led by the Department of Energy aimed at enhancing scientific advancement through AI, reinforcing the argument for a coordinated national approach.
Controversy Surrounding Preemption
- Consumer Vulnerabilities: The lack of comprehensive federal AI safety legislation means that overriding state laws could leave significant gaps in protections against issues such as deepfakes, fraud, and discrimination. This concern has led attorneys general from various states to urge Congress against blocking state initiatives.
- State Leadership: In the absence of federal regulations, states have proactively constructed various protective measures. California’s focus on transparency, Colorado’s attention to discrimination, and Texas’s sandbox approach illustrate this diverse landscape. A sweeping federal override would eliminate these trailblazing efforts.
- Legislative Hesitance: The Senate’s overwhelming 99-1 decision against the moratorium further highlighted the sentiment that Congress should not stifle state protections without a robust national framework.
Industry Perspective: The Call for a Unified Standard
Key players in the AI industry and major investors are voicing concerns that differing regulations across 50 states would complicate compliance and slow down innovation, particularly for emerging startups. New political action committees (PACs) and advocacy groups advocating for a unified national standard have emerged, with entities like Leading the Future reporting over $100 million in fundraising to support pro-AI candidates and challenge stricter state-level regulations.
Whatever one makes of the motivations behind this push, it is undeniable that organized, well-funded advocacy for a single federal framework now significantly shapes AI policy.
A Potential Federal Framework
The bipartisan House AI Task Force, chaired by Representatives Ted Lieu and Jay Obernolte, is working on ideas that could eventually develop into a package of bills encompassing safety for children, transparency requirements, and protocols for high-risk AI deployments. However, any comprehensive legislation will take time to materialize.
In the interim, Congress advanced the narrower TAKE IT DOWN Act, which addresses non-consensual deepfakes, demonstrating that targeted bipartisan initiatives can still proceed even as broader frameworks are still in development.
An effective national framework that respects state rights might include:
- Baseline Safety Standards: Safety and transparency guidelines for developers, aligned with NIST's AI Risk Management Framework and international best practices.
- Incident Reporting Protocols: Procedures to report critical AI failures or misuse with clearly defined criteria and deadlines.
- Targeted Regulations: Specific rules applicable to high-stakes AI in areas such as housing, employment, and public services, along with rights for appeals against automated decisions.
- Disclosure on AI-generated Content: Clear requirements for disclosing AI-generated political material and synthetic media during election campaigns.
- High-Risk Capability Guardrails: Safeguards for particularly sensitive capabilities, such as autonomous cyber operations, with protection for good-faith compliance efforts.
- Federal-State Enforcement Collaboration: Preserving state attorney general authority in areas where federal guidelines are minimal.
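To make the incident-reporting idea above concrete, here is a minimal sketch of what a structured incident record with severity-based deadlines might look like. The field names, severity tiers, and hour thresholds are illustrative assumptions, not drawn from any statute or bill text.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical severity tiers; an actual statute would define its own criteria.
SEVERITY_DEADLINES_HOURS = {"critical": 24, "serious": 72, "minor": 240}

@dataclass
class IncidentReport:
    """Illustrative record for a reportable AI safety incident."""
    system_name: str
    severity: str            # one of the SEVERITY_DEADLINES_HOURS keys
    description: str
    occurred_at: datetime
    jurisdictions: list = field(default_factory=list)  # e.g. ["CA", "CO"]

    def reporting_deadline(self) -> datetime:
        """Deadline derived from severity; the hour values are placeholders."""
        hours = SEVERITY_DEADLINES_HOURS[self.severity]
        return self.occurred_at + timedelta(hours=hours)

# Usage: a critical incident must be reported within the assumed 24-hour window.
incident = IncidentReport(
    system_name="loan-scoring-v2",
    severity="critical",
    description="Model produced systematically biased denials.",
    occurred_at=datetime(2026, 1, 15, 9, 0, tzinfo=timezone.utc),
    jurisdictions=["CA", "CO"],
)
print(incident.reporting_deadline().isoformat())
```

The value of clearly defined criteria and deadlines is exactly what this sketch shows: once the tiers are fixed, "when must we report?" becomes a mechanical question rather than a judgment call made during a crisis.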
State Models to Observe
- California’s SB 53: Emphasizes transparency by requiring safety plan disclosures and reporting of critical incidents. It also supports the CalCompute initiative designed to enhance access to compute resources for research.
- Colorado’s AI Act: Establishes a risk-based framework governing high-risk AI applications, with developer obligations and a prominent role for the Attorney General. Implementation timelines have been extended to mid-2026 to refine guidelines.
- Texas TRAIGA: Bans harmful uses such as unlawful deepfakes and discrimination, sets up a statewide sandbox, consolidates enforcement authority, and overrides local ordinances. It becomes effective on January 1, 2026.
- New York’s RAISE Act: Mandates that major developers maintain safety plans and report serious incidents, empowering the Attorney General to impose civil penalties, pending gubernatorial approval.
While these state-level approaches vary in details, they converge around fundamental principles of risk management, transparency, incident reporting, and targeted restrictions. The differences arise in the extent, thresholds, and enforcement methods employed.
Understanding the NDAA and Executive Order Implications
- NDAA Connection: The National Defense Authorization Act often serves as a vehicle for unrelated policy negotiations. Legislative proposals to curb state AI laws have resurfaced in 2025, particularly after the Senate’s significant decision to strip a decade-long moratorium.
- Details on the Executive Order: Reports concerning a potential draft executive order revealed intentions to challenge state laws while pushing for uniform national standards, a move that’s likely to face legal challenges in courts.
Implications for Companies in 2026 Planning
As Congress works towards a federal standard, businesses should prepare for multifaceted compliance across jurisdictions. Here’s a practical checklist:
- Assess Your AI Ecosystem:
  - Create an inventory of models and tools categorized by use case and jurisdiction.
  - Highlight high-stakes decisions such as those impacting hiring, lending, housing, and education.
- Develop a Risk Management Program:
  - Align with the NIST AI RMF and ISO/IEC 23894 standards for governance, documentation, human oversight, and model monitoring.
  - Conduct pre-deployment as well as periodic bias and robustness tests for critical applications.
- Ensure Proper Disclosures and Consent:
  - Prepare clear consumer notices where required.
  - Disclose AI involvement when engaging with end users, adhering to state rules like Colorado's.
- Get Ready for Incident Reporting:
  - Define what qualifies as a "critical safety incident" in your business context and rehearse reporting workflows in line with SB 53 requirements.
- Enhance Data Governance:
  - Track data provenance, retention, and access protocols for training and fine-tuning.
  - Document handling of sensitive and copyrighted data as part of your risk management strategy.
- Plan for Elections and Synthetic Media:
  - Establish internal protocols around political deepfakes and watermarking where possible.
- Stay Informed on Policy Developments:
  - Monitor decisions regarding New York's RAISE Act and Texas's sandbox regulations.
  - Track federal developments related to preemption and House AI Task Force initiatives.
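The inventory step in the checklist above can be sketched in a few lines of code. This is a minimal illustration, not tied to any specific statute: the model names are hypothetical, and the high-stakes categories simply mirror the areas the article mentions (hiring, lending, housing, education).

```python
# Use cases that state laws tend to treat as high-risk; illustrative only.
HIGH_STAKES_USE_CASES = {"hiring", "lending", "housing", "education"}

# A toy inventory of deployed AI systems, keyed by use case and jurisdiction.
inventory = [
    {"model": "resume-screener-v3", "use_case": "hiring", "states": ["CO", "NY"]},
    {"model": "chat-support-bot", "use_case": "customer_service", "states": ["CA", "TX"]},
    {"model": "credit-limit-model", "use_case": "lending", "states": ["CA", "CO", "NY"]},
]

def high_stakes_systems(systems):
    """Return entries whose use case falls in a high-stakes category,
    since those trigger the heaviest state-law obligations."""
    return [s for s in systems if s["use_case"] in HIGH_STAKES_USE_CASES]

for system in high_stakes_systems(inventory):
    print(system["model"], sorted(system["states"]))
```

Even a spreadsheet-level inventory like this makes the rest of the checklist tractable: once each system is tagged by use case and jurisdiction, mapping it to Colorado, California, Texas, or New York obligations becomes a lookup rather than a scramble.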
The Political Landscape
States are actively pursuing their interests. A bipartisan group of attorneys general has urged Congress not to block state laws, emphasizing the necessity for regulation in light of significant risks associated with unregulated AI. Political dynamics are multifaceted: civil libertarians, child safety advocates, and state-rights proponents often find common ground against harsh federal preemption, while some in national security and tech advocacy support a unified federal approach.
Meanwhile, well-funded PACs and advocacy groups within the industry are reshaping the debate, asserting that a fragmented regulatory landscape could hinder competition and innovation compared to global competitors. Regardless of perspective, their influence is already affecting electoral races and legislative outcomes across statehouses.
Three Possible Paths Forward for 2026
- Federal Framework with Limited Preemption: Congress may establish safety and transparency standards that only preempt direct conflicts, preserving avenues for state innovation while introducing national incident reporting and disclosure protocols.
- Soft Preemption via Agency Actions: Federal agencies, spurred by executive action, could set standards that unintentionally supersede state regulations in specific areas, likely leading to swift legal contests.
- Enduring State-led Models: If federal attempts falter, states might refine their laws independently, leading companies to navigate a patchwork of regulations similar to data privacy. States like Colorado and California could continue to innovate, while Texas iterates its sandbox model and New York moves forward on the RAISE Act.
Implications for Citizens
Every detail counts. Disclosures and incident reports may seem technical, but they are essential mechanisms that help identify risks before they escalate. State attorneys general can respond more swiftly to local issues than federal entities. Conversely, a coherent federal standard offers clarity and consistency, especially for companies operating nationally. The ideal outcome would be a federal baseline that raises the standard while still allowing states to implement specific protections.
Conclusion
The race to regulate AI is no longer a question of “if” but rather “who” and “how.” Washington seeks to impose national guidelines, while states advocate for the autonomy to safeguard their residents. The Senate’s resounding 99-1 vote against a moratorium elucidates the prevailing sentiment that states should not be sidelined without a solid federal plan. Until a comprehensive national framework is established, expect a continually evolving landscape of state regulations, heightened industry lobbying for a uniform standard, and companies strengthening their AI governance practices to ensure compliance across the board.
Prioritize enduring practices such as risk management, thorough documentation, human oversight, robust testing, and transparency. These principles will remain relevant, irrespective of the outcome in this jurisdictional tug-of-war.
FAQs
1) What is the connection between the NDAA and AI regulation?
The National Defense Authorization Act is crucial to pass, and legislators often add unrelated policy riders to it. In 2025, interest in inserting language to limit state AI regulations arose, sparking renewed discussions following the Senate’s July decision to remove a similar moratorium.
2) Did the White House genuinely consider opposing states’ AI laws?
Reports from major news outlets disclosed a draft executive order that would establish a DOJ task force to contest state AI laws and encourage federal agencies like the FCC and FTC to develop standards that would supersede state regulations. This draft raised significant controversy and was reported to be on hold.
3) Which state AI laws are currently in effect or on the horizon?
California’s SB 53 is now law, facilitating transparency and incident reporting for premier developers. Colorado’s AI Act takes effect in 2026 after an extension, while Texas’s TRAIGA is set to start on January 1, 2026. New York’s RAISE Act awaits the governor’s approval after legislative passage.
4) Is there any federal AI legislation that has been enacted?
Congress has passed narrower measures such as the TAKE IT DOWN Act targeting non-consensual deepfakes. Comprehensive standards for AI safety or transparency are still forthcoming.
5) How are companies preparing amid this uncertainty?
Businesses are aligning with NIST’s risk management protocols, conducting bias and robustness testing for significant applications, establishing incident reporting procedures, tightening data governance, and preparing necessary consumer disclosures in compliance with state laws. Companies that operate nationwide are adopting strategies that account for the most stringent overlapping requirements.