A bold shift: President Trump signs an executive order to stop states from enforcing their own AI rules.
The move, announced in the Oval Office, aims to create a single federal pathway for AI governance in the United States. Trump said the administration wants one unified approval process, while White House AI adviser David Sacks emphasized that the measure would empower federal officials to curb the most burdensome state regulations, though he noted that rules protecting children’s safety would not be blocked.
Supporters, including major tech leaders, see the order as a significant step toward national AI legislation that could cement the U.S. lead in a rapidly evolving sector. Industry leaders have warned that a patchwork of state-by-state rules could slow innovation and hinder America’s ability to compete with China, even as private firms pour substantial investment into AI.
OpenAI, Google, Meta, and Anthropic were asked for comment, but none had responded at press time.
Opposition to the executive order is significant. California, home to several global tech giants, already enforces its own AI safeguards. California Governor Gavin Newsom, an outspoken critic of Trump, condemned the move, accusing the president of pursuing personal and political gain. In his view, the order seeks to override state laws designed to shield the public from unregulated AI technology.
Earlier this year, Newsom signed legislation requiring the largest AI developers to disclose risk-management plans for their models. Other states, including Colorado and New York, have enacted AI-related regulations as well. Newsom has framed California’s policies as a benchmark that U.S. lawmakers could follow.
Critics of the federal action argue that state safeguards remain essential in the absence of strong national rules. Advocates from Mothers Against Media Addiction, among others, contend that stripping states of their authority to implement protections weakens residents’ rights to tangible guardrails against AI-related risks.
Bottom line: this is a defining clash over how the United States should balance innovation with safety, federal leadership with state autonomy, and national strategy with local safeguards. What should take precedence in America’s AI future: a unified federal standard or robust state-level protections that reflect local needs? Share your take in the comments.