Civilians around the world engage daily and easily with artificial intelligence, communicating with chatbot “therapists” and “friends” or creating realistic videos from entirely machine-generated content.
Governments, meanwhile, are racing to keep up with the implications of AI – positive and otherwise – for national security and economic competitiveness as well as for citizen freedoms, privacy, and safety. The challenge centers on whether and how much to regulate this rapidly advancing and lucrative sector, and on how to do so without eroding the democratic, free-market values of individual and entrepreneurial autonomy.
Australia is now the first country to ban social media use for children under age 16. In July, the United Kingdom enacted age verification for accessing pornographic sites. And last year, the European Union passed an AI Act to “foster responsible” development, while addressing “potential risks to citizens’ health, safety, and fundamental rights.”
“Good rules … help prevent disasters,” policy analyst James Lardner noted in a study of 10 landmark regulations in the United States. They arise “out of crisis and struggle, but also … out of the momentum of accomplishment,” and can channel market forces “in more positive directions,” he observed.
U.S. governors and state legislatures have been busy trying to design such AI rules: In 2025, all 50 states introduced AI bills and 38 passed roughly 100 laws. These moves far outpaced action in Washington, where the House of Representatives and President Donald Trump pushed for a 10-year moratorium on state AI laws. The Senate voted this down 99-1 in July, and a November effort to include the provision in the defense bill also failed.
Mr. Trump has announced he will issue an executive order to preempt or override state rules. “You can’t expect a company to get 50 Approvals every time they want to do something,” he posted on Truth Social. The concern has some validity, as business associations and tech industry leaders point out.
Others, however, such as Florida Gov. Ron DeSantis, see this as “federal government overreach.” “Stripping states of jurisdiction to regulate AI is a subsidy to Big Tech” and hampers efforts to protect children and intellectual property, he wrote on the social platform X.
History shows that state regulations often serve as initial guardrails and provide a template for comprehensive federal legislation.
“State-level action has played a significant role in addressing early risks associated with emerging technologies,” according to George Washington University researcher Tambudzai Gundani. “Because these tools are deployed in specific communities, local officials are often the first to hear complaints, see patterns of harm, and respond.”
To regulate AI effectively and devise “good rules,” officials will need both a willingness to learn from local experience and a readiness to listen to industry and federal concerns.