In the realm of data protection, the United States has long been a patchwork of sector-specific laws and state-led initiatives. Despite repeated federal attempts, the country still lacks a comprehensive data privacy framework. To fill the void left by federal inaction, individual states have acted: 20 states currently have comprehensive data privacy laws, and another 6 have enacted narrower consumer privacy bills.
The rapid advancement of artificial intelligence (AI) has sparked not only innovation but also deep concern about how, and by whom, it should be regulated. With no comprehensive federal privacy or AI law in place, states have increasingly stepped into the regulatory void. California, Colorado, and Utah have already taken action.
Attorneys General and State Legislators Assert the Autonomy of State Legislative Authority
Many states are now looking to the Colorado AI Act (2024) as a template for their own AI regulations. But that may soon change. A new federal proposal seeks to preempt state action, raising profound questions about federalism, innovation, and consumer protection.
On April 28, 2025, the One Big Beautiful Bill Act, a broad budget reconciliation bill, was introduced in the House of Representatives. The bill includes a moratorium imposing a 10-year prohibition on states enforcing any state law or regulation addressing AI and automated decision-making systems. If passed, it would prevent state governments from establishing any AI regulations.
The proposed federal moratorium has drawn fierce opposition. On May 16, a bipartisan coalition of 40 state Attorneys General sent a letter to Congress expressing their concerns:
“This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI. Moreover, this bill purports to wipe away any state-level frameworks already in place.
The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI. This bill will affect hundreds of existing and pending state laws passed and considered by both Republican and Democratic state legislatures. Some existing laws have been on the books for many years.
Perhaps most notably, of the twenty states that have enacted comprehensive data privacy legislation, the overwhelming majority included provisions that give consumers the right to opt out of specific kinds of consequential, automated decision-making and require risk assessments before a business can use high-risk automated profiling.
To the extent Congress is truly willing and able to wrestle with the opportunities and challenges raised by the emergence of AI, we stand ready to work with you and welcome federal partnership along the lines recommended earlier.”
Existing State Privacy and AI Laws at Risk
The Attorneys General were joined by a bipartisan, bicameral coalition of more than 30 California state legislators, who stated:
“As representatives of the home of 32 of the world’s 50 leading AI companies, we share the Committee’s goals of promoting innovation and preserving the United States’ role as leader in this space. However, the proposed moratorium – which bears no relationship to the budget – jeopardizes the safety and rights of American citizens, fails to uphold the United States’ legacy of fostering innovation through responsible regulation, and undermines state sovereignty.
We recognize the undesirability of a ‘patchwork’ of disparate state regulations and we support targeted, smart regulation of high-risk artificial intelligence systems at the federal level.
We urge you to pursue a collaborative approach, where federal and state governments craft a robust AI regulatory framework. In the meantime, states can continue to serve their traditional roles as ‘laboratories for devising solutions to difficult legal problems.’”
State Resistance Fails to Stop House Vote
Ignoring the pleas from both Republican and Democratic Attorneys General, on May 22, 2025, the U.S. House of Representatives voted to pass House Resolution 1, the One Big Beautiful Bill Act. The vote was 215-214-1, primarily along party lines. Having passed the House, the bill now heads to the Senate.
The One Big Beautiful Bill Act faces further hurdles in the Senate. The Byrd Rule, a Senate procedural mechanism, prohibits the inclusion of extraneous provisions (those unrelated to spending or revenue) in budget reconciliation bills. Democrats argue that the AI moratorium provision is extraneous and therefore violates this rule. As a result, even if the Act passes the Senate, it may do so without the ban on state AI regulations.
What if the Federal Moratorium Takes Effect?
If the moratorium remains in the Act, citizens will need to rely solely on the federal government to safeguard their interests when it comes to the use of AI. The federal government has been active in this regard: on May 19, the President signed the Take It Down Act into law. This law requires platforms to establish processes through which an identified individual or authorized person can contact the platform and request the removal of intimate visual depictions.
However, prior to the introduction of the One Big Beautiful Bill Act, President Donald Trump signed the Removing Barriers executive order, which calls for federal departments and agencies to revise or rescind all policies, directives, regulations, and other actions taken by the Biden administration that are “inconsistent” with “enhanc[ing] America’s global AI dominance.”
Centralization vs. Decentralization: What’s at Stake?
With the federal government restricting the states’ ability to protect their citizens, the rising expectation is that the federal government will strengthen, not dismantle, existing protections. Absent robust state action, the future of AI oversight in the United States may shift from proactive regulation to reactive crisis management, a risk with profound implications for innovation, privacy, and public trust. The decision to centralize or decentralize AI oversight will shape not just regulatory priorities, but the balance of power between innovation and accountability in the years to come.