
Amidst a Tug-of-War: The Future of AI Regulation in the United States


A federal initiative that would impose a 10-year moratorium on state and local AI regulation is on the brink of becoming law, with Senator Ted Cruz and other lawmakers pushing for its inclusion in a sweeping GOP budget bill ahead of the July 4 deadline. Advocates for the moratorium, including influential figures like OpenAI's Sam Altman and a16z's Marc Andreessen, contend that a fragmented approach to AI regulation could dampen American innovation, particularly as the global AI race with China intensifies.


Yet, a chorus of dissenters, spanning Democrats and some Republicans, along with labor groups, AI safety nonprofits, and consumer rights advocates, warn that such a provision would hamstring states' ability to enact laws protecting consumers from AI-related harms, effectively granting powerful AI companies free rein with minimal oversight or accountability.

The "AI moratorium" was quietly inserted into the budget reconciliation bill, dubbed the "Big Beautiful Bill," in May. It aims to halt states from enforcing any laws or regulations concerning AI models, systems, or automated decision-making processes for a decade. This move could override existing state AI laws, such as California's AB 2013, which mandates transparency in the data used to train AI systems, and Tennessee's ELVIS Act, safeguarding artists from AI-generated impersonations.

The moratorium's scope is extensive, affecting a multitude of AI-related laws, as evidenced by Public Citizen's database. That database also shows that many state laws overlap, meaning companies that comply with one state's rules often satisfy several others, which complicates the claim that the current landscape is an unmanageable patchwork. For instance, states like Alabama, Arizona, and California have made it illegal or created civil liability for distributing deceptive AI-generated media intended to sway elections.

The moratorium also jeopardizes significant AI safety bills on the cusp of becoming law, such as New York's RAISE Act, which would require large AI labs to publish comprehensive safety reports. Senator Cruz's maneuvering to include the moratorium in the budget bill was nothing short of inventive: since budget bill provisions must have a direct fiscal impact, he revised the proposal to make compliance a prerequisite for states to receive funds from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program.

Cruz's subsequent revision, released on Wednesday, purportedly ties the requirement only to the new $500 million in BEAD funding. However, a closer look at the revised text indicates that it could also withdraw already-committed broadband funding from non-compliant states. Senator Maria Cantwell criticized this approach, stating that it "forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years."

The provision currently finds itself in a stalemate. While Cruz's initial revision passed procedural review, ensuring the AI moratorium's inclusion in the final bill, recent reports suggest that talks have resumed, and the language of the AI moratorium is under active debate. The Senate is expected to engage in intense debate this week on budget amendments, including one that could strike the AI moratorium, followed by a rapid series of votes on the amendments.

Chris Lehane of OpenAI echoed the concerns of the moratorium's proponents, stating that the current piecemeal approach to AI regulation is ineffective and could have "serious implications" for the U.S. as it competes with China for AI supremacy. He cited Vladimir Putin's assertion that the victor in the AI race will shape the world's future direction.

However, a closer examination of existing state laws reveals a different picture. Most state AI laws are not sweeping; they focus on protecting consumers and individuals from specific harms, such as deepfakes, fraud, discrimination, and privacy violations. These laws target AI use in areas like hiring, housing, credit, healthcare, and elections, with disclosure requirements and algorithmic bias safeguards.

TechCrunch has asked OpenAI and other tech giants to name any state laws that have actually impeded their progress, and to explain how burdensome navigating different state laws really is, especially given the potential for AI to automate a wide range of white-collar jobs in the future. No answers have been forthcoming.

Emily Peterson-Cassin of Demand Progress countered the "patchwork argument," noting that companies, even the most powerful, comply with different state regulations routinely. Critics argue that the AI moratorium is less about innovation and more about evading oversight, as Congress has yet to pass any AI regulation laws.

Nathan Calvin of Encode, a nonprofit sponsoring several state AI safety bills, expressed his willingness to support federal AI safety legislation that preempts state laws. However, the AI moratorium, he argues, strips away leverage and the ability to compel AI companies to negotiate.

Notable Republican critics, such as Senator Josh Hawley and Senator Marsha Blackburn, have voiced concerns about states' rights and the provision's impact on protecting citizens and creative industries from AI harms. The opposition extends across party lines, with a recent Pew Research survey indicating that a majority of Americans favor more AI regulation and are skeptical of government and industry efforts in this area.

The Senate's timeline to vote on the bill has been updated, reflecting the ongoing debate and the potential implications of the AI moratorium for the future of AI governance and regulation in the United States.
