Battle rages over US's first binding AI safety bill in California

NEW YORK - California is poised to enact the United States' most significant piece of legislation around artificial intelligence, which could help reshape governance of the technology worldwide.

The bill, called SB 1047, would be the first binding U.S. legislation aimed squarely at preventing AI-linked catastrophes, representing a significant break from the United States' traditionally hands-off approach to regulating the industry.

It recently cleared California's legislature with a large majority and is now entirely in the hands of Governor Gavin Newsom, who has until Sept. 30 to sign or veto it.

But the legislation has exposed a deep divide across the tech industry and the political establishment, upending the usual coalitions. Critics worry it will hurt innovation in California and complain it uses liability to keep companies in check, while providing only vague rules for them to follow. Some say such regulation should be implemented at the federal level.

Former House Speaker Nancy Pelosi has allied with Trump-supporting venture capitalists such as Marc Andreessen to kill the bill; billionaire Elon Musk has endorsed it, lining up with powerful labor groups and the National Organization for Women (NOW).

Asked to comment on SB 1047, Newsom's office told the Thomson Reuters Foundation they "don't typically comment on pending legislation."

The governor, however, broke his silence on the bill this week, saying he was worried about the "outsized" impact the legislation could have.

"(AI) is a space where we dominate, and I want to maintain our dominance… At the same time, you feel a deep sense of responsibility to address some of those more extreme concerns,” he told the Salesforce conference in San Francisco.

He told the LA Times he had not yet made a decision on the bill.

The United States has lagged behind global efforts to regulate the technology. The EU approved its sweeping AI Act in 2024, Britain's new Labour government has pledged to introduce "binding regulation" on top AI companies, and experts say China has taken a rapidly growing interest in safeguarding against the technology's extreme risks.

Deeply divisive

Authored by California Senator Scott Wiener, the legislation would primarily require developers of the next generation of AI models like ChatGPT to implement safety measures designed to prevent catastrophes, defined as incidents resulting in at least $500 million in damage or mass casualties.

The bill would apply to AI firms that do business in California and develop models that cost at least $100 million to train. It would also allow California's attorney general to sue developers of covered AI models that cause disasters if they did not implement appropriate safeguards.

President Joe Biden's October 2023 executive order on artificial intelligence is considered more wide-reaching, but it does not include SB 1047's enforcement provisions.

Leaders of top AI companies have frequently warned about the existential threat of AI and asked to be regulated, but many have criticized the legislation.

Tech giants OpenAI, Meta, and Google all wrote letters to California lawmakers expressing their opposition, warning the bill could prompt AI developers to leave the state.

"S.B. 1047’s uncertain definitions and harsh penalties, tied to unreasonable foreseeability and liability provisions, are likely to drive AI innovation out of California," said Google in a letter from July.

A similar argument was made by eight Democratic members of Congress from California in a letter to Newsom in August. They also said the bill did not address the imminent risks from artificial intelligence, including misinformation and discrimination.

Pelosi's opposition appears to mark the first time she has publicly stood against state-level legislation authored by a member of her own party.

Wiener has sought to address the criticism, saying in an open letter in August that "SB 1047 addresses severe and very tangible risks to national security and public safety."

He told the Thomson Reuters Foundation that the bill was "very flexible", not vague, so that companies could continue to innovate and safeguards could evolve along with the industry. While he too would prefer AI to be regulated at the federal level, he said California needed to take action in the absence of federal rules.

"We need to be clear that Meta and all of the other large (AI) labs have all committed to perform safety testing, which is the heart of this bill," said Wiener.

Dario Amodei, CEO of AI start-up Anthropic, said on a podcast that the idea that firms would leave California because of SB 1047 was "just theater". California is the world's fifth-largest economy and is home to the top five generative AI companies.

Some of the critics also appeared to have financial ties to the industry.

A 2023 financial disclosure report for Nancy Pelosi showed her husband owned between $16 million and $80 million in stocks and options in Google, Amazon, Microsoft, and Nvidia.

When asked for a comment, her office said "Speaker Pelosi does not own any stocks, and she has no prior knowledge or subsequent involvement in any transactions."

Prominent AI researcher Fei-Fei Li argued that SB 1047 would "harm our budding AI ecosystem" in a Fortune opinion piece that was cited by Pelosi in her statement.

Li launched a billion-dollar AI startup in April with significant financial backing from Andreessen Horowitz, a leading AI investor behind a campaign against SB 1047.

Asked whether this posed a conflict of interest, Li said in an email: "On the contrary, it is my firm belief that a healthy technology ecosystem is good for our country and our world."

Existential threat?

The central debate is over whether near-future AI systems could cause the disasters the bill aims to prevent, said Dean Ball, a research fellow at the Mercatus Center at George Mason University.

If AI capabilities develop as SB 1047 supporters expect, the need for regulation will become clearer, he added.

But some supporters are concerned there will not be enough time to act later.

"Progress in AI capabilities is rapid these days," said Daniel Kokotajlo, an OpenAI whistleblower who shared evidence that the company forced outgoing employees to sign lifetime non-disparagement agreements to retain their vested equity.

"Corporations are incentivized to race each other and to say — and believe — that doing so is what's best for humanity."

On Thursday, OpenAI announced a new reasoning model that it said was capable of surpassing human experts on a number of technical benchmarks for the first time.

The new model received the company's first "medium" rating for bioweapon risk, indicating it is the first model able to meaningfully assist experts in producing bioweapons beyond what internet access alone allows. — Reuters