AI bill opens up bitter rift in Silicon Valley

Elon Musk has had enough of California. In the next few weeks he will close the San Francisco headquarters of X, his social media company, and move it to Texas after the state passed a law barring schools from requiring staff to tell parents if their children decide to go by a different gender.

“This is the final straw,” he tweeted. He is also redomiciling SpaceX out of California. Before he moved Tesla’s home base to Texas in 2021 in a fit of outrage at Covid lockdowns, he wrote: “If a team has been winning for too long, they do tend to get a little complacent, a little entitled and then they don’t win the championship anymore. California has been winning for too long.”

All of which makes his decision last week to back the state’s first-of-its-kind artificial intelligence (AI) safety bill so surprising. “All things considered, I think California should probably pass the AI safety bill,” he wrote. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.”

The proposed law, which Gavin Newsom, the governor, has until September 30 to sign, has led to bitter division in Silicon Valley. It would require developers to build a “kill switch” into their systems, establish an AI regulator, and oblige companies to pay for regular independent audits of their models and to monitor how they are used by third parties. Companies would also face steep fines if an AI, either through carelessness or malice, creates “novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical and nuclear weapons as well as weapons with cyber-offensive capabilities”.

Opponents including Sam Altman, the billionaire chief of OpenAI, claim it would stifle the progress of a technology poised to change the world. Yann LeCun, Meta’s AI chief, said it was based on “completely hypothetical science-fiction scenarios” and warned that it would “put an end to innovation”.

The outcry feels both over the top and a bit amusing. Wind the clock back a year and many in the industry, Altman chief among them, were begging to be regulated. Altman went to Capitol Hill to ask for a law that would require government permission before a company released a powerful new model; by then OpenAI, with the launch of ChatGPT in November 2022, had already stolen a march on the industry.

Bill Gurley, the prominent tech investor, said the pleas for regulation smacked of a company seeking to pull up the ladder behind it. “Many times, the incumbent that sought to be regulated had such a hand in the creation of the regulation that they tilted the scales in favour of themselves,” he said last year. “The level of [lobbying] effort is unprecedented.”


Scott Wiener, the Democrat who represents San Francisco and who authored the bill, said the “light-touch” measure simply “codifies commitments that the largest AI companies have already voluntarily made”.

He watered down key aspects after talking to industry, including removing the ability of the state attorney-general to sue companies for negligence before anything has gone wrong. The bill’s requirements apply to models that cost at least $100 million to train, a threshold that captures the most sophisticated models but leaves the majority of start-ups to operate unencumbered. Some, however, will be swept up by a provision covering companies that do not develop their own model but spend at least $10 million fine-tuning another company’s system.

The kerfuffle is perhaps not surprising. AI represents a bonanza on a level not seen since the dawn of the internet. OpenAI, which was founded as a non-profit and only launched its for-profit arm in 2019, is reported to be raising a new round of funding that would value it at an astonishing $100 billion. Musk raised $6 billion in May to build a supercomputing cluster to power Grok, the chatbot he has incorporated into X. He has called AI “the most disruptive force in history”.

The fight for supremacy is fierce: a recruiting war has pushed pay for rank-and-file AI engineers to seven figures, and virtually every month the race produces a novel feature or service.

The California law would be the first of its kind in America and could set a standard that others follow. Rishi Sunak, as prime minister, convened the AI Safety Summit at Bletchley Park last year. The upshot was a “landmark” agreement for the top developers to subject their models to testing by a newly established AI Safety Institute. The accord, however, was entirely voluntary, and most companies have ignored the pledge.

And yet the regulatory landscape is evolving with astounding speed. Even as OpenAI registered its opposition to California’s move, it and its San Francisco rival Anthropic agreed last week to share their most advanced models with the US AI Safety Institute, a government body set up this year on the orders of President Biden.


Geoff Hinton, the British-Canadian computer scientist who is seen as the “godfather of AI” and who quit Google last year so he could speak more freely on the dangers of the technology, supports the California measure.

He said: “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one — including myself — would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously.”
