California lawmakers have introduced a bill to regulate the development of large-scale AI models. The bill would mandate safety protocols, oversight, and compliance measures to prevent risks such as weapons development and damage to critical infrastructure.
The bill targets major AI developers such as OpenAI, Anthropic, and Cohere. It mandates safety testing, risk assessments, a 'panic button' to shut down dangerous models, disclosure of compliance efforts, and penalties for non-compliance.
The bill is supported by AI pioneers such as Geoffrey Hinton and Yoshua Bengio, who have warned of AI's existential risks. However, it has raised concerns in the tech community: some employees and founders are considering leaving California for fear that the law will hinder business adoption, innovation, and open-source projects.
Large AI organizations and tech companies are criticizing the bill, arguing that it could drive innovation out of California, create excessive liability, burden small AI companies and open-source developers, slow AI adoption, and raise costs for customers.
The bill has passed the state Senate and awaits a vote in the State Assembly in August. Gov. Gavin Newsom's office has declined to comment on the pending legislation, though Newsom has previously warned against over-regulating AI.
State Senator Scott Wiener, the bill's author, has revised it to exempt open-source developers from liability for harmful uses of their platforms. The bill also provides for a new government agency to oversee AI developers and set guidelines for future high-performance AI models.
A major Silicon Valley venture capitalist told the Financial Times on Friday that he had received complaints from tech company founders who are considering leaving California in response to the proposed legislation.
"My advice to anyone asking about this is to stay and fight," he said. "But this will chill the open source and startup ecosystem. I think some founders will decide to leave the country.
Tech companies' biggest objection to the proposal is that it would stifle innovation by discouraging software engineers from taking risks with their products for fear of hypothetical scenarios that may never materialize.
Andrew Ng, an AI expert who has led projects at Google and Chinese company Baidu, told the FT: "If someone wanted to propose a regulation to stifle innovation, they could hardly do better."
Arun Rao, head of generative AI products at Meta, wrote on X last week that the bill was "unworkable" and would "kill open source."
The bill's outcome could also shape public trust in AI systems. Industry voices stress that regulation must be precisely targeted to preserve innovation without raising costs for customers.
"The net fiscal impact of destroying the AI industry and driving businesses away could be in the billions, as both businesses and the highest-paid workers would leave," Rao wrote.
California's approach could influence other states, underscoring the need for coordination as a patchwork of state AI laws takes shape while federal legislation remains pending.