Modifications and Suggested Approaches
Anthropic's endorsement comes just a month after the company proposed a series of amendments to SB 1047, a bill introduced by Senator Scott Wiener in February. In a letter sent to state leaders in July, Anthropic suggested placing greater emphasis on deterring the creation of unsafe AI models rather than imposing strict regulations before any major incident has occurred. It also argued that companies should have the flexibility to set their own safety testing standards rather than strictly adhering to state-imposed rules.
The bill was modified on August 19 to include several important changes. The scope of civil penalties, for instance, was narrowed in cases where a violation causes no harm or imminent risk. The bill's language was also adjusted: instead of requiring a "reasonable assurance" against potential harm, it now requires companies to demonstrate "reasonable care." According to Nathan Calvin, senior policy advisor at the Center for AI Safety Action Fund, this change clarifies the bill's focus on testing and risk mitigation and aligns it with the most common standard in civil liability.
Another significant modification scaled back the new government body tasked with enforcing AI regulations: the planned "Frontier Models Division," originally conceived as a standalone agency, was recast as the "Frontier Models Board" and placed within the existing Government Operations Agency. The same amendment increased the board's membership from five to nine and expanded reporting requirements for companies, which must now submit safety reports to the state attorney general.
Impact on the AI Industry
Dario Amodei stated that the updated bill "seems to be halfway between our suggested version and the original bill." In his view, the bill's safety and security protocols, along with its harm-mitigation measures and its push for companies to take the risks of their technologies seriously, will meaningfully strengthen the industry's ability to counter threats.
While Anthropic partially supports the bill, other tech companies have pushed back harder. OpenAI sent a letter this week expressing its opposition, as did Meta, which warned that the new regulations could drive AI companies out of California and discourage the development of open-source AI.
This debate reflects the ongoing tension between innovation and safety in the artificial intelligence industry, as companies and lawmakers strive to find a balance that allows technological advancement without compromising public safety.