On Tuesday, California governor Gavin Newsom signed five new laws aimed at regulating the artificial intelligence sector, positioning them as some of the strictest in the United States. Three of the laws target deepfakes in elections, while the other two protect performers by prohibiting the creation of digital replicas of their likenesses or voices without their consent.
According to the governor's office, "California, home to most of the leading AI companies, is working to harness these transformative technologies and address the challenges they pose, while studying the risks they present."
Restrictions on election deepfakes
One of the new laws, AB 2655, requires large online platforms such as Facebook and X to label or remove election-related deepfakes. In addition, candidates and elected officials will be able to seek legal action if the platforms fail to comply. Another law, AB 2355, requires political advertisements to clearly disclose when they are AI-generated.
These measures seek to prevent the use of false or manipulated content to influence election results, a practice that has already generated controversy nationwide. The Federal Communications Commission (FCC) has proposed similar rules at the federal level, such as banning robocalls that use AI-generated voices.
The other two laws, AB 2602 and AB 1836, respond to pressure from the actors' union SAG-AFTRA. They require studios to obtain an actor's consent before creating a digital replica of their voice or image, and prohibit the creation of digital replicas of deceased performers without the consent of their estates.
With these regulations, California seeks to curb the risks posed by the use of AI in key sectors such as entertainment and politics, while it continues to debate other AI-related legislative proposals.