The California Artificial Intelligence Transparency Act (CAITA) protects consumers by giving them the ability to determine whether content has been generated by AI
Sacramento, CA — Today Governor Gavin Newsom signed into law SB 942, the California Artificial Intelligence Transparency Act (CAITA), authored by Senator Josh Becker (D-Menlo Park).
SB 942 is a significant advancement over AI transparency bills passed in the US to date because it requires large generative AI companies to label AI-generated content and provides consumers with an AI detection tool to assess content’s origin when a label isn’t readily available.
“Going forward, it’s crucial that individuals know whether or not content was created by AI,” said Becker. “By signing this bill, Governor Newsom is providing Californians with essential tools to navigate the evolving digital landscape and solidifying our position as a leader in enacting sensible AI regulations that protect consumers without stifling innovation.”
“CAITA’s requirement that large GenAI providers offer Application Programming Interfaces (APIs) to enable AI detection will create an ecosystem of tools and technology that will empower consumers and provide needed GenAI transparency,” said Tom Kemp, a tech policy expert and author of Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy.
“CITED commends Governor Newsom and Senator Becker for their first-in-the-nation leadership in taking this critically important step towards improving trust in our increasingly deceptive and incendiary Internet ecosystem. SB 942 provides important new tools that will give Internet users a fighting chance to determine if an AI-generated video or other post is fake, as California leads the nation in efforts to safeguard democracy in the age of AI,” said Drew Liebert, the director of the California Initiative for Technology and Democracy (CITED), a project of Common Cause California.
SB 942 has three major components. First, starting in 2026, SB 942 will require large Generative Artificial Intelligence (GenAI) providers (those with over 1 million monthly users) to give users the option to place a “manifest disclosure” (i.e., visible and easily perceived) on any image, video, or audio content the user generates with the provider’s GenAI system.
Second, SB 942 will require (not merely as an option) that large GenAI providers label the same types of GenAI-generated content with a “latent disclosure” (i.e., imperceptible to the human eye).
Finally, SB 942 will require large GenAI providers to make available, at no cost, an AI detection tool that reads the manifest and/or latent disclosures on their generated content, enabling users to determine whether that content was created or altered by the provider’s GenAI system.
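For developers wondering what such tooling might look like in practice, the sketch below illustrates one possible approach, under the assumption that a latent disclosure can be represented as a signed, machine-readable manifest bound to the generated content. SB 942 does not prescribe any particular format, library, or cryptographic scheme; the names used here (embed_latent_disclosure, detect_disclosure, PROVIDER_KEY, “ExampleGenAI”) are hypothetical and do not come from the bill or any real provider’s API.

    # Illustrative sketch only: SB 942 does not mandate this design.
    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone

    PROVIDER_KEY = b"example-signing-key"  # hypothetical provider-held secret

    def embed_latent_disclosure(content: bytes, provider: str) -> dict:
        """Produce a machine-readable disclosure bound to the generated content."""
        manifest = {
            "provider": provider,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def detect_disclosure(content: bytes, manifest: dict) -> bool:
        """Check that the disclosure matches the content and was issued by the provider."""
        claimed = manifest.get("signature", "")
        unsigned = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(claimed, expected)
                and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest())

    if __name__ == "__main__":
        generated = b"...bytes of a GenAI-generated image..."
        disclosure = embed_latent_disclosure(generated, provider="ExampleGenAI")
        print(detect_disclosure(generated, disclosure))         # True: disclosure matches
        print(detect_disclosure(generated + b"!", disclosure))  # False: content was altered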
SB 942 has received strong support from prominent advocacy and privacy groups, including the Protection of the Educational Rights of Kids - Advocacy, Oakland Privacy, CITED, and Transparency Coalition.AI, and has no opposition.
SB 942 goes into effect on January 1, 2025.