A controversial bill to regulate the Artificial Intelligence (AI) industry, SB-1047, has been passed by California’s State Assembly Appropriations Committee. It must still pass the California Senate by the end of this month before going to the Democratic governor, Gavin Newsom, for signature into law. The most contentious part of the debate is the question of who is legally responsible, and who takes the blame, if an AI system causes harm. Should the AI system be blamed, or the person who used it? That question runs through the political debate over SB-1047, and through the larger question of how to regulate the technology.
A version of this debate played out recently when X released the second generation of its AI model, Grok, which includes an image generation feature like OpenAI’s DALL-E. X is known for its allegedly lax approach to content moderation, and the latest version of Grok has faced similar criticism over how its model was trained.
The bill’s supporters say it will create controls to prevent rapidly advancing AI models from causing disastrous incidents, such as shutting down critical infrastructure. Their main concern is that the technology is developing faster than its human creators can control it.
California’s bill is particularly important because SB-1047 would set a precedent for state guidelines across the US, laying down the rules for developers working on generative AI.
The key points of the proposed legislation are:
- Create safety and security protocols for covered AI models.
- Ensure such models can be shut down completely.
- Prevent the distribution of models capable of what the act defines as “critical harm.”
- Retain an auditor to ensure compliance with the act.
These issues are not new. In the 1990s, Internet service providers like Prodigy and CompuServe faced lawsuits over potentially libelous material that their users had posted. Section 230 of the 1996 US Communications Decency Act responded by shielding intermediaries from civil liability for third-party content. The intention was to protect freedom of expression online and to specify that technology companies, in most cases, cannot be held legally liable for what their users post.
Technology companies would be pleased to see a kind of Section 230 for AI, shielding them from liability for what their users do with their AI tools. However, the California bill takes the opposite approach, placing responsibility on the technology companies to assure the government that their products will not be used to cause harm.
SB-1047 does have some widely accepted provisions, such as adding legal protections for whistleblowers at AI companies and studying the feasibility of building a public AI cloud that startups and researchers could use. More controversially, it requires makers of large AI models to notify the government when they train a model that exceeds a certain computing threshold and costs more than $100 million to train.
See: https://redskyalliance.org/xindustry/why-generative-ai-requires-a-new-security-testing-strategy
It allows the California attorney general to seek an injunction against companies that release models the AG considers unsafe. It also requires that large models have a “kill switch” allowing developers to shut them down in case of danger.
This article is shared at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://register.gotowebinar.com/register/5378972949933166424