On Sunday (September 29), California Governor Gavin Newsom made headlines by vetoing a major artificial intelligence (AI) safety bill.
The bill would have created the country's first set of rules for large AI models. His decision has sparked a debate about how to keep people safe while still allowing the technology to grow.
California Aims to Enhance AI Safety, Public Protection
The proposed law was meant to protect the public from possible dangers posed by powerful AI models. Supporters believed it could prevent harmful uses of AI, such as disrupting the state's power grid or helping to make dangerous chemicals.
The bill would have required companies to test their AI models and share their safety practices. It also included protections so workers could speak up about problems without fear of losing their jobs. The tech industry, however, worried that the rules could push AI companies out of California and slow innovation.
Democratic state senator Scott Wiener, who authored the bill, criticized Newsom's decision. According to CNN, he said the veto was "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet."
He also pointed out that while the tech industry has promised to monitor AI risks, "voluntary commitments are often ineffective for public safety." Wiener vowed to keep pushing for AI safety rules in future legislative sessions.
The proposed law was part of California's broader effort to regulate AI technologies and reduce risks such as deepfakes and job displacement. Lawmakers stressed the need to act quickly, pointing to past failures to regulate social media companies. The debate over AI safety carries particular weight in California, which is home to 32 of the world's top 50 AI companies.
Supporters and Critics Speak Out
Notable supporters of the bill included Elon Musk, who argued it would make AI developers more transparent and accountable. Critics, including former House Speaker Nancy Pelosi, countered that the law would hurt California's tech industry by driving away innovation and investment, AP News reported.
Even after this setback, experts and advocates say it is vital to keep the conversation about AI safety going. Daniel Kokotajlo, a former researcher at OpenAI, has voiced concern about the growing power of AI companies operating without outside checks.
After the veto, Newsom said he plans to work with industry leaders and AI experts, including AI pioneer Fei-Fei Li, to develop "workable guardrails" for AI technologies. He said these guidelines would focus on science-based analysis of risks, especially the potential for major incidents involving AI.
California has already begun studying the risks AI can pose to critical services. Newsom's team has consulted with energy providers and plans to expand that evaluation to other sectors, such as water and communications.
The debate over AI safety rules in California highlights the difficulty lawmakers face in balancing new technology with public protection. As AI continues to advance rapidly, lawmakers and industry leaders will need to work together on rules that keep people safe without stalling progress.