California's vetoed AI safety bill prompts debates on regulation

XINHUA
By Wen Tsui, Ceng Hui, Xu Jianmei, Wu Xiaoling
A woman experiences AI 3D technology at the exhibition area of French company Dassault Systèmes during the 2023 Consumer Electronics Show (CES) in Las Vegas, the United States, Jan. 8, 2023. (Photo by Zeng Hui/Xinhua)

by Wen Tsui

SACRAMENTO, the United States, Oct. 1 (Xinhua) -- The recent veto of an artificial intelligence (AI) safety bill by Gavin Newsom, governor of the U.S. state of California, has ignited a nationwide debate over how to effectively govern the rapidly evolving technology while balancing innovation and safety.

On Sunday, Newsom vetoed SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, saying the bill is not "the best approach to protecting the public from real threats posed by the technology."

In his veto message, the governor said the bill "magnified" the potential threats and risked "curtailing" innovation that drives technological development.

The vetoed bill, introduced by California State Senator Scott Wiener, had passed the California legislature with overwhelming support. It was intended to be one of the first in the country to set mandatory safety protocols for AI developers.

If signed into law, it would have placed liability on developers for severe harm caused by their models. Designed to prevent "catastrophic" harms from AI, the bill would have applied to all large-scale models costing at least 100 million U.S. dollars to train, regardless of the potential damage.

Before training began, the bill would have required AI developers to publicly disclose their methods for testing the likelihood of a model causing critical harm, as well as the conditions under which the model would be fully shut down.

Violations would have been enforceable by the California Attorney General, with civil penalties of up to 10 percent of the cost of the computing power used to train the model for a first violation, and up to 30 percent for any subsequent violation.

This photo taken on April 20, 2024 shows a competition robot designed by two 11th grade students at the AI Robotics Academy, an after-school club, in Plano, Texas, the United States. (Photo by Lin Li/Xinhua)

According to an analysis by Pillsbury Winthrop Shaw Pittman LLP, a law firm specializing in technology, the bill could have a "significant" impact on large AI developers, entailing "significant testing, governance, and reporting hurdles" for those companies.

The bill's broad scope has sparked a debate over who should be regulated: the developers of AI models or those who deploy them.

Some in the tech industry urge lawmakers to focus on the contexts and use cases of AI rather than the technology itself.

Lav Varshney, associate professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, told technology website VentureBeat that the vetoed bill would have unfairly penalized original developers for the actions of those using the technology.

He advocated for "a shared responsibility" among original developers and those who fine-tune AI for specific applications.

Many experts raised concerns about the bill's potential "chilling effect" on open-source AI, a collaborative approach to AI development that allows developers to access, modify, and share AI technologies.

Andrew Ng, co-founder of Coursera, a U.S. online course provider, praised Newsom's veto as "pro-innovation" in a social media post, saying it would protect open-source development.

In response to the veto, Anja Manuel, executive director of the Aspen Strategy Group, said in a statement that she advocated for "limited pre-deployment testing, focused only on the largest models."

She pointed to a lack of "mandatory, independent and rigorous" testing to prevent AI from doing harm, which she called a "glaring gap" in the current approach to AI safety.

Drawing parallels to the Food and Drug Administration's regulations in the pharmaceutical industry, Manuel argued that AI, like drugs, should only be released to the public following thorough testing for safety and efficacy.

Following the veto, Governor Newsom outlined alternative measures for AI regulation, calling for a more focused regulatory framework that addresses specific risks and applications of AI rather than broad rules that could affect even low-risk AI functions.

The outcome of California's AI regulation efforts is expected to have far-reaching implications, given its leading position in tech-related legislation, such as data privacy.

"What happens in California doesn't just stay in California; it often sets the stage for nationwide standards," said the Pillsbury analysis.

An expert of artificial intelligence (AI) delivers a speech at the San Francisco AI Summit in San Francisco, the United States, Sept. 25, 2019. (Xinhua/Wu Xiaoling)

Whether they develop or deploy AI systems, Pillsbury advised companies to adopt a comprehensive compliance strategy and take a proactive approach, given the fast-evolving regulatory landscape around the world.

"Safe and responsible AI is essential for California's vibrant innovation ecosystem," said Fei-Fei Li, professor in the Computer Science Department at Stanford University and co-director of Stanford's Human-Centered AI Institute. "To effectively govern this powerful technology, we need to depend upon scientific evidence to determine how to best foster innovation and mitigate risk." ■
