Of the two pieces of legislation, California’s bill is the more squarely aimed at open-source AI, and the stringent regulations and requirements it imposes on “frontier models” could severely hinder open-source AI development and innovation. The legislation creates a regulatory environment that could significantly discourage the collaborative, distributed development processes underpinning open-source projects.
Section 22603 of the bill requires developers of covered models to implement safety and security protocols, conduct impact assessments, and provide detailed documentation. Notably, developers must implement a protocol that “provides reasonable assurance” their models will not pose an “unreasonable risk of causing or enabling a critical harm.” This particular requirement is fundamentally flawed.
As Princeton computer scientist Arvind Narayanan argues, AI safety is “not a model property”; it depends on deployment context and broader socio-technical systems. The bill’s approach to safety could unreasonably burden developers, especially in the open-source community, by disincentivizing them from widely distributing their models. Developers could end up restricting access to their models, or abandoning open distribution altogether, for fear of being held responsible for their models’ potential misuse or unintended consequences in applications they never anticipated or approved. This chilling effect on model sharing could significantly undermine the collaborative nature of open-source AI development, where iteration and improvement often rely on broad access to existing models.
These disincentives for model sharing and distribution could lead to a more closed, proprietary AI development ecosystem, further constraining competition and limiting the benefits of open-source collaboration.
Meanwhile, Colorado’s law, while not directly addressing open-source AI, provides exemptions for smaller deployers and certain research activities. This could indirectly benefit open-source projects by reducing regulatory burdens on smaller entities and academic researchers, potentially supporting a more open environment for model sharing and collaborative development.
These contrasting approaches to open-source AI highlight a crucial challenge in AI regulation: balancing safety and accountability with the need for open innovation. California’s approach, while aiming to address potential risks, could inadvertently concentrate AI development in the handful of institutions and companies with the resources to navigate complex regulatory requirements. Colorado’s more flexible approach might better preserve the diverse, collaborative ecosystem that has driven much of AI’s—and the internet’s—rapid progress.
Source link : https://www.newamerica.org/oti/blog/what-us-policymakers-should-know-about-californias-and-colorados-ai-legislation/
Publish date : 2024-08-26 05:42:00
Copyright for syndicated content belongs to the linked Source.