Why California’s AI bill could hurt more than it helps

    California’s proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act attempts to improve safety by requiring developers to certify that their AI models are not dangerous. In truth, the law would slow down critical AI advancements in health care, education, and other fields by discouraging innovation and reducing competition.

Over the past few years, AI has revolutionized diagnostics with algorithms that are increasingly capable of detecting diseases like cancer and heart conditions with unprecedented accuracy. AI-driven tools have streamlined the drug discovery process, reducing the time and cost of bringing new treatments to market. In education, AI-powered platforms have further personalized learning experiences, adapting to individual students’ needs and improving engagement and outcomes.

    Freedom to develop has allowed for rapid experimentation and implementation of AI technologies, leading to remarkable advancements benefiting society. However, many people are concerned about the long-term impacts AI could have.

California Senate Bill 1047, introduced by Sen. Scott Wiener, D-San Francisco, aims to prohibit worst-case harmful uses of AI, like creating or deploying weapons of mass destruction or using AI to launch cyberattacks on critical infrastructure that cause hundreds of millions of dollars in damage.

To prevent these doomsday scenarios, the bill would require developers to provide a newly created government agency with an annual certification affirming that their AI models do not pose a danger. This certification would be required even before training of an AI model begins, yet it is difficult to accurately predict all of a model’s potential risks at such an early stage. Moreover, the responsibility for causing harm should rest with the actor who committed the wrongdoing, not the developer of the model. Holding developers responsible for all possible outcomes discourages innovation and unfairly burdens those who may have no control over how their models are used. This extensive compliance is also costly, especially for small startups without legal teams. Developers of AI models are instead likely to leave California for friendlier jurisdictions to conduct their training activities and other operations.

Violations of the law could lead to penalties of up to 30% of the cost of creating an AI model. For small businesses, this could mean devastating financial losses. The bill also exposes developers to criminal liability under perjury laws if they certify, in bad faith, that their AI model is safe. That may sound straightforward, but the law’s ambiguous framework and unclear definitions leave developers at the mercy of how state regulators perceive any glitches in their AI models. In an industry where experimentation and iteration are crucial to progress, such severe penalties could stifle creativity and slow down advancements.

While the bill intends to target only large and powerful AI models, it uses vague language that could also ensnare smaller AI developers. The bill focuses on models that meet a high threshold of computing power, typically accessible only to major corporations with significant resources. However, it also applies to models with “similar capabilities,” broad phrasing that could extend the bill’s reach to almost all future AI models.

The bill would also require all covered AI models to include a “kill switch” that can shut them down to prevent imminent threats, and it would authorize the state to force developers to delete models that fail to meet state safety standards, potentially erasing years of research and investment. While the shutdown requirement might make sense in dangerous situations, it is not foolproof. For instance, forcing a shutdown switch on an AI system managing the electricity grid could create a vulnerability that hackers might exploit to cause widespread power outages. Thus, while mitigating certain risks, this solution simultaneously exposes critical infrastructure to new potential cyberattacks.

    While the goal of safe AI is crucial, onerous demands and the creation of government bureaucracies are not the solution. Instead, policymakers should work with AI experts to create environments conducive to its safe growth.

    Jen Sidorova is a policy analyst, and Nicole Shekhovstova is a technology policy intern at Reason Foundation.

Source: https://www.pressenterprise.com/2024/07/03/why-californias-ai-bill-could-hurt-more-than-it-helps/
