Artificial intelligence has the power to revolutionize industries, drive economic growth, and improve our quality of life. But like any powerful, widely available technology, AI also poses significant risks.
California’s now-vetoed legislation, SB 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — sought to combat “catastrophic” risks from AI by regulating developers of AI models. While lawmakers should be commended for trying to get ahead of the potential dangers posed by AI, SB 1047 fundamentally missed the mark. It tackled hypothetical AI risks of the distant future instead of the actual AI risks of today, and focused on organizations that are easy to regulate instead of the malicious actors that actually inflict harm.
The result was a bill that would have done little to improve actual safety while threatening to stifle AI innovation and investment and diminish the United States’ leadership in AI. However, there can be no doubt that AI regulation is coming. Beyond the EU AI Act and China’s AI laws, 45 US states introduced AI bills in 2024. All enterprises looking to leverage AI and machine learning must prepare for additional regulation by boosting their AI governance capabilities as soon as possible.
Addressing unlikely risks at the cost of ignoring present dangers
There are many real ways in which AI can be used to inflict harm today. Deepfakes used for fraud, misinformation, and non-consensual pornography are already becoming common. However, SB 1047 seemed more concerned with hypothetical catastrophic risks from AI than with the very real and present threats that AI poses today. Most of the catastrophic risks envisioned by the bill are science fiction, such as the ability of AI models to develop new nuclear or biological weapons. It is unclear how today’s AI models would cause these catastrophic events, and it is unlikely that these models will have any such capabilities for the foreseeable future, if ever.
SB 1047 was also focused on commercial developers of AI models rather than those who actively cause harm using AI. While there are basic ways in which AI developers can help ensure that their models are safe — e.g., guardrails against generating harmful speech or images or divulging sensitive data — they have little control over how downstream users apply their AI models. Developers of the giant, generic AI models targeted by the bill will always be limited in the steps they can take to de-risk those models for the potentially infinite number of use cases to which they can be applied. Making AI developers responsible for downstream risks is akin to making steel manufacturers responsible for the safety of the guns or cars made with their steel. In both cases you can only effectively ensure safety and mitigate risk by regulating the downstream use cases, which this bill did not do.
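To make the notion of "guardrails" concrete, here is a minimal sketch of one simple form: a post-generation filter that blocks model output containing denylisted terms or patterns that resemble sensitive personal data. The pattern list, denylist, and function name are illustrative assumptions, not a mechanism prescribed by SB 1047 or used by any particular vendor; production guardrails typically rely on classifiers and policy models rather than regexes.

```python
import re

# Illustrative patterns only; real guardrail systems combine classifiers,
# policy models, and human review rather than simple regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US Social Security number format
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),  # rough credit-card-like digit runs
]
BLOCKED_TERMS = {"build a bomb", "synthesize nerve agent"}  # toy denylist


def apply_output_guardrail(model_output: str) -> str:
    """Return the model output, or a refusal if it trips a simple safety check."""
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[blocked: request violates content policy]"
    if any(pattern.search(model_output) for pattern in SENSITIVE_PATTERNS):
        return "[blocked: output appears to contain sensitive personal data]"
    return model_output
```

Even a sketch like this illustrates the core limitation: the filter can only police what the model emits, not what a downstream user ultimately does with it.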
Further, the reality is that today’s AI risks, and those of the foreseeable future, stem from those who intentionally exploit AI for illegal activities. These actors operate outside the law and are unlikely to comply with any regulatory framework, but they are also unlikely to use the commercial AI models created by the developers that SB 1047 intended to regulate. Why use a commercial AI model — where you and your activities are tracked — when you can use widely available open source AI models instead?
A fragmented patchwork of ineffective AI regulation
Proposed laws such as SB 1047 also contribute to a growing problem: the patchwork of inconsistent AI regulations across states and municipalities. Forty-five states introduced, and 31 enacted, some form of AI regulation in 2024 (source). This fractured regulatory landscape creates an environment where navigating compliance becomes a costly challenge, particularly for AI startups that lack the resources to meet a myriad of conflicting state requirements.
More dangerous still, the evolving patchwork of regulations threatens to undermine the safety it seeks to promote. Malicious actors will exploit the uncertainty and differences in regulations across states, and will evade the jurisdiction of state and municipal regulators.
More generally, the fragmented regulatory environment makes companies more hesitant to deploy AI technologies, as they worry about the uncertainty of complying with a widening array of regulations. It delays the adoption of AI by organizations, leading to a spiral of lower impact and less innovation, and potentially drives AI development and investment elsewhere. Poorly crafted AI regulation could squander US leadership in AI and curtail a technology that is currently our best shot at improving growth and our quality of life.
A better approach: Unified, adaptive federal regulation
A far better solution to managing AI risks would be a unified federal regulatory approach that is adaptable, practical, and focused on real-world threats. Such a framework would provide consistency, reduce compliance costs, and establish safeguards that evolve alongside AI technologies. The federal government is uniquely positioned to create a comprehensive regulatory environment that supports innovation while protecting society from the genuine risks posed by AI.
A federal approach would ensure consistent standards across the country, reducing compliance burdens and allowing AI developers to focus on real safety measures rather than navigating a patchwork of conflicting state regulations. Crucially, this approach must be dynamic, evolving alongside AI technologies and informed by the real-world risks that emerge. Federal agencies are the best mechanism available today to ensure that regulation adapts as the technology, and its risks, evolve.
Building resilience: What organizations can do now
Regardless of how AI regulation evolves, there is much that organizations can do now to reduce the risk of misuse and prepare for future compliance. Advanced data science teams in heavily regulated industries — such as finance, insurance, and healthcare — offer a template for how to govern AI effectively. These teams have developed robust processes for managing risk, ensuring compliance, and maximizing the impact of AI technologies.
Key practices include controlling access to data, infrastructure, code, and models; testing and validating AI models throughout their life cycle; and ensuring auditability and reproducibility of AI outcomes. These measures provide transparency and accountability, making it easier for companies to demonstrate compliance with any future regulations. Moreover, organizations that invest in these capabilities are not just protecting themselves from regulatory risk; they are positioning themselves as leaders in AI adoption and impact.
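As a minimal illustration of what auditability and reproducibility can look like in practice, the Python sketch below appends one record per training run (dataset hash, code commit, parameters, and metrics) to a simple audit log. The file name and helper functions are hypothetical and not drawn from any specific governance tool; they simply show the kind of lineage information that makes AI outcomes traceable later.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("model_audit_log.jsonl")  # hypothetical append-only audit log


def file_sha256(path: str) -> str:
    """Hash the training data so the exact dataset version is traceable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def current_git_commit() -> str:
    """Record the code version used for training (falls back if git is unavailable)."""
    try:
        return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    except Exception:
        return "unknown"


def log_model_run(model_name: str, data_path: str, params: dict, metrics: dict) -> dict:
    """Append one auditable record per training run."""
    record = {
        "model": model_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sha256": file_sha256(data_path),
        "code_commit": current_git_commit(),
        "params": params,
        "metrics": metrics,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record


# Example usage with made-up values:
# log_model_run("credit_risk_v3", "data/loans.csv",
#               params={"max_depth": 6}, metrics={"auc": 0.87})
```

A lightweight log like this, kept alongside access controls and validation checks, is the raw material an auditor or regulator would ask for: who trained what, on which data, with which code, and with what results.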
The danger of good intentions
While the intention behind SB 1047 was laudable, its approach was flawed. It targeted the organizations that are easy to regulate rather than where the actual risk lies. By focusing on unlikely future threats rather than today’s real risks, placing undue burdens on developers, and contributing to a fragmented regulatory landscape, SB 1047 threatened to undermine the very goals it sought to achieve. Effective AI regulation must be targeted, adaptable, and consistent, addressing actual risks without stifling innovation.
There is a lot that organizations can do to reduce their risks and comply with future regulation, but inconsistent, poorly crafted regulation will hinder innovation and can even increase risk. The EU AI Act serves as a stark cautionary tale. Its sweeping scope, astronomical fines, and vague definitions do far more to threaten the future prosperity of EU citizens than to limit the actors intent on causing harm with AI. The scariest thing in AI is, increasingly, AI regulation itself.
Kjell Carlsson is the head of AI strategy at Domino Data Lab, where he advises organizations on scaling impact with AI. Previously, he covered AI as a principal analyst at Forrester Research, where he advised leaders on topics ranging from computer vision, MLOps, AutoML, and conversation intelligence to next-generation AI technologies. Carlsson is also the host of the Data Science Leaders podcast. He received his Ph.D. from Harvard University.