Don’t Regulate AI Models. Regulate AI Use

A Strategic Shift for U.S. Policy Amidst Global AI Governance

United States - Ekhbary News Agency

The global landscape of Artificial Intelligence (AI) regulation is rapidly evolving, with nations pursuing distinct strategies to govern this transformative technology. From China's provider-centric rules to Europe's comprehensive AI Act and India's emerging governance framework, the international community is grappling with how best to harness AI's potential while mitigating its risks. Amidst this complex geopolitical and technological race, the United States finds itself at a crossroads, with individual states enacting their own rules even as federal efforts aim to create a more unified approach. A key question emerges for American engineers and policymakers: what regulatory framework can the U.S. effectively implement to reduce tangible, real-world harm?

John deVadoss, a seasoned technologist and co-founder of NeuralFabric (acquired by Cisco), argues for a paradigm shift in AI governance. Instead of attempting to regulate the intricate and often elusive underlying AI models, deVadoss proposes focusing regulatory efforts on the *use* of AI systems. This approach, he contends, is not only more practical but also better aligned with the realities of digital technology and with existing legal frameworks, particularly in the United States.

Current global regulatory approaches diverge significantly. China's initial AI regulations, issued in 2021, placed a strong emphasis on content governance and provider accountability, enforced through platform oversight and record-keeping requirements. Europe's landmark EU AI Act (effective 2024) is already the subject of proposed updates and simplifications, underscoring how quickly AI governance shifts. India released its AI governance framework, developed by senior technical advisors, in November 2025. The U.S., meanwhile, presents a more fragmented picture: state-level legislation is creating a patchwork of rules, while federal initiatives in 2025 have sought to centralize control and potentially loosen existing reins, generating significant debate and uncertainty.

DeVadoss critiques proposals like licensing "frontier" training runs or restricting open-weight models, citing California's Transparency in Frontier Artificial Intelligence Act as an example. He argues these measures, while well-intentioned, offer the illusion of control rather than effective governance. The fundamental challenge lies in the nature of AI models themselves. Model weights and code are essentially digital artifacts. Once released into the digital ether—whether through legitimate research, accidental leaks, or actions by foreign competitors—they can be replicated at virtually zero cost. Attempts to control or "unpublish" these artifacts are akin to trying to catch smoke. Efforts to geofence research or prevent the distillation of large models into smaller, more accessible ones are technically difficult, if not impossible, to enforce globally.

The consequences of attempting to regulate models directly are twofold and detrimental. First, compliant companies, particularly startups and smaller enterprises, risk being buried under burdensome paperwork and complex compliance procedures, potentially stifling innovation and economic growth. Second, less scrupulous actors can easily circumvent these regulations by operating offshore, utilizing underground networks, or exploiting loopholes. This creates an uneven playing field where ethical actors are penalized, and unethical ones gain an advantage.

Furthermore, in the U.S. context, regulating the publication of AI models faces significant legal hurdles related to free speech. Federal courts have historically treated software source code as a form of protected expression. Consequently, any regulatory regime that seeks to prevent or heavily restrict the publication and dissemination of AI models could face substantial legal challenges, potentially being deemed unconstitutional.

The alternative to strict model regulation is not a laissez-faire approach. DeVadoss emphasizes that inaction is not an option. Without appropriate guardrails, society will continue to witness the proliferation of harmful AI applications, including sophisticated deepfake scams, automated financial fraud, and pervasive mass-persuasion campaigns. Such unchecked proliferation often leads to a crisis point, forcing a reactive, politically motivated response that prioritizes optics over effective outcomes.

A use-based regulatory regime offers a more pragmatic solution. This approach involves classifying AI deployments based on their potential risk level and scaling regulatory obligations accordingly. For instance, high-risk applications in critical sectors like healthcare, finance, or employment would be subject to more stringent oversight, testing, and accountability measures than lower-risk applications in areas like entertainment or general productivity. This framework ensures that regulatory efforts are focused precisely where AI systems interact with people and where the potential for harm is most significant. It allows for flexibility, adaptability, and a more targeted approach to enforcement, making it more effective in mitigating real-world risks without unduly hindering technological progress.
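To make the tiering concrete, the minimal sketch below shows how a use-based regime might map an AI deployment's sector to a risk tier and a corresponding set of obligations. The tier names, sector lists, and obligations are illustrative assumptions for this article, not drawn from any enacted statute or from deVadoss's own proposal.

```python
# Illustrative sketch of use-based risk tiering; tiers, sectors, and
# obligations below are assumptions, not any official classification.
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"        # e.g., healthcare, finance, employment
    LIMITED = "limited"  # e.g., customer-facing chatbots
    MINIMAL = "minimal"  # e.g., entertainment, general productivity

HIGH_RISK_SECTORS = {"healthcare", "finance", "employment"}
LIMITED_RISK_SECTORS = {"customer_service", "education"}

# Obligations scale with the tier: strictest oversight where systems
# interact with people in consequential ways.
OBLIGATIONS = {
    RiskTier.HIGH: ["pre-deployment testing", "human oversight", "audit trail"],
    RiskTier.LIMITED: ["disclosure to users", "incident reporting"],
    RiskTier.MINIMAL: [],
}

def classify_deployment(sector: str) -> RiskTier:
    """Map a deployment's sector to a risk tier; unknown sectors default to minimal."""
    if sector in HIGH_RISK_SECTORS:
        return RiskTier.HIGH
    if sector in LIMITED_RISK_SECTORS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

if __name__ == "__main__":
    tier = classify_deployment("healthcare")
    print(tier.value, OBLIGATIONS[tier])
    # -> high ['pre-deployment testing', 'human oversight', 'audit trail']
```

The design point the sketch captures is that regulation attaches to the deployment context (the sector and the people affected), not to the model artifact itself, which is exactly the distinction deVadoss draws.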

John deVadoss brings a wealth of experience to the discourse on AI governance. He is the co-founder of NeuralFabric, a company specializing in domain-specific foundation models, which was acquired by Cisco. He also co-founded the InterWork Alliance, where he developed the groundbreaking Token Taxonomy Framework for Digital Assets. His extensive career includes leadership roles at Microsoft, where he pioneered Service-Oriented Architecture (SOA), architected the .NET framework, and led initiatives like Microsoft Digital. He holds a Ph.D. in AI from the University of Massachusetts, Amherst.

Keywords: AI regulation, artificial intelligence, AI models, AI use, AI governance, United States policy, John deVadoss, NeuralFabric, Cisco, Microsoft, technology law, free speech, risk-based regulation