Will AI Face the Same Governance Crisis as IoT? Lessons We Can’t Afford to Ignore

By Jay Patel, Founder of BySecIoT


While researching AI security for one of our clients recently, I noticed a troubling pattern—one that every cybersecurity professional should pay attention to. As I analyzed the current state of AI governance and security frameworks, I couldn’t shake an uncomfortable feeling of déjà vu.

We’ve been here before. And we didn’t handle it well the first time.

The IoT Governance Crisis: A Cautionary Tale

Let me take you back to the early 2000s. The Internet of Things was the next big revolution. Smart devices were going to transform how we lived and worked. Companies rushed to market with connected thermostats, cameras, refrigerators, and sensors. Innovation moved fast. Security and governance? Not so much.

Twenty years later, we’re still paying the price.

Despite decades of IoT deployment, we still lack unified security frameworks and comprehensive governance standards. The ecosystem remains dangerously fragmented. Different manufacturers follow different protocols. Security is often an afterthought, bolted on after vulnerabilities are exploited. Default passwords remain unchanged. Firmware updates are ignored. Critical devices operate with known vulnerabilities for years.

The consequences aren’t abstract. Businesses face breaches costing tens of thousands of dollars. Critical infrastructure has been compromised. Privacy violations are routine. Botnets of compromised IoT devices launch massive cyberattacks.

And here’s the uncomfortable truth: We saw this coming, and we still couldn’t prevent it.

Now, as I watch the rapid deployment of AI systems across every sector imaginable, I have to ask: Are we making the same mistake again?

AI: Innovation at Breakneck Speed, Governance Playing Catch-Up

Artificial intelligence is being integrated into our most critical systems at a pace that makes early IoT adoption look cautious by comparison. AI now powers:

  • Healthcare diagnostics and treatment recommendations
  • Financial trading algorithms managing billions
  • Autonomous vehicles navigating public roads
  • Criminal justice risk assessments
  • Hiring and employment decisions
  • Content moderation at global scale
  • Critical infrastructure management
  • Military and defense systems

The technology is powerful. The deployment is rapid. But the governance frameworks? They’re struggling to keep up.

Cybersecurity researchers are racing to develop AI security solutions, identify vulnerabilities, and establish best practices. But they’re doing so while AI systems are already operating in production environments, making consequential decisions that affect real people’s lives.

This is precisely the pattern we saw with IoT—deploy first, secure later. And we know how that story ends.

The Vulnerability That Keeps Me Up at Night

Let me illustrate why this matters with a concrete example of the unique security challenges AI introduces.

Consider autonomous vehicles—not the distant future, but systems already being tested on public roads today. Imagine one whose machine learning models continuously learn from driver behavior to optimize performance and safety.

Sounds good, right? The AI observes your routes, driving patterns, and decision-making. It learns when you brake, how you navigate intersections, your response times. It adapts to make future trips smoother and safer.

But here’s the problem: What happens when it learns from unsafe behavior?

Imagine you’re having a terrible day. You’re stressed, running late, and you drive aggressively—taking risks you normally wouldn’t take. You cut someone off. You accelerate through a yellow light. You tailgate.

For a human, this is a one-time lapse in judgment. Tomorrow, you’ll drive normally again.

But for the AI? That reckless behavior becomes training data.

The model adapts. It learns that these aggressive patterns are acceptable—even optimal. And because AI models don’t distinguish between “good” and “bad” examples in the way humans do, your autonomous vehicle might now replicate those dangerous patterns in future trips.

This isn’t science fiction. This is data poisoning and model manipulation in action—a class of AI vulnerabilities that security researchers are actively working to address.
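
To make the mechanism concrete, here is a deliberately simplified sketch in Python (a toy linear model with invented numbers, nothing like a real vehicle stack) of how unvetted behavioral data can drag an online-learning model toward unsafe outputs, and how even a crude plausibility filter keeps the bad day out of the training set:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(speed, w, b):
    # Toy model: predicted safe following gap (metres) as a linear function of speed (m/s).
    return w * speed + b

def sgd_update(speed, observed_gap, w, b, lr=1e-4):
    # One online update nudging the model toward whatever gap the driver actually kept.
    err = predict(speed, w, b) - observed_gap
    return w - lr * err * speed, b - lr * err

def plausible(speed, gap):
    # Basic sanity filter: reject any sample with less than ~1 second of headway.
    return gap >= 1.0 * speed

# Normal driving: roughly 2 seconds of headway, with noise.
normal = [(s, 2.0 * s + rng.normal(0, 2.0)) for s in rng.uniform(10, 30, 500)]
# One bad day: sustained tailgating at ~0.4 seconds of headway (the "poison").
poisoned = [(s, 0.4 * s) for s in rng.uniform(10, 30, 200)]

for use_filter in (False, True):
    w, b = 1.0, 5.0
    for speed, gap in normal + poisoned:
        if use_filter and not plausible(speed, gap):
            continue  # unsafe behavior never becomes training data
        w, b = sgd_update(speed, gap, w, b)
    print(f"filter={use_filter}: predicted gap at 25 m/s = {predict(25.0, w, b):.1f} m")
```

In the unfiltered run, one aggressive session pulls the learned following distance down into tailgating territory; with the filter, it never reaches the model. Real defenses (robust training, data provenance, outlier detection) are far more sophisticated, but the principle is the same: treat training data as attack surface.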

And autonomous vehicles are just one example. Consider:

  • Healthcare AI learning from biased or incorrect diagnoses
  • Financial AI adapting to fraudulent trading patterns
  • Hiring AI reinforcing discriminatory decision-making
  • Security AI learning from attacker behavior and normalizing threats

Each of these scenarios represents a different dimension of AI vulnerability—and we’re deploying these systems at scale while still developing the frameworks to secure them.

The Fragmentation Problem: History Repeating Itself

Just as with IoT, we’re seeing AI security and governance develop in a fragmented way:

Geographic Fragmentation:

  • The EU has the AI Act
  • Canada is developing its Artificial Intelligence and Data Act (AIDA)
  • The US has various state-level initiatives and sector-specific regulations
  • China has its own AI governance framework
  • Each jurisdiction has different requirements, timelines, and enforcement mechanisms

Industry Fragmentation:

  • Healthcare AI follows different standards than financial AI
  • Automotive AI operates under separate frameworks from social media AI
  • Military and defense AI has classified governance structures
  • No unified cross-industry security standards exist

Technical Fragmentation:

  • Different AI architectures require different security approaches
  • Securing classical machine learning differs from securing deep learning
  • Generative AI introduces unique vulnerabilities distinct from predictive AI
  • No standardized security testing or certification process exists

This fragmentation creates the same problems we see in IoT:

  • Security gaps between different frameworks
  • Compliance complexity for organizations operating across jurisdictions
  • Inconsistent protection for individuals
  • Difficulty establishing accountability when things go wrong

The Critical Questions We Must Answer Now

As cybersecurity professionals, business leaders, and members of a society increasingly governed by AI systems, we need to confront these questions honestly:

1. Will we establish robust AI governance frameworks before widespread deployment?

Or are we already past that point? Many would argue we’ve already missed this window. AI systems are already deployed at scale. The question becomes: Can we retrofit governance onto systems already in production, or do we need to fundamentally rethink our approach?

2. Can we learn from IoT’s fragmented security landscape?

We have a clear case study in IoT of what happens when governance lags behind deployment. The question is whether we have the institutional will to apply those lessons. So far, the evidence is mixed.

3. How do we balance innovation speed with safety protocols?

This is the eternal tension in technology development. Move too slowly, and you stifle innovation and economic competitiveness. Move too quickly, and you deploy dangerous systems that harm people. Where’s the right balance? And who gets to decide?

4. Who is accountable when AI systems fail or cause harm?

When an autonomous vehicle makes a fatal error, who’s responsible? The manufacturer? The software developer? The training data provider? The owner who provided problematic training examples? The regulatory body that approved it?

IoT never clearly answered this question. We can’t afford the same ambiguity with AI.

5. Can we develop security frameworks that work across AI architectures?

Unlike IoT devices, which are physical objects with defined interfaces, AI systems are dynamic, learning, and evolving. They don’t just execute code—they modify their own behavior based on new data. Traditional security frameworks weren’t designed for this. Do we need entirely new paradigms?

What Different Approaches to AI Governance Look Like

The good news is that we’re not starting from zero. Various approaches to AI governance are emerging, each with different strengths and limitations:

The Regulatory Approach (EU Model)

The European Union’s AI Act represents the most comprehensive regulatory framework to date. It categorizes AI systems by risk level and imposes requirements accordingly:

  • Unacceptable risk: Banned outright (e.g., social scoring systems)
  • High risk: Strict requirements for transparency, human oversight, and security
  • Limited risk: Transparency obligations
  • Minimal risk: No specific requirements

Strengths: Clear rules, mandatory compliance, consumer protection
Challenges: May slow innovation, enforcement complexity, geographic limitations

The Industry Self-Regulation Approach (US Model)

The United States has largely favored industry-led initiatives and sector-specific regulations rather than comprehensive federal AI legislation.

Strengths: Flexibility, industry expertise, faster adaptation
Challenges: Inconsistent standards, potential conflicts of interest, limited enforcement

The Principles-Based Approach (Canadian Model)

Canada’s approach emphasizes high-level principles (fairness, transparency, accountability) with guidance rather than rigid rules.

Strengths: Adaptable to rapid technological change, encourages responsible innovation
Challenges: May lack enforcement teeth, interpretation varies, potential for gaming

The Technical Standards Approach

Organizations like NIST (the US National Institute of Standards and Technology) are developing technical frameworks for AI security and risk management, most visibly the NIST AI Risk Management Framework.

Strengths: Technically rigorous, industry-agnostic, actionable
Challenges: Voluntary adoption, may lag behind AI development

Each approach has merit. The question is whether we can integrate them into a coherent framework—or whether we’re destined for the same fragmentation we see in IoT.

The Unique Security Challenges AI Introduces

It’s worth understanding why AI security is fundamentally different from traditional cybersecurity or even IoT security:

1. Model Poisoning and Adversarial Attacks

Attackers can manipulate AI behavior by corrupting training data or crafting inputs designed to fool models. This isn’t about exploiting code vulnerabilities—it’s about exploiting how AI learns.
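
To illustrate the second half of that claim, here is a toy, FGSM-style attack on a hand-rolled logistic regression classifier. Real attacks target deep networks, and this tiny 2-D example needs an exaggerated perturbation, but the mechanics are the same: nudge the input in whichever direction increases the model's loss.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a tiny logistic regression on two well-separated 2-D blobs.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # cross-entropy gradient w.r.t. weights
    b -= 0.1 * float(np.mean(p - y))

x = np.array([2.0, 2.0])                  # an unambiguous class-1 input
print(f"clean prediction:       P(class 1) = {sigmoid(x @ w + b):.3f}")

# FGSM-style attack: step the INPUT in the sign of the loss gradient w.r.t. the input.
# For logistic regression with true label y = 1, d(loss)/dx = (p - 1) * w.
eps = 3.0                                 # exaggerated budget; in high-dimensional image
                                          # space, a visually imperceptible eps suffices
grad_x = (sigmoid(x @ w + b) - 1.0) * w
x_adv = x + eps * np.sign(grad_x)
print(f"adversarial prediction: P(class 1) = {sigmoid(x_adv @ w + b):.3f}")
```

The model's code is untouched and bug-free; the attacker exploits only the learned decision boundary.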

2. The Black Box Problem

Many advanced AI systems, particularly deep learning models, are essentially black boxes. Even their creators can’t fully explain how they arrive at specific decisions. How do you secure something you don’t fully understand?

3. Emergent Behavior

AI systems can exhibit behaviors that weren’t explicitly programmed and weren’t anticipated by their creators. These emergent properties can create security vulnerabilities that don’t exist in traditional software.

4. The Dynamic Threat Surface

Unlike traditional software that remains static until updated, AI systems continuously evolve as they learn from new data. The threat surface is constantly changing, making traditional security testing inadequate.

5. Data Dependency

AI security isn’t just about securing code—it’s about securing data pipelines, training datasets, and the entire machine learning lifecycle. A breach anywhere in this chain can compromise the entire system.
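
One link in that chain, training-data integrity, can borrow directly from existing security practice. Here is a minimal sketch using only the Python standard library: fingerprint a dataset at the moment it is reviewed and approved, then refuse to train if the fingerprint no longer matches. (The file paths and manifest format are placeholders, not a standard.)

```python
import hashlib
import json
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """SHA-256 over a file's contents; for a directory, hash its files in sorted order."""
    h = hashlib.sha256()
    files = sorted(path.rglob("*")) if path.is_dir() else [path]
    for f in files:
        if f.is_file():
            h.update(f.name.encode())
            h.update(f.read_bytes())
    return h.hexdigest()

def approve(path: Path, manifest: Path) -> None:
    """Record the fingerprint at the moment the dataset is reviewed and approved."""
    manifest.write_text(json.dumps({"sha256": dataset_fingerprint(path)}))

def verify_before_training(path: Path, manifest: Path) -> None:
    """Abort training if the data has changed since approval (possible poisoning)."""
    expected = json.loads(manifest.read_text())["sha256"]
    if dataset_fingerprint(path) != expected:
        raise RuntimeError(f"{path} no longer matches its approved fingerprint")

# Hypothetical usage:
# approve(Path("data/train.csv"), Path("data/train.manifest.json"))
# ...later, in the training job...
# verify_before_training(Path("data/train.csv"), Path("data/train.manifest.json"))
```

This doesn't solve AI security, but it closes one door: nobody silently swaps or appends training data between approval and training.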

6. Explainability and Auditability

When something goes wrong with traditional software, you can trace the bug. With AI, the decision-making process may be opaque, making it difficult to identify when a system has been compromised or is behaving incorrectly.

What Businesses Need to Do Right Now

If you’re a business deploying or considering AI systems, you can’t wait for perfect governance frameworks. Here’s what you should be doing today:

Immediate Actions:

1. Conduct AI Risk Assessments

Before deploying any AI system, assess the following (a simple sketch for recording the answers follows the list):

  • What decisions will the AI make?
  • What are the potential consequences of errors?
  • What data will it process and learn from?
  • What security vulnerabilities could be exploited?
  • What regulatory requirements apply?
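
These questions only help if the answers are written down and revisited. One hypothetical way to force that discipline is to make the assessment a structured record; the field names below are my own invention, not any framework's:

```python
from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    """One record per AI system; completed before deployment, revisited on every change."""
    system_name: str
    decisions_made: list[str]          # what the AI decides or recommends
    error_consequences: str            # worst credible outcome of a wrong decision
    data_sources: list[str]            # what it processes and learns from
    known_attack_surfaces: list[str]   # e.g., data poisoning, adversarial inputs
    applicable_regulations: list[str]  # e.g., EU AI Act risk tier, sector rules
    human_oversight: str               # who can override, and how quickly
    approved: bool = False             # flipped only after review and sign-off

# Hypothetical example for an imaginary hiring-screen model:
record = AIRiskAssessment(
    system_name="resume-screening-model-v2",
    decisions_made=["rank applicants", "flag candidates for interview"],
    error_consequences="qualified candidates screened out; discrimination exposure",
    data_sources=["application forms", "historical hiring outcomes"],
    known_attack_surfaces=["training-data poisoning", "gamed keyword stuffing"],
    applicable_regulations=["EU AI Act: high-risk (employment)", "local human-rights law"],
    human_oversight="recruiter reviews every automated rejection",
)
assert not record.approved  # nothing ships until the record is reviewed and signed off
```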

2. Implement AI-Specific Security Controls

Traditional security isn’t enough. You need:

  • Data validation and sanitization for training datasets
  • Adversarial testing to identify potential manipulations
  • Continuous monitoring for unexpected AI behavior (see the sketch after this list)
  • Human oversight for high-stakes decisions
  • Audit trails for AI decision-making processes
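
Of these controls, continuous monitoring tends to be the least familiar to traditional security teams, so here is one hedged sketch of the idea: compare the live distribution of a model's outputs against a baseline captured at deployment, and alert when they diverge. The Population Stability Index and the 0.25 threshold below are common industry conventions, not a formal standard.

```python
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index between deployment-time and live model scores."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], live.min()) - 1e-9   # widen so every live score lands in a bin
    edges[-1] = max(edges[-1], live.max()) + 1e-9
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    live_frac = np.histogram(live, edges)[0] / len(live)
    base_frac = np.clip(base_frac, 1e-6, None)    # avoid dividing by or logging zero
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

rng = np.random.default_rng(2)
baseline = rng.beta(2, 5, 10_000)   # score distribution captured at deployment
steady = rng.beta(2, 5, 1_000)      # live scores: behavior unchanged
drifted = rng.beta(5, 2, 1_000)     # live scores: the model's behavior has shifted

for name, live in [("steady", steady), ("drifted", drifted)]:
    value = psi(baseline, live)
    status = "ALERT - investigate" if value > 0.25 else "ok"  # 0.25: common rule of thumb
    print(f"{name}: PSI = {value:.3f} -> {status}")
```

Whether the shift comes from poisoning, drift, or an upstream data change, the point is the same: a model that is quietly behaving differently should page a human.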

3. Establish Clear Accountability

Define who is responsible for:

  • AI system security
  • Training data quality
  • Model behavior monitoring
  • Incident response when AI systems fail
  • Ongoing governance and compliance

4. Document Everything

Maintain comprehensive documentation of:

  • Training data sources and characteristics
  • Model architecture and decision-making logic
  • Testing and validation procedures
  • Security measures and controls
  • Incident response plans

Longer-Term Strategic Actions:

5. Develop Internal AI Governance Frameworks

Don’t wait for external regulations to be finalized. Establish your own internal standards for:

  • When AI is appropriate to use
  • Required security and testing standards
  • Human oversight requirements
  • Transparency and explainability standards

6. Invest in AI Security Expertise

Build or acquire expertise in:

  • Machine learning security
  • Adversarial AI techniques
  • AI risk assessment methodologies
  • Regulatory compliance across jurisdictions

7. Participate in Industry Standards Development

Engage with organizations developing AI security and governance standards. Your real-world experience can help shape more practical and effective frameworks.

8. Plan for Regulatory Evolution

AI regulations will continue to evolve. Build systems with flexibility to adapt to new requirements without complete overhauls.

The Stakes: Why This Matters More Than IoT

Some might argue I’m being alarmist. After all, we survived the IoT security crisis. Markets adjusted. Some standards emerged. Life went on.

But AI is different in critical ways:

Scale of Impact: IoT vulnerabilities typically affect specific devices or networks. AI vulnerabilities can affect entire populations simultaneously. A flawed healthcare AI could impact millions. A compromised financial AI could trigger market crashes.

Autonomy: IoT devices execute programmed instructions. AI systems make independent decisions. The potential for unintended consequences is fundamentally greater.

Speed: AI operates at machine speed. A compromised AI can make thousands of harmful decisions in the time it takes a human to recognize there’s a problem.

Opacity: When an IoT device fails, it’s usually obvious. When an AI system fails, it may be making biased or incorrect decisions for months before anyone notices.

Systemic Risk: AI systems are increasingly interconnected. An AI in one system might train on outputs from another AI, creating cascading vulnerabilities across entire ecosystems.

Simply put: We can’t afford to take twenty years to get AI governance right.

A Call to Action for the Cybersecurity Community

We have a responsibility—and a unique window of opportunity.

Unlike IoT, where security professionals were often brought in after deployment to clean up messes, we have a chance to shape AI security and governance proactively. But that window is closing rapidly.

Here’s what cybersecurity professionals need to do:

  1. Advocate loudly for comprehensive AI security frameworks before another major incident forces reactive policy-making
  2. Share knowledge and collaborate across organizations and borders. AI security can’t be proprietary—the risks are too systemic
  3. Push for transparency and explainability in AI systems, even when it’s commercially inconvenient
  4. Demand accountability mechanisms before deploying AI systems in high-stakes environments
  5. Educate business leaders about AI security risks before they’re pressured to deploy systems they don’t fully understand
  6. Contribute to standards development through organizations like IEEE, NIST, ISO, and industry groups
  7. Test and red-team AI systems with the same rigor we apply to critical infrastructure
  8. Refuse to rubber-stamp insecure AI deployments even under business pressure

The Path Forward: Learning from Mistakes

IoT taught us painful lessons about the cost of prioritizing innovation over security and governance. We’re now living with the consequences: fragmented standards, persistent vulnerabilities, and an ecosystem that will take decades more to secure properly.

We have a choice with AI. We can repeat those mistakes—deploy rapidly, deal with consequences later, spend decades retrofitting security into fundamentally flawed systems. Or we can do something different.

Doing something different doesn’t mean halting AI development. It means:

  • Developing security and governance frameworks in parallel with AI capabilities, not as an afterthought
  • Establishing clear accountability before systems are deployed at scale
  • Creating unified international standards rather than fragmented regional approaches
  • Investing in AI security research at the same level we invest in AI capabilities
  • Building explainability and auditability into AI systems from the ground up
  • Requiring rigorous testing before deploying AI in high-stakes environments

The technology community, policy makers, and businesses need to work together—not sequentially, but simultaneously.

Final Thoughts: The Question We Must Answer

The parallel between IoT and AI governance isn’t perfect, but it’s instructive. Both involve rapidly deployed technologies with profound security implications. Both face challenges of fragmented standards and reactive policy-making. Both create risks that extend beyond individual organizations to affect entire societies.

The question isn’t whether AI will face governance challenges—it already is. The question is whether we’ll apply the lessons from IoT’s security crisis or repeat them.

Twenty years from now, will we look back at 2024 the same way we now look back at the early 2000s and IoT—with regret at missed opportunities and preventable failures? Or will we look back and recognize this as the moment when the technology community, policy makers, and businesses came together to get ahead of the risks?

The answer depends on choices we make right now.

As cybersecurity professionals, we have both the expertise and the responsibility to push for comprehensive AI security standards today—not after incidents force our hand.

What’s your take? Are we moving fast enough on AI governance, or are we heading toward another two decades of fragmented security? More importantly, what are you doing in your organization to address these challenges?

The conversation needs to happen now. The decisions need to be made now. The frameworks need to be built now.

We’ve seen this movie before. Let’s write a different ending this time.


About the Author

Jay Patel is the founder of BySecIoT, a cybersecurity firm specializing in IoT security for Canadian small businesses. As an international student turned security professional, Jay combines technical expertise with a passion for making cybersecurity accessible and practical for organizations of all sizes. Having worked extensively with IoT security challenges, he now focuses on helping businesses navigate the evolving landscape of connected device security and emerging AI risks.


Related Resources

Want to assess your organization’s IoT security? BySecIoT offers comprehensive vulnerability assessments and security audits for businesses concerned about their connected device security.

Concerned about AI security in your organization? Contact us to discuss how emerging AI governance frameworks might affect your business and what steps you should take now.

📧 contact@byseciot.ca
🌐 byseciot.ca
📍 London, Ontario | Serving Canadian Businesses


Have thoughts on AI governance and security? Drop a comment below or reach out—I’m always interested in hearing different perspectives on these critical challenges.
