Right. I’ve already written today about Iran’s internet going dark, North Korea flooding npm with spyware, and a Chrome zero-day that lets extensions hijack your AI assistant… actually I’m still writing on that – stay tuned. I was going to take a break and refill my coffee when I saw Fastly’s fourth annual Global Security Research Report drop on March 1st and I thought — no. No, I can’t let this one slide, because it’s the meta-story behind all the other stories.
The headline from the Fastly report: AI-first businesses are hurtling towards a cybersecurity crisis by failing to modernize their security at the same rate as their AI adoption. Sixty-nine percent of Southeast Asian respondents said AI was a contributing factor in their most recent cybersecurity incident. Sixty-nine percent. And the same pattern holds globally.
This is the thing I’ve been angry about since the AI hype cycle hit security: everybody bought the “AI improves your security posture” marketing pitch while simultaneously introducing agentic workflows, decentralized data flows, and AI inference infrastructure into their environments without the security controls to match. You brought a new attack surface in through the front door and called it innovation.
What Fastly Actually Found
Per CRN Asia’s coverage of the report and Fastly’s own release, the core finding is brutal: companies integrating AI into key processes — what Fastly defines as “AI-first businesses” — are paying a specific price for the gap between AI adoption speed and security modernization speed. Specifically:
- Longer recovery times after breaches
- Higher breach costs
- Expanding attack surfaces driven by agentic workflows and decentralized data flows
Marshall Erwin, CISO at Fastly, said it clearly: “The speed of AI adoption is reshaping security infrastructure almost overnight.” His prescription: “Modernize security at the same rate” as AI adoption. Which sounds obvious. And yet.
Rachel Ler, Fastly’s AVP for Asia, pointed to a specific emerging problem: shadow AI. Employees using AI tools that IT hasn’t approved, hasn’t inventoried, and hasn’t secured. In 2018 we called it shadow IT. Now it’s shadow AI, and it’s potentially worse because AI tools often have deep integrations with data sources, generate outputs that get embedded into workflows, and may be trained on or have access to sensitive internal data.
The report also highlights AI scraper activity as a cost driver — AI crawlers hammering infrastructure at a scale that’s pushing operational disruption and costs into six-figure territory. This is the part that doesn’t get enough attention: your infrastructure is being consumed by other people’s AI training pipelines, and that has direct security and operational implications.
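If you want to know how much of that scraping is hitting you, the user agents are (mostly) honest. Here’s a minimal sketch that tallies known AI crawler traffic, assuming a standard combined-format access log at `access.log`; the crawler list is illustrative and non-exhaustive, and determined scrapers will spoof their UA anyway:

```python
# Hedged sketch: tally requests and bytes served to known AI crawler user
# agents in a combined-format access log. Log path and UA list are
# assumptions -- adjust both for your own infrastructure.
import re
from collections import Counter

# Publicly documented AI crawler user-agent substrings (non-exhaustive).
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Bytespider",
               "PerplexityBot", "Amazonbot"]

# Combined log format: IP identd user [time] "request" status bytes "referer" "UA"
LOG_LINE = re.compile(
    r'^\S+ \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+|-) "[^"]*" "([^"]*)"'
)

hits = Counter()
bytes_served = Counter()

with open("access.log") as f:          # assumed log location
    for line in f:
        m = LOG_LINE.match(line)
        if not m:
            continue
        size, ua = m.groups()
        for bot in AI_CRAWLERS:
            if bot in ua:
                hits[bot] += 1
                bytes_served[bot] += 0 if size == "-" else int(size)
                break

for bot, n in hits.most_common():
    print(f"{bot}: {n} requests, {bytes_served[bot] / 1e9:.2f} GB served")
```

Run that against a month of logs and you’ll have a number to put in front of whoever owns the bandwidth bill.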
Why This Pattern Is Familiar and Infuriating
I’ve seen this exact movie before. Multiple times. Different technology, same script.
Cloud happened. Organisations rushed to move workloads to AWS, Azure, GCP without the security expertise or controls to match. The result: misconfigured S3 buckets, exposed cloud credentials, publicly visible API keys. I wrote about exactly this failure mode in the context of a threat intelligence firm leaking credentials via an exposed AWS bucket. A threat intelligence firm. The people whose entire job is knowing better.
Mobile happened. Organisations deployed BYOD policies without MDM, without app controls, without thinking about what happens when a personal device with corporate data on it gets lost or handed to a teenager.
Now AI is happening. And the pattern is identical: move fast, ship features, worry about security later. “Later” is apparently now.
The difference with AI is the blast radius. Cloud misconfigurations exposed data that was sitting in a bucket. AI misconfigurations can expose data that’s actively being processed, synthesized, and output across multiple workflows simultaneously. A compromised agentic AI workflow with access to your CRM, your email, and your financial systems is a fundamentally different problem than a misconfigured S3 bucket.
The IBM X-Force 2026 report findings I covered make this worse: attackers are using AI to accelerate their operations. So you have AI-expanded attack surfaces on the defender side, AI-accelerated attacks on the offensive side, and the gap is widening. This is not a drill.
The Attack Surface Inventory Problem
Here’s the thing that Marshall Erwin’s quote at Fastly is dancing around: most organisations don’t have an accurate inventory of their AI attack surface.
Do you know:
- Which AI tools your employees are using? (Hint: IT’s count is always lower than the real number)
- Which AI models have access to which internal data sources?
- What data your AI inference infrastructure is processing, storing, and potentially leaking?
- Which third-party AI APIs your applications are calling, and what data you’re sending to them?
- What your agentic workflows can access and act upon autonomously?
If you can’t answer all of those questions with confidence, you have an undocumented AI attack surface. And undocumented attack surfaces are where breaches happen.
The agentic workflow piece is particularly gnarly. Traditional security thinking is about data at rest and data in transit. Agentic AI creates a new category: data in action — being autonomously processed, synthesized, and acted upon by systems that can make decisions and take actions without human review. If those systems are compromised or manipulated, the damage isn’t limited to data exfiltration. Depending on what your AI agents can do, an attacker who compromises them might be able to take autonomous actions within your environment.
Shadow AI Is the New Shadow IT
In 2015, the shadow IT problem was “employees using Dropbox instead of SharePoint.” Annoying, somewhat risky, manageable.
In 2026, the shadow AI problem is “employees using Claude or ChatGPT or Gemini to process sensitive internal documents, customer data, and proprietary IP without IT knowing.” That data is going to an external API. Depending on the service’s terms, it may be used for model training. It’s potentially being logged somewhere. And IT has no visibility into any of it.
The CrowdStrike 2026 threat report’s 27-second breakout time finding means that when an attacker gets a foothold in your environment, they move laterally faster than any human team can respond. If they get into an AI-connected system with broad data access, that 27-second window is even more catastrophic.
This connects to the sociotechnical dimension that I keep hammering on — it’s not just the technology that’s broken, it’s the human clusterfuck in cybersecurity that sits underneath all of it. Employees adopting shadow AI aren’t doing it to be malicious. They’re doing it because it makes their work easier. And IT security teams screaming “no AI tools” are going to lose that battle every single time, because the productivity gains are real. The answer is governance, not prohibition.
What You Actually Need to Do
Build your AI inventory now. This doesn’t require a tool purchase. Start with a survey to department heads: what AI tools is your team using? You’ll be surprised. Then cross-reference with network logs — what external AI APIs are being called from your network? Okta SSO logs? App store installs on managed devices?
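Here’s a minimal sketch of the log cross-referencing step, assuming you can export outbound DNS queries as one hostname per line; the domain list is a starting point, not a complete census of AI providers:

```python
# Hedged sketch: grep outbound DNS logs for lookups of well-known hosted-AI
# API endpoints. Log format and path are assumptions; extend the domain
# list for your environment.
from collections import Counter

AI_API_DOMAINS = [
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",   # Gemini API
    "api.mistral.ai",
    "api.cohere.com",
    "openai.azure.com",                    # Azure OpenAI (matched by subdomain)
]

seen = Counter()
with open("dns_queries.log") as f:         # assumed: one queried hostname per line
    for line in f:
        host = line.strip().lower()
        for domain in AI_API_DOMAINS:
            if host == domain or host.endswith("." + domain):
                seen[domain] += 1
                break

for domain, count in seen.most_common():
    print(f"{domain}: {count} lookups")

# Any hit here that isn't in your approved-tools register is shadow AI.
```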
Classify your AI data flows. For every AI integration you find: what data is going in? What could come out? What’s the retention policy of the provider? If sensitive data — customer PII, health data, financial records, IP — is going into an AI tool your security team hasn’t reviewed, that’s a gap.
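One way to make that classification concrete is a data-flow register, even a crude one. A sketch, with assumed field names and classification labels; map them onto whatever data-classification scheme you already run:

```python
# Hedged sketch: a minimal data-flow register for AI integrations. Field
# names and classification labels are assumptions, not a standard.
from dataclasses import dataclass

SENSITIVE = {"customer_pii", "health", "financial", "proprietary_ip"}

@dataclass
class AIDataFlow:
    tool: str                  # e.g. "ChatGPT (personal accounts)"
    data_in: set[str]          # classification labels for inbound data
    provider_retention: str    # what the provider's terms actually say
    security_reviewed: bool

    def is_gap(self) -> bool:
        # Sensitive data flowing into an unreviewed tool is exactly the
        # gap the report is describing.
        return bool(self.data_in & SENSITIVE) and not self.security_reviewed

flows = [
    AIDataFlow("ChatGPT (personal accounts)", {"customer_pii"}, "unknown", False),
    AIDataFlow("Approved summarizer", {"internal_docs"}, "30 days, no training", True),
]

for flow in flows:
    if flow.is_gap():
        print(f"GAP: {flow.tool} receives sensitive data without review")
```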
Secure your AI inference infrastructure like production servers. If you’re running models internally, treat the inference stack like any other production system: network segmentation, access controls, logging, patching. This infrastructure is often spun up quickly by ML teams who aren’t thinking about security and then forgotten.
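A quick way to find those forgotten boxes: probe your inference endpoints without credentials and see what answers. The hostnames below are hypothetical placeholders; an unauthenticated 200 from an inference API is a finding:

```python
# Hedged sketch: probe internal inference endpoints for unauthenticated
# access. Hostnames and paths are hypothetical -- substitute your own.
import requests

INFERENCE_ENDPOINTS = [
    "http://ml-inference-01.internal:8000/v1/models",   # hypothetical hosts
    "http://ml-inference-02.internal:8080/health",
]

for url in INFERENCE_ENDPOINTS:
    try:
        resp = requests.get(url, timeout=5)   # deliberately no credentials
    except requests.RequestException as exc:
        print(f"{url}: unreachable ({exc})")
        continue
    if resp.status_code in (401, 403):
        print(f"{url}: auth enforced ({resp.status_code}), good")
    else:
        print(f"{url}: responded {resp.status_code} with no credentials, investigate")
```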
Get ahead of agentic workflows before they proliferate. If your organisation is experimenting with AI agents that can take autonomous actions — booking meetings, sending emails, modifying documents, calling APIs — establish a security review process for those workflows before they go to production. What data can the agent access? What can it act upon? What’s the blast radius if it’s compromised or manipulated?
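That review process needs teeth, which in practice means a policy gate sitting between the agent and its tools. A minimal sketch, with hypothetical tool names and a stand-in approval hook; the point is the shape, an explicit allowlist plus human review for everything else:

```python
# Hedged sketch: gate agent tool calls against an explicit allowlist and
# route everything else to human review. Tool names and the approval hook
# are hypothetical stand-ins for your own stack.
from typing import Any, Callable

# Actions the agent may take autonomously. Everything else needs a human.
AUTONOMOUS_ALLOWLIST = {"calendar.read", "docs.search"}

def require_approval(action: str, args: dict[str, Any]) -> bool:
    # Stand-in for a real approval flow (ticket, chat prompt, etc.).
    print(f"HOLD: {action}({args}) queued for human review")
    return False

def gated_call(action: str, args: dict[str, Any],
               tools: dict[str, Callable[..., Any]]) -> Any:
    """Execute an agent-requested action only if policy allows it."""
    if action not in tools:
        raise ValueError(f"unknown tool: {action}")
    if action not in AUTONOMOUS_ALLOWLIST and not require_approval(action, args):
        return None  # blocked pending review; log this too
    return tools[action](**args)

# Usage: the agent asks to send an email; the gate holds it for review.
tools = {"calendar.read": lambda **kw: "...", "email.send": lambda **kw: "sent"}
gated_call("email.send", {"to": "cfo@example.com"}, tools)
```

The allowlist also gives you your blast-radius answer for free: whatever is in it is what a compromised agent can do without anyone noticing.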
Address the basics first. Erwin’s quote about modernizing security at the same rate as AI adoption is right, but it needs a corollary: if your basics aren’t solid — MFA not everywhere, patching lagging, no network segmentation — adding AI governance on top of that foundation is building on sand. The IBM X-Force report that I covered put it bluntly: AI turbocharges attackers while your basics still suck. Fix the basics simultaneously.
The Industry Wake-Up Call Nobody’s Listening To
Fastly’s report, IBM’s X-Force, CrowdStrike’s findings — they’re all saying the same thing from different angles: the gap between how fast organisations adopt new technology and how fast they secure it is where breaches happen. Every single time.
I’ve written for years about why a Cyber 9/11 has always been closer than we admit. The AI adoption wave is creating the largest unplanned expansion of organisational attack surfaces in history, happening faster than any previous technology transition. Faster than cloud. Faster than mobile. And with AI-accelerated attackers on the other side of the equation.
Fastly’s CISO is right. You need to modernize security at the same rate as AI adoption. The question is whether anyone’s actually listening, or whether we’re going to have this conversation again in 12 months while reviewing a stack of breach disclosures from 2026’s AI-enabled catastrophes.
I suspect the latter. Prove me wrong.
