IBM dropped their 2026 X-Force Threat Intelligence Index on February 25th and I have to say — reading through a 50-page threat report first thing in the morning is my version of coffee-fueled rage therapy. Because every single year, the finding is essentially the same: attackers are getting better, faster, and more creative, while defenders are still failing at the stuff that was solved in 2015.
This year’s edition has some genuinely alarming numbers. And some numbers that should not be alarming to anyone who’s been paying attention for more than six months. Unfortunately, the security industry has the collective institutional memory of a goldfish, so here we are.
What the Report Actually Says
According to IBM’s 2026 X-Force Threat Intelligence Index, released February 25th, 2026, here are the headline numbers:
- 44% rise in attacks originating from exploitation of public-facing applications — primarily due to missing authentication controls and AI-driven vulnerability discovery
- 49% surge in active ransomware and extortion groups year-over-year
- Vulnerability exploitation caused 40% of all incidents investigated by X-Force
- Over 300,000 ChatGPT credentials were found exposed in underground forums in 2025
- North Korean IT worker schemes are now using AI-driven image manipulation for synthetic identities and AI translation tools to operate across global marketplaces
The AI piece is the one getting all the headlines, and sure, it’s genuinely concerning. Attackers are using AI to speed up their research cycles, analyze large datasets of potential targets and vulnerabilities, and iterate on attack paths in real time. What used to take a crew of analysts days now takes hours or minutes. The attacker productivity curve has bent sharply upward.
But here’s what I want to focus on, because it’s buried in the report and it’s the part that actually matters for day-to-day security practice: the number one finding from X-Force Red penetration tests is still misconfigured access controls. Not some exotic AI-powered zero-day. Misconfigured access controls.
The Part That Makes Me Want to Throw My Monitor
Let me read that back to you. IBM’s red team — the people paid to break into organizations and document how they did it — found that the most common entry point is misconfigured access controls. In 2026. With AI. With all of the tools, frameworks, best practices, compliance requirements, vendor security products, security conferences, awareness training, and billions of dollars spent on cybersecurity over the last two decades.
We’re still failing at access controls.
You know what misconfigured access control looks like? An admin account with a weak password and no MFA. A service account with domain admin rights that was supposed to have read-only access. An exposed management interface on a firewall that has “admin/admin” as credentials because nobody changed the default. A developer who granted themselves overly broad AWS IAM permissions six months ago and nobody’s reviewed it since.
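The AWS IAM case is the easiest of those to check mechanically. As an illustration only (a minimal sketch over a hypothetical policy document — a real review should also resolve managed policies, groups, and permission boundaries, ideally with something like IAM Access Analyzer), a few lines of Python can flag policy statements that grant wildcard actions or resources:

```python
def find_overly_broad_statements(policy: dict) -> list[dict]:
    """Return Allow statements in an IAM policy document that use
    wildcard actions (``*`` or ``service:*``) or a wildcard resource."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # IAM allows a single bare statement object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings


# Hypothetical policy a developer granted themselves "temporarily"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}
print(len(find_overly_broad_statements(policy)))  # prints 1: the second statement
```

Ten minutes of scripting like this finds the kind of access the red team finds. The difference is who finds it first.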
These aren’t sophisticated failures. These are basic failures. And the reason they persist is exactly what I’ve been writing about for years: organizations treat security as a compliance activity rather than an operational discipline. They implement tools and tick boxes and file reports and then wonder why the red team walked in through the front door.
The 300,000 exposed ChatGPT credentials are another beauty. Those credentials didn’t get stolen through some fancy AI-powered attack. They got stolen through infostealers — commodity malware that’s been around for years — because people are using their corporate credentials to log into AI tools on personal devices and home computers that are running malware. Infostealer, credential exfiltration, credential stuffing into corporate SSO, done. One click on a phishing link on a home laptop, and now your enterprise AI environment is compromised.
The 49% Ransomware Group Surge — What’s Driving It
The 49% year-over-year surge in active ransomware and extortion groups reflects a structural shift in how ransomware operates. Ransomware-as-a-Service (RaaS) has commoditized the attack toolkit to the point where you don’t need to be technically sophisticated to run ransomware campaigns. You need an affiliate relationship with a ransomware group, some initial access brokers, and the willingness to be a criminal.
As I wrote about when Lazarus Group — North Korea’s premier state-sponsored hacking crew — started renting Medusa ransomware to hit US hospitals, the RaaS model has blurred the line between nation-state actors and criminal gangs. State actors are using criminal infrastructure. Criminal groups are operating with state-level sophistication. The categories that helped us think about threat actors have become almost meaningless.
The “ecosystem fragmentation” language IBM uses — where more groups are active but publicly disclosed victim counts are also rising — tells you that the market for ransomware has expanded. More players. More targets. More victims. More disclosed incidents because regulators in multiple jurisdictions now require disclosure. The actual underlying number of ransomware incidents is almost certainly higher than the disclosed figure, because plenty of organizations still pay quietly and tell nobody.
The AI-Acceleration Problem Is Real, But Misframed
The media coverage of this report is going to focus almost entirely on “AI is powering cyberattacks now” and produce fifty breathless articles about ChatGPT being used to write malware. That framing, while not wrong, is also not useful.
AI is doing for attackers what it does for everyone: it’s an accelerator. It speeds up the parts of the attack lifecycle that required human time and attention — research, reconnaissance, analysis, writing convincing phishing content, iterating on exploit code. It doesn’t change the fundamental mechanics of how attacks work. Attackers still need an initial foothold. They still need to exploit a vulnerability or steal a credential. They still need to move laterally and find their target data. They still need to exfiltrate or encrypt.
What changes is the pace. The seventy-two-hour window between initial access and ransomware deployment that was “fast” in 2022 is now potentially measured in hours. The research phase for identifying vulnerable targets that used to require a skilled analyst is now partially automated. The quality of social engineering content that used to require a native-speaking human is now accessible to anyone with an AI API key.
This means your detection and response timelines need to compress to match. The assumption that you have days to respond to an initial alert is now dangerously outdated.
The Fixer’s Advice
IBM’s report essentially tells you the same things I’ve been saying. The fact that a major security vendor’s flagship annual report reaches the same conclusions as a grumpy consultant ranting on a blog should tell you something about how obvious these problems are.
Fix your access controls first. Before you buy another security product. Before you implement another framework. Before you hire another consultant. Run a privilege review right now. How many accounts have more access than they need? How many service accounts have admin rights? How many MFA exceptions have been granted? Start there.
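Those three questions can be turned into a script against whatever your directory exports. A minimal sketch, assuming a hypothetical export format with `name`, `is_service_account`, `is_admin`, and `mfa_enabled` fields (the field names are mine, not from any particular directory product):

```python
def privilege_review(accounts: list[dict]) -> dict[str, list[str]]:
    """Bucket accounts into the two review categories from the advice above:
    service accounts holding admin rights, and human accounts without MFA."""
    report = {"admin_service_accounts": [], "mfa_exceptions": []}
    for acct in accounts:
        if acct.get("is_service_account") and acct.get("is_admin"):
            report["admin_service_accounts"].append(acct["name"])
        if not acct.get("is_service_account") and not acct.get("mfa_enabled"):
            report["mfa_exceptions"].append(acct["name"])
    return report


# Illustrative directory export
accounts = [
    {"name": "svc-backup", "is_service_account": True, "is_admin": True, "mfa_enabled": False},
    {"name": "jdoe", "is_service_account": False, "is_admin": False, "mfa_enabled": True},
    {"name": "ceo", "is_service_account": False, "is_admin": True, "mfa_enabled": False},
]
print(privilege_review(accounts))
```

If the output of a script like this is longer than you expected, that is the report. You don’t need a product to tell you this.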
Treat infostealer infections as full breaches. When an endpoint gets hit with an infostealer, the standard response is often to clean the machine and move on. Wrong. Every credential that was in a browser, in a password manager, in a clipboard history, or in a saved application session on that device should be considered compromised. Rotate all of them. Check for session token theft. Treat it like a breach of every system that device had access to.
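Scoping that rotation is mostly a set operation. A minimal sketch, assuming you can pull an inventory of saved credentials and live sessions per device (the data shapes and system names here are hypothetical):

```python
def breach_rotation_scope(device_artifacts: dict) -> set[str]:
    """Union every system whose credentials or sessions lived on a
    compromised endpoint; each one needs rotation and token revocation."""
    scope: set[str] = set()
    scope.update(device_artifacts.get("browser_saved_logins", []))
    scope.update(device_artifacts.get("password_manager_entries", []))
    scope.update(device_artifacts.get("active_sessions", []))
    return scope


# Illustrative artifact inventory from an infected laptop
infected_laptop = {
    "browser_saved_logins": ["corp-sso", "github"],
    "password_manager_entries": ["corp-sso", "aws-console"],
    "active_sessions": ["okta", "slack"],
}
print(sorted(breach_rotation_scope(infected_laptop)))
```

Note that "clean the machine" appears nowhere in that scope. Reimaging the endpoint is step zero; the rotation list is the actual incident.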
Compress your response timelines. If your incident response plan has steps measured in “within 24 hours,” recalibrate. In a world where AI is accelerating attacker operations, “within 24 hours” means you’re already ransomed. Your detection-to-containment window needs to be measured in hours, not days. That means automation, pre-approved response playbooks, and authority delegated to security operations teams to act without waiting for three levels of management approval.
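Measuring whether you actually meet an hours-based window is trivial if you log alert and containment timestamps. A minimal sketch (the incident records and the four-hour target are illustrative, not a recommendation for your environment):

```python
from datetime import datetime, timedelta


def sla_breaches(incidents: list[dict], max_window: timedelta) -> list[str]:
    """Return IDs of incidents whose detection-to-containment time
    exceeded the target window."""
    return [
        inc["id"]
        for inc in incidents
        if inc["contained_at"] - inc["detected_at"] > max_window
    ]


# Illustrative incident log
incidents = [
    {"id": "INC-1", "detected_at": datetime(2026, 3, 1, 9, 0),
     "contained_at": datetime(2026, 3, 1, 11, 30)},   # 2.5 hours: fine
    {"id": "INC-2", "detected_at": datetime(2026, 3, 2, 14, 0),
     "contained_at": datetime(2026, 3, 3, 16, 0)},    # 26 hours: already ransomed
]
print(sla_breaches(incidents, max_window=timedelta(hours=4)))  # ['INC-2']
```

Run something like this over your last quarter of incidents. If most of them land in the breach list against a four-hour target, your plan is written for the 2022 threat model.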
Don’t let AI tools become a credential sprawl problem. Your employees are using AI tools — with or without your blessing. If they’re using corporate email addresses and passwords to sign up for every AI product that shows up on Product Hunt, you have an exponentially growing attack surface. Get ahead of this with an approved AI tools list, SSO integration for approved tools, and clear policy on not reusing credentials.
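Even a crude allowlist check over IdP sign-in logs or expense reports surfaces the sprawl. A minimal sketch, where both the allowlist and the observed domains are hypothetical examples, not endorsements:

```python
# Example allowlist of sanctioned AI tools (hypothetical policy)
APPROVED_AI_TOOLS = {"chatgpt.com", "claude.ai"}


def unapproved_signups(observed_domains: list[str]) -> list[str]:
    """Flag AI-tool sign-up domains that aren't on the approved list."""
    return sorted(set(observed_domains) - APPROVED_AI_TOOLS)


observed = ["chatgpt.com", "random-ai-notes.app", "claude.ai", "ai-slide-maker.io"]
print(unapproved_signups(observed))  # ['ai-slide-maker.io', 'random-ai-notes.app']
```

The hard part isn’t the check; it’s getting the sign-up data in the first place, which is exactly why SSO integration for approved tools matters — it makes the shadow usage visible.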
As I wrote in my breakdown of the threat intelligence firm that left 400GB of credentials in an open AWS bucket, credential hygiene is a discipline that has to be enforced institutionally, not just recommended in a policy document.
The Call-Out
The IBM X-Force 2026 report is a well-researched document that will be read by CISOs, used to justify security budget requests, cited in board presentations, and then mostly ignored in terms of the actual operational changes it recommends.
Because the recommendations — fix access controls, improve patch management, compress response times, manage credentials properly — have been the same recommendations for a decade. And they keep getting ignored in favor of buying the next shiny detection product.
Forty-four percent rise in application exploitation. Forty-nine percent rise in ransomware groups. And the top finding from red team engagements is still misconfigured access controls.
Fix the basics. Everything else is noise.
