Clop Breached MSG via Third-Party Oracle EBS: 131K SSNs Gone

I had barely finished my write-up on the Marquis vs. SonicWall disaster — where a firewall vendor’s own backup service handed ransomware gangs the keys to a fintech company’s network — and I was sitting here telling myself that at least we had a lawsuit, at least someone was trying to hold a vendor accountable, when this lands in my inbox. Madison Square Garden. Clop. Oracle EBS. A third-party managed platform. A hundred and thirty-one thousand people who went to a Knicks game or a concert at Radio City Music Hall and are now dealing with their Social Security numbers being in criminal hands. And the kicker? The breach happened in August 2025. MSG confirmed it in March 2026. Seven months. Seven months of people walking around with compromised identities and no notification. I genuinely don’t know what’s worse — the breach itself or the response to it.

What Actually Happened

According to SecurityWeek (March 2, 2026), UpGuard (updated March 2026), and Comparitech (February 26, 2026), Madison Square Garden Entertainment — the company operating Madison Square Garden, Radio City Music Hall, the Beacon Theatre, and the Sphere in Las Vegas — has confirmed a data breach affecting 131,070 individuals. The attack vector: a zero-day vulnerability in Oracle’s E-Business Suite (EBS), a widely deployed enterprise resource planning platform. That Oracle EBS instance was not run by MSG directly — it was hosted and managed by an unnamed third-party vendor.

The Clop ransomware and extortion group executed this as a “one-to-many” campaign: they discovered the Oracle EBS zero-day, weaponised it, and simultaneously hit over 100 organisations before Oracle could patch, before anyone knew what was being targeted, and before a single security team had time to apply mitigations. MSG was one of more than a hundred victims caught in the same net at the same time. Clop claimed credit in November 2025. MSG declined to comment publicly at that point. Then, per Maine Attorney General filings in early 2026, notification letters started going out — initially covering 38,393 Maine residents, with the full confirmed total reaching 131,070 across all states.

The data stolen: full names, physical addresses, and Social Security numbers. The identity theft trifecta. A name, an address, and an SSN are enough to open lines of credit, file fraudulent tax returns, apply for government benefits under someone else’s identity, obtain medical care billed to the victim’s insurance, and cause a decade of financial damage that the victim has to unwind themselves. These aren’t abstract harms in a press release. There are 131,070 specific people — most of whom almost certainly don’t think of themselves as data breach victims because they bought concert tickets, not sensitive financial products — who are now permanently at elevated identity theft risk.

Clop’s One-to-Many Business Model — How It Works and Why It Keeps Working

Clop — or Cl0p if you’re into the stylistic choices of ransomware crews — has been running this playbook in some form since at least 2021. The pattern is consistent enough that you can describe it like a business model, because that’s exactly what it is.

They find a zero-day — or near-zero-day — in widely deployed enterprise software. File transfer platforms. ERP systems. Any software that’s installed at scale across hundreds or thousands of enterprise environments simultaneously. They don’t hit one target. They hit all of them at once, before the vendor patches, before defenders know what’s being targeted, before any security team has the opportunity to apply mitigations. They exfiltrate data. They publish victim names on their leak site on a rolling schedule. They demand payment to suppress the data or delay publication.

GoAnywhere MFT in early 2023: 130-plus organisations compromised simultaneously. MOVEit Transfer in May–June 2023: 600-plus organisations, tens of millions of individuals. Oracle EBS in August 2025: over 100 confirmed victims. Same playbook. Same group. Different platform. And here’s the thing that makes this particularly infuriating: Clop frequently skips encryption entirely. They don’t want the operational complexity of actually running ransomware: deploying encryptors, managing decryption keys, negotiating with victims who may or may not pay. Pure extortion — exfiltrate, threaten, publish, collect — is simpler and apparently more profitable.

As I’ve tracked in my research on how dark web monetisation infrastructure and extortion economics have evolved, the “find a shared platform, hit it at scale” approach is the logical commercial evolution of extortion economics. You don’t get a hundred times the effort for a hundred targets. You get approximately the same R&D effort for the zero-day discovery and exploit development, and then you scale it horizontally to every organisation running the vulnerable software. It’s a genuinely effective business model, and the repeated success of this specific crew against this specific category of target demonstrates that enterprise organisations have not adjusted their defensive posture to account for it.

The Third-Party Vendor Problem, Again

MSG’s Oracle EBS instance was, per their own breach notification language, “hosted and managed by a third-party vendor.” That vendor is unnamed. Let me dwell on that for a second. A hundred and thirty-one thousand people’s SSNs were compromised through a vendor that MSG has not named publicly. The victims don’t know which vendor lost their data. They can’t assess that vendor’s other clients or their broader risk exposure. They just get a letter telling them their SSN was stolen and here’s a free year of credit monitoring. Thanks for coming to our arena.

The operational reality here is that MSG had no meaningful visibility into the security posture of the platform managing their employee and customer data. They didn’t control the patching timeline for Oracle EBS. They didn’t have operational telemetry from the managed environment. They trusted the vendor to handle it. And when Clop hit the Oracle EBS zero-day, MSG found out the same way everyone finds out about Clop victims: from Clop’s own leak site announcement in November 2025.

This is structurally identical to what I documented in the Marquis vs. SonicWall case: customer organisation outsources management of a platform, vendor has a security failure, customer organisation bears the breach notification obligation, the regulatory fines, the reputational damage, and the class action exposure. The vendor gets named in a lawsuit. The 131,000 people with stolen SSNs are stuck managing the consequences for the next decade.

The seven-month notification delay deserves specific attention, because it’s not just a PR failure — it’s a legal one. Most US state breach notification laws require notification to affected individuals within 30 to 90 days of discovery. Seven months is not within 30 to 90 days. Clop claimed credit in November 2025, which means MSG had formal external confirmation of the breach at that point even if they hadn’t confirmed it internally earlier. The gap between November 2025 and the March 2026 Maine AG filings is approximately four months of people sitting unknowingly exposed while the notification machinery ground through legal review, drafting, and approval processes. I understand organisations need time to investigate. I do not understand four months of additional delay after a ransomware crew publicly named you as a victim on their own leak site.

What Went Wrong at the Root

Let me give you the root cause analysis, because “Clop hit an Oracle zero-day” is not a complete answer.

The Oracle EBS zero-day is ultimately Oracle’s responsibility to patch — and they eventually did. But the window between “Clop starts exploiting” and “Oracle patches” is measured in days to weeks. What happens in that window is determined by the architecture decisions made long before the exploit existed.

First: managed platform without contractual security SLAs. If MSG’s contract with their Oracle EBS vendor didn’t include specific requirements for patch application timelines, emergency patch procedures, security monitoring, breach notification timelines, and regular security assessments, then MSG had a business relationship with undefined security expectations. That is a procurement failure predating this incident by years.

Second: no secondary monitoring. Outsourcing a platform to a third party does not mean you get zero visibility. Organisations can contractually require log export, SIEM integration, or at minimum periodic security posture reporting from managed platform vendors. If the Oracle EBS environment was a complete black box to MSG’s security team, that is a contractual architecture choice that directly contributed to a seven-month notification gap.

Third: the concentration risk of shared platforms. Running your ERP on a vendor-managed platform shared with other clients means that when a Clop-calibre threat actor finds a zero-day in that platform, your exposure is not a function of your own security posture. It’s a function of Oracle’s patch timeline and your vendor’s deployment speed. This is worth being explicit about when making platform decisions, because the risk model is fundamentally different from self-hosted infrastructure.

The Fixer’s Advice — What You Do About Third-Party Platform Risk

Here is what a real vendor risk management programme looks like for managed platforms handling sensitive data. Not the checkbox version. The version that would have changed outcomes here.

1. Contractual security requirements with teeth. Every managed platform contract for any service handling PII, financial data, or health data should include: maximum patch application timelines for critical vulnerabilities (72 hours for CISA KEV-listed CVEs, 30 days for high-severity vulnerabilities), mandatory breach notification to your security team within 24 hours of confirmed or suspected compromise, right to audit or require third-party security assessments on defined cycles, and liability provisions for vendor security failures. Negotiate these before signing. If a vendor won’t accept them, that tells you something important about their security culture.
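
If you want those tiers to be operational rather than decorative, encode them where your vulnerability management tooling can act on them. A minimal policy-as-code sketch, assuming the 72-hour KEV and 30-day high-severity terms above (the tier names and the 90-day fallback are my own illustration, not anything from a real contract or GRC product):

```python
from datetime import datetime, timedelta, timezone

# Contractual patch SLAs for vendor-managed platforms, mirroring the
# 72-hour KEV / 30-day high-severity terms discussed above. Tier names
# and the fallback window are illustrative.
PATCH_SLA = {
    "kev": timedelta(hours=72),   # CISA KEV-listed: emergency patch window
    "high": timedelta(days=30),   # high-severity, not (yet) on the KEV list
}
FALLBACK = timedelta(days=90)     # everything else: normal patch cycle

def patch_deadline(advisory_date: datetime, severity: str, on_kev: bool) -> datetime:
    """Date by which the managed service provider is contractually obliged to patch."""
    tier = "kev" if on_kev else severity
    return advisory_date + PATCH_SLA.get(tier, FALLBACK)

# A KEV-listed Oracle EBS CVE published on 9 August must be patched within 72 hours.
print(patch_deadline(datetime(2025, 8, 9, tzinfo=timezone.utc), "high", on_kev=True))
# -> 2025-08-12 00:00:00+00:00
```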

2. Minimum visibility requirements. You should have log export — authentication logs, access logs, configuration change logs — flowing from every managed platform into your own SIEM, in near real time. Not batch exports. Not weekly reports. Real-time telemetry. If a vendor claims this isn’t technically possible, that’s a red flag. If they claim it would cost extra, pay for it or find a vendor who includes it. Flying blind on a platform handling your employees’ SSNs is not an acceptable operational posture.
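
What that pipeline looks like depends entirely on the export interface the vendor exposes; there is no standard. A hedged sketch of the shape of it, with a hypothetical vendor log API and a generic SIEM HTTP collector standing in for whatever your actual stack provides (every URL, field name, and token here is a placeholder):

```python
import time
import requests

# Placeholder endpoints: real managed-platform vendors and SIEMs each have
# their own export and ingestion APIs. Swap these for yours.
VENDOR_LOG_API = "https://vendor.example.com/api/v1/logs"  # vendor log export
SIEM_INGEST_URL = "https://siem.example.com/ingest"        # your SIEM's HTTP collector
VENDOR_TOKEN = "read-only-token-from-vendor"
SIEM_TOKEN = "siem-ingest-token"

def poll_and_forward(cursor=None):
    """Pull new auth/access/config-change events from the vendor, push to the SIEM."""
    resp = requests.get(
        VENDOR_LOG_API,
        headers={"Authorization": f"Bearer {VENDOR_TOKEN}"},
        params={"after": cursor} if cursor else None,
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    for event in payload.get("events", []):
        requests.post(
            SIEM_INGEST_URL,
            headers={"Authorization": f"Bearer {SIEM_TOKEN}"},
            json=event,
            timeout=10,
        ).raise_for_status()
    return payload.get("next_cursor", cursor)

if __name__ == "__main__":
    cursor = None
    while True:  # near real time means a short poll loop, not a nightly batch job
        cursor = poll_and_forward(cursor)
        time.sleep(30)
```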

3. Vendor patch monitoring. Your vendor risk team should be tracking CVEs affecting every vendor-managed platform as actively as they track CVEs in your own environment. When Oracle publishes a security advisory for EBS, someone on your team should know within 24 hours and should be asking your managed service provider: what is your planned patch timeline, and what interim mitigations are in place? Don’t wait for them to tell you.
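
The KEV half of this is straightforward to automate, because CISA publishes the catalogue as JSON. A minimal sketch that checks it against an inventory of vendor-managed platforms (the WATCHED list and the alert handling are yours to fill in; the feed URL and field names come from CISA's published schema):

```python
import requests

# CISA's Known Exploited Vulnerabilities catalogue, published as JSON.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Vendor-managed platforms in your estate. Maintaining this inventory is the hard part.
WATCHED = [("Oracle", "E-Business Suite")]

def kev_hits():
    """Yield KEV entries that match platforms your vendors run for you."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    for vuln in catalog["vulnerabilities"]:
        for vendor, product in WATCHED:
            if (vendor.lower() in vuln["vendorProject"].lower()
                    and product.lower() in vuln["product"].lower()):
                yield vuln

for v in kev_hits():
    # In production: open a ticket and send your MSP the two questions above,
    # patch timeline and interim mitigations. Don't wait for their newsletter.
    print(v["cveID"], v["dateAdded"], "due", v["dueDate"], "-", v["vulnerabilityName"])
```

Run it on a schedule, diff against yesterday's hits, and alert on anything new. Twenty lines of code is cheaper than seven months of silence.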

4. Data minimisation. Does your Oracle EBS instance actually need to store 131,070 individuals’ SSNs? Work backward from the worst-case breach scenario: what data, if stolen from this platform, would cause the most harm? Then ask whether all of that data actually needs to live there, in plaintext, accessible from the managed environment. Data you don’t store can’t be breached. Tokenisation, pseudonymisation, and minimum-necessary data collection policies reduce breach impact when — not if — a managed platform gets hit.
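
One common pattern here is keyed pseudonymisation: the managed platform stores a deterministic token instead of the raw SSN, and the key lives in your own KMS, never on the vendor's side. A minimal sketch (this is one option among several, vault tokenisation services being another; nothing here describes what MSG's vendor actually did):

```python
import hmac
import hashlib

def tokenize_ssn(ssn: str, key: bytes) -> str:
    """Deterministic pseudonym for an SSN. Same input, same token, so the
    ERP can still match and deduplicate records, but a stolen token is
    useless without the key. The key must live outside the managed
    platform (your own KMS/HSM), or this adds nothing."""
    digest = hmac.new(key, ssn.encode("utf-8"), hashlib.sha256).hexdigest()
    return "tok_" + digest[:32]

# Illustrative only: load the key from a secrets manager, never from code.
key = b"32-byte-key-from-your-own-kms-----"
print(tokenize_ssn("123-45-6789", key))  # the EBS instance stores only this
```

If Clop had pulled 131,070 of those tokens instead of 131,070 SSNs, this would be a very different article.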

5. Incident response planning that includes third parties. Your IR plan should have a specific playbook for “managed vendor reports potential breach” or “external source (including attacker leak site) indicates our data may be compromised via a vendor.” How do you escalate? Who has authority to demand immediate access to vendor forensic data? Who decides whether to notify regulators before the vendor has completed their own investigation? These decisions made in the chaos of an active incident are slower and worse than decisions made in advance. Write the playbook now.
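
You can keep that playbook in a wiki, but encoding even the skeleton as structured data forces you to name an owner for every decision before the incident. A sketch, with every trigger and role here being illustrative:

```python
# Skeleton of a "vendor-reported breach" playbook as data. The value is not
# the format; it is that every decision below has a named owner in advance.
THIRD_PARTY_BREACH_PLAYBOOK = {
    "triggers": [
        "vendor reports confirmed or suspected compromise",
        "attacker leak site names us, or names a vendor we use",
        "regulator or journalist asks about a vendor incident",
    ],
    "first_24_hours": [
        ("declare incident, assign severity", "on-call IR lead"),
        ("invoke contractual notification clause, demand scope", "vendor manager"),
        ("request vendor forensics and log preservation in writing", "IR lead"),
    ],
    "decision_owners": {
        "notify regulators before vendor finishes investigating": "CISO + general counsel",
        "engage independent external forensics": "CISO",
        "public statement timing": "general counsel + comms",
    },
}
```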

6. Notification readiness. Seven months is legally indefensible in most US states. If your breach notification process — legal review, regulatory requirements mapping, individual notification drafting, mailing logistics — takes more than 60 days from confirmed breach, you are going to be in violation of multiple state notification laws. Map your state-by-state obligations now. Build the notification template. Establish the legal review process. Know which states require what timelines before a breach happens, not after.
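
The state-by-state mapping is tedious, which is exactly why it should be built once, in advance, and kept current by counsel. A sketch of the shape of that artefact (the day counts below are deliberately placeholders, not legal advice; these statutes change and the real numbers belong to your legal team):

```python
from datetime import date, timedelta

# Days to notify affected individuals after confirmed breach, per state.
# PLACEHOLDER VALUES for illustration only. Source the real numbers from
# counsel's current statute map.
STATE_DEADLINE_DAYS = {
    "ME": 30,  # placeholder
    "CO": 30,  # placeholder
    "TX": 60,  # placeholder
}
DEFAULT_DAYS = 45  # conservative fallback for "without unreasonable delay" states

def notification_deadlines(discovered: date, states: list[str]) -> dict[str, date]:
    """Earliest statutory notification deadline for each affected state."""
    return {
        s: discovered + timedelta(days=STATE_DEADLINE_DAYS.get(s, DEFAULT_DAYS))
        for s in states
    }

# A breach publicly confirmed by the attacker in November 2025 should not
# still be working through legal review in March 2026.
print(notification_deadlines(date(2025, 11, 10), ["ME", "CO", "TX", "NY"]))
```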

The Clop pattern is not going away. They have a proven, profitable, repeatable model and they will keep running it against shared enterprise platforms until the economics change. As I wrote in my analysis of why a Cyber 9/11 remains closer than anyone wants to admit, the concentration of sensitive data in shared platforms managed by third parties represents exactly the kind of systemic single point of failure that amplifies attacker leverage. Clop isn’t hacking one company at a time. They’re hacking the trust architecture of outsourced enterprise IT, and they’re doing it efficiently.

Fix your contracts. Get your logs. Know what data you’re handing to vendors and what you’re owed in return when it goes wrong.
