Gulf AI Chip Deals: We’re Offshoring the Riskiest Tech in History

I just finished writing about how Ukraine’s entire military operational architecture depends on one private company’s satellite network, and how that’s an extremely expensive lesson in single points of failure. Then I looked at this week’s news and found that the US has been enthusiastically creating a different category of single-point risk, this time at civilisational scale, by shipping the most powerful AI chips ever made to Gulf states that maintain deep economic ties to China, sit in a region that is currently on fire (literally, in some cases), and whose AI governance frameworks are described by the Carnegie Endowment as having “significant gaps.” But please, let’s hand them compute capacity larger than every other AI project in the world combined.

What’s Actually Being Agreed To

Let me lay out the numbers, because the scale of this is not obvious from the headlines.

Per Carnegie Endowment, Just Security, and CNBC reporting: the Trump administration has authorised and is expanding sales of advanced Nvidia AI chips to two Gulf state-backed entities — HUMAIN of Saudi Arabia and G42 of the UAE. The Commerce Department approved an initial 35,000 Blackwell GB300 chips for each in November 2025. That’s the floor. The UAE’s G42, in partnership with US companies, is building a 5 gigawatt AI campus in Abu Dhabi that could ultimately house up to 2.5 million Nvidia B200s — the most advanced chips Nvidia makes. For context: Elon Musk’s Tennessee-based Colossus, currently the world’s largest AI cluster, has 200,000 high-end chips. G42 is planning more than twelve times that. Saudi Arabia’s HUMAIN is building toward 1 gigawatt of compute capacity with an initial order of 18,000 chips as part of a broader $600 billion deal with the United States.

Carnegie Endowment fellow Alasdair Phillips-Robins summarised the scale plainly: “We’re talking about something larger than any AI training system that exists in the world today.” A system of this scale, as Time Magazine’s analysis notes, could in principle “synthesize automated cyber-attacks, intelligence collection, and weapons development.”

The justification from the Trump administration is straightforward and not entirely without merit: if the Gulf states don’t get chips from the US, they’ll get them from China. Proponents like White House AI Adviser David Sacks argue this prevents countries like Saudi Arabia and the UAE from building their AI stack on Huawei hardware and locking into the Chinese technology ecosystem. The UAE did, under pressure for the first deal, agree to “far-reaching measures” including replacing its Chinese-developed AI stack and divesting Chinese tech holdings.

But here’s the thing. As Bloomberg’s analysis flatly notes, the UAE entity at the centre of the largest chip deal, G42, is chaired by Sheikh Tahnoon Bin Zayed Al Nahyan, who is simultaneously the UAE’s national security adviser. The chairman of the AI company receiving the world’s most powerful chips is the country’s intelligence chief. The “rigorous security and reporting requirements” the Commerce Department attached to the approval are, per Carnegie’s analysis, not publicly disclosed, so what they actually require is anyone’s guess. The safeguards for Saudi Arabia, Carnegie adds, are “still unknown.”

We are offshoring the most powerful AI training infrastructure in human history to state-controlled entities in autocratic states, with secret governance requirements, during an active regional war.

The Cyber Security Dimensions Nobody’s Discussing

Here is where I want to get specific about the attack surfaces this creates, because the coverage treats this as a geopolitics and trade story when it has direct cyber security implications.

Dimension one: Chip diversion and dual-use risk. The US has export controls on advanced chips specifically because these chips can be used for nuclear weapons modelling, sophisticated cyber attack synthesis, autonomous weapons development, and advanced intelligence capabilities. The fear, as the Carnegie analysis documents, is that chips exported to the Gulf could find their way to China — either through investment flows from Gulf sovereign wealth funds into Chinese AI companies, through data centre access arrangements, or through the Gulf states’ extensive economic relationships with Beijing. Both Saudi Arabia and the UAE have deep ties with China. Both have conducted joint military exercises with China. China’s 15th Five-Year Plan explicitly targets technology self-reliance, and Chinese AI chip manufacturer Huawei’s Ascend platform is the alternative the US is trying to prevent Gulf states from adopting. The question is whether the Gulf states will maintain the US-aligned position if geopolitical winds shift, or whether access to US chip infrastructure creates leverage that runs in both directions.

Dimension two: The AI sovereignty attack surface. Building world-scale AI infrastructure in Gulf states creates a new category of critical infrastructure that is simultaneously a cyber target and a geopolitical leverage point. Per the AI News analysis, “the diversified application of these chips in everyday utilities and defense systems could make the UAE susceptible to espionage or cyber-attacks if US-level security measures are not meticulously implemented.” AI data centers of the scale being built in the Gulf are high-value targets for nation-state espionage, supply chain attack, hardware implants, and cyber intrusion. The physical security of these facilities, the security of the supply chains delivering hardware to them, the operational security of the staff running them, and the cybersecurity of the data and models they process all represent attack surfaces that didn’t previously exist at this scale in these jurisdictions.

Dimension three: US company dependency on Gulf compute. Just Security’s analysis highlights a risk that most coverage misses entirely: “You could end up in a position where some large proportion of US computing power has been offshored to a bunch of states that can wield leverage over US foreign policy.” If significant proportions of US AI compute capacity — for inference, for training, for enterprise AI operations — end up physically located in Gulf states, those states acquire leverage over US technology companies that currently don’t exist. The geopolitical leverage flows in both directions. This is the enterprise version of the Ukraine-Starlink problem: strategic dependence on infrastructure controlled by an external party with interests that may not always align.

Dimension four: AI model security in politically sensitive jurisdictions. AI models trained on data centers in Gulf states will be subject to the legal and political frameworks of those states. What data can be processed? What outputs can be generated? What access does the host government have to training data and model weights? These questions are not theoretical for AI companies running operations in G42 or HUMAIN data centers. The CSET Georgetown analysis is explicit about the gaps: “significant gaps remain, including governance frameworks, oversight mechanisms, and approaches to AI sovereignty.” Without public disclosure of the security requirements attached to these chip deals, there is no way for the broader market to assess whether adequate protections exist.

This connects to what my research on the Quantum Threat to National Security identified as a core principle: the security of advanced technology infrastructure is determined not just by the technology itself but by the governance and oversight frameworks controlling it. Quantum-capable systems in adversary hands are a national security threat. AI training clusters of unprecedented scale in jurisdictions with opaque oversight and dual-use potential are a risk category of the same order of magnitude.

And the Fourth Turning analysis I published covers this specific dynamic: the convergence of technological capability distribution with geopolitical crisis cycles creates conditions in which technology that seemed safely deployed under one set of geopolitical conditions becomes a threat multiplier when those conditions change.

What Went Wrong — Root Cause

The root cause is a tension that nobody in the current policy debate has resolved: the argument for chip exports is geopolitically coherent (if Gulf states go to Huawei, the US loses strategic influence and China wins the AI race), but the security requirements attached to those exports are secret, incomplete, and unverified by any independent body. The result is that the US is making a strategic bet — that Gulf state alignment is durable, that chip diversion won’t occur, that the AI capabilities developed will not be used in ways contrary to US interests — without the verification mechanisms that would allow anyone to confirm the bet is paying off.

The Biden administration’s AI diffusion rule tried to address this with a tiered framework. The Trump administration rescinded it and is building a replacement, but that replacement is not yet in place, which means the largest AI chip deals in history are being approved under a regulatory framework in transition. As CSET’s analysis notes: “translating the Saudi and Emirati transactions into a durable competitive advantage will require bridging governance gaps by formalising oversight mechanisms.”

In cybersecurity terms, this is the equivalent of giving a third party admin access to your crown-jewel systems on the basis that you trust them, without implementing any technical controls to verify the access is being used appropriately, without monitoring what they’re doing with it, and without a revocation mechanism if the relationship sours. We have no technical visibility into what these AI data centers are actually being used for. We have secret governance requirements that we’re trusting — not verifying — are being met.

The Fix — Fixer’s Advice

I want to be honest here: most of what needs fixing is at the policy level, not the enterprise practitioner level. But there are enterprise-relevant dimensions.

For policy makers and the industry bodies that influence them:

The Carnegie Endowment analysis outlines the specific requirements that would make these deals manageable. Make the security requirements public. Create a standardised timeline for hardware delivery with verified compliance checkpoints. Establish independent audit mechanisms for chip use and data center access. Restrict Gulf sovereign wealth fund investments in Chinese AI and semiconductor sectors as a condition of chip access. These are concrete, implementable conditions that would let the strategic benefits of the deals be captured while the risks are actually managed rather than assumed away.

The comparison the Carnegie analysis makes is instructive: “The safeguards to which Saudi Arabia has agreed are still unknown.” That is not a security framework. That is a prayer.

For enterprises considering building AI workloads on Gulf state infrastructure:

Conduct a supply chain and geopolitical risk assessment before committing significant AI workloads to data centers in Saudi Arabia or the UAE. The relevant questions: What is your data residency requirement? What data protection laws apply to your data in these jurisdictions? What access does the host government have to data processed in these facilities? What is your contingency if the geopolitical relationship between the US and these states changes? What happens to your AI operations if the US Commerce Department determines that the chip export conditions have been violated and restricts further exports?
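One lightweight way to make those questions operational is to encode them as a weighted checklist that yields a residual-risk figure for the assessment report. A minimal sketch — the wording and weights below are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class RiskQuestion:
    """One due-diligence question; weight is illustrative (1 = low impact, 5 = critical)."""
    text: str
    weight: int
    mitigated: bool = False  # set True once a documented control answers it

# Checklist drawn from the questions above; weights are assumptions for illustration.
GULF_AI_CHECKLIST = [
    RiskQuestion("Data residency requirement documented and contractually enforced?", 4),
    RiskQuestion("Host-state data protection laws reviewed by counsel?", 3),
    RiskQuestion("Host-government access to processed data assessed?", 5),
    RiskQuestion("Contingency plan if US-Gulf geopolitical relations deteriorate?", 4),
    RiskQuestion("Exit plan if Commerce restricts further chip exports?", 5),
]

def residual_risk(checklist: list) -> float:
    """Fraction of total weighted risk not yet covered by a documented control."""
    total = sum(q.weight for q in checklist)
    open_risk = sum(q.weight for q in checklist if not q.mitigated)
    return open_risk / total

GULF_AI_CHECKLIST[0].mitigated = True  # e.g. residency clause signed
print(f"Residual risk: {residual_risk(GULF_AI_CHECKLIST):.0%}")  # prints "Residual risk: 81%"
```

The number itself matters less than the discipline: every question either has a documented control behind it or it counts against you, which is exactly the verification posture the chip deals currently lack.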

These are not hypotheticals. The Iran-Israel war that erupted three weeks ago has directly hit Gulf state infrastructure. The Ras Laffan LNG complex — in the same country hosting a major US-aligned AI data centre (Qatar) — was severely damaged by Iranian missiles. The same conflict that created the case for aligning Gulf states with the US tech stack through chip deals also created conditions in which the Gulf states are actively being bombed.

For security practitioners in AI and cloud infrastructure:

The data center physical security and cybersecurity requirements for facilities housing world-scale AI compute are not well-established in international practice. If you’re responsible for security in a facility housing hundreds of thousands of Nvidia B200s, you are dealing with a target of extraordinary value to nation-state adversaries interested in AI capabilities, semiconductor intelligence, training data, and model weights. Hardware supply chain security — verifying the integrity of chips and networking equipment before deployment — is critical. Physical security at the data center level needs to meet defence-grade standards. Insider threat programmes matter enormously given the intelligence value of these facilities.
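One concrete slice of that hardware supply chain problem is checking that a delivered batch matches what the vendor actually shipped. A minimal sketch of signing and verifying a component manifest — the manifest format, key handling, and hash values are illustrative assumptions; a real deployment would use vendor attestation (signed firmware measurements from an HSM-backed root of trust), not a hard-coded shared key:

```python
import hashlib
import hmac

# Illustrative only: in practice the key lives in an HSM and the manifest
# comes from the vendor's attestation service.
SHARED_KEY = b"replace-with-hsm-held-key"

def manifest_digest(components: dict) -> bytes:
    """Hash a serial -> firmware-hash manifest in sorted order so reordering can't mask tampering."""
    h = hashlib.sha256()
    for serial in sorted(components):
        h.update(f"{serial}:{components[serial]}".encode())
    return h.digest()

def sign_manifest(components: dict) -> bytes:
    """HMAC over the digest, produced at the vendor's factory."""
    return hmac.new(SHARED_KEY, manifest_digest(components), hashlib.sha256).digest()

def verify_delivery(components: dict, signature: bytes) -> bool:
    """Constant-time comparison guards against timing side channels."""
    return hmac.compare_digest(sign_manifest(components), signature)

shipped = {"GPU-0001": "a3f1", "GPU-0002": "9bc2"}       # hypothetical serials and hashes
sig = sign_manifest(shipped)                              # travels with the shipment
tampered = dict(shipped, **{"GPU-0002": "deadbeef"})      # board swapped en route
print(verify_delivery(shipped, sig), verify_delivery(tampered, sig))  # prints "True False"
```

This catches substitution in transit; it does nothing against a compromised vendor or a hardware implant added before signing, which is why the physical and insider-threat controls above still matter.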

Monitor the CSET, Carnegie Endowment, and Just Security analyses of these deals closely as the governance frameworks develop. And push — through industry bodies, through government engagement, through public comment processes — for public disclosure of the security requirements attached to these deals. Secret security requirements are not a security framework. They are a fig leaf.

Final Word

The Gulf AI chip deals are happening. The compute is going. The question is whether the governance frameworks will catch up to the strategic bets being made before the bets come due. Given the current state of the region — active war, damaged energy infrastructure, Iranian targeting of Gulf tech company offices — “we trust the governance is fine” is not a risk posture I’m comfortable with. Track this one closely… it’s gonna be fun!
