{"id":770,"date":"2026-03-01T17:32:45","date_gmt":"2026-03-01T16:32:45","guid":{"rendered":"https:\/\/lars-hilse.de\/lhx18\/?p=770"},"modified":"2026-03-01T17:32:46","modified_gmt":"2026-03-01T16:32:46","slug":"hitler-a-theoretical-framework-for-the-decline-of-human-oversight-in-ai-generated-code","status":"publish","type":"post","link":"https:\/\/lars-hilse.de\/lhx18\/2026\/03\/hitler-a-theoretical-framework-for-the-decline-of-human-oversight-in-ai-generated-code\/","title":{"rendered":"HITL&amp;ER &#8211; A Theoretical Framework for the Decline of Human Oversight in AI-Generated Code"},"content":{"rendered":"<div class=\"ttr_start\"><\/div>\n<h2 class=\"wp-block-heading\">The Slow, Inevitable Death of &#8220;Someone Needs to Check the AI&#8217;s Homework&#8221;<\/h2>\n\n\n\n<p>Look, the whole &#8220;human in the loop&#8221; thing in AI-generated code? It&#8217;s dying a gore, horrific death\u2026 only not dramatically, not overnight \u2014 but measurably, and with increasing speed, driven by benchmark data that&#8217;s honestly kind of alarming, real-world deployment numbers, and the simple fact that developers are just\u2026 trusting the machine more and more\u2026 it&#8217;s like watching your child learning to ride a bicycle, and taking the training-wheels off\u2026 There will be injuries; some milder than death.<\/p>\n\n\n\n<p>This paper pulls together (like the flesh of bicycle related injuries when applying sutures) what we currently know to sketch out a four-phase model of oversight decline, figure out what&#8217;s actually slowing it down, and \u2014 perhaps most importantly \u2014 identify the irreducible chunk of governance that no amount of AI capability is going to make go away before 2040.<\/p>\n\n\n\n<p>The core argument: <em>code review<\/em> as a form of HITL is basically heading toward extinction in everyday software domains by the early 2030s. 
But don&#8217;t pop the champagne yet, because <em>execution review<\/em> \u2014 runtime-level oversight \u2014 is quietly ascending to fill the void.<\/p>\n\n\n\n<p><strong>The legal, ethical, and security-mandated human-in-the-loop? That one&#8217;s not going anywhere. Ever, probably.<\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction: Yes, This Is Actually Happening<\/h2>\n\n\n\n<p>The whole concept of &#8220;human in the loop&#8221; has served two purposes \u2014 partly as quality control, and partly as an honest-to-God admission that AI wasn&#8217;t (and, the assumption went, never would be) good enough to be trusted on its own. Which, for most of the 2010s, was a pretty fair assessment. AI code generation was basically a novelty. A neat magic-trick. Something you&#8217;d demo at a conference and then quietly put back in the box when you got home, never to be used in production, because you knew the mechanics of the magic trick; unlike the dopes you duped at the conference.<\/p>\n\n\n\n<p>By 2026, though? <a href=\"https:\/\/senorit.de\/en\/blog\/ai-agents-software-development-2026\">Roughly 41% of all code produced globally is AI-generated<\/a>, with <a href=\"https:\/\/senorit.de\/en\/blog\/ai-agents-software-development-2026\">84% of developers reporting daily use of AI coding tools<\/a>. Yes, it&#8217;s primarily laziness, because we&#8217;re human\u2026 but it&#8217;s certainly not a novelty anymore. That&#8217;s a structural shift, and it demands we actually sit down and ask the terribly uncomfortable question: at what point does the HITL requirement stop being a practical necessity and start being just\u2026 bureaucratic theatre? (so the coders can become even lazier)<\/p>\n\n\n\n<p>And this isn&#8217;t an abstract philosophical question like the existence of the Flying Spaghetti Monster, by the way. 
It has very concrete implications \u2014 for software engineers who&#8217;d perhaps like to know if their job will still exist in a decade (spoiler: nope), for shysters trying to figure out who to sue when AI-generated code burns down a hospital&#8217;s records system, and with it the hospital actually built around it, for cybersecurity people who are already exhausted, and for the regulatory bodies across the OECD that are scrambling to draft governance frameworks fast enough to actually matter, <a href=\"https:\/\/lars-hilse.de\/lhx18\/2026\/02\/why-and-how-to-use-openclaw-and-ai-agents-to-test-secure-your-network-infrastructure\/\">which they won&#8217;t, because their quarterly pentest just called them on the golf course to let them know the breach already happened.<\/a><\/p>\n\n\n\n<p>This paper isn&#8217;t here to tell you that human oversight is useless. It&#8217;s here to argue that what oversight <em>looks like<\/em> is going to transform in ways that most people aren&#8217;t prepared for, and that there will come a time in the not-so-distant future when we&#8217;ll have to let go and let our &#8220;little ones&#8221; move on, all grow&#8217;d up.<\/p>\n\n\n\n<p>One distinction has to be nailed down right at the start, because conflating these two things leads to complete nonsense conclusions. &#8220;Human in the loop&#8221; covers two structurally different activities:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Code review<\/strong> \u2014 a human looks at AI-generated code <em>before<\/em> it runs and decides whether it&#8217;s acceptable<\/li>\n\n\n\n<li><strong>Execution review<\/strong> \u2014 a human (or system) monitors, traces, and audits what an AI <em>actually does<\/em> at runtime<\/li>\n<\/ul>\n\n\n\n<p>These are not the same thing. Their trajectories are, in fact, going in <em>opposite directions<\/em>. As AI gets better at writing correct code, code review becomes less necessary. 
But as AI agents get more autonomous and their behaviour more complex, execution review becomes <em>more<\/em> necessary, not less. Conflating the two leads to the (wrong) conclusion that HITL is disappearing wholesale. It isn&#8217;t. One form of it is receding; another is growing to replace it.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">2. The Numbers, Since We&#8217;re Apparently Doing This<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">2.1 Benchmarks: The Steep and Mildly Terrifying Trajectory<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/www.swebench.com\">SWE-bench Verified leaderboard<\/a> is probably the most useful public yardstick we have for measuring genuine autonomous coding capability. It tests whether AI systems can resolve real-world GitHub issues \u2014 multi-file reasoning, context understanding, test validation \u2014 under standardised conditions. Not theoretical toy problems. Real ones.<\/p>\n\n\n\n<p>The numbers went from <a href=\"https:\/\/ctse.aei.org\/the-ai-race-accelerates-key-insights-from-the-2025-ai-index-report\/\"><strong>4.4% in early 2023 to 71.7% by end of 2024<\/strong><\/a>. That&#8217;s a roughly 16-fold improvement in under two years. By mid-2025, the <a href=\"https:\/\/refact.ai\/blog\/2025\/1-agent-on-swe-bench-verified-using-claude-4-sonnet\/\">Refact.ai Agent hit 74.4%<\/a> under strict pass@1 conditions. The <a href=\"https:\/\/epoch.ai\/gradient-updates\/how-well-did-forecasters-predict-2025-ai-progress\">Epoch AI analysis of 2025 forecasting accuracy<\/a> found that AI progress exceeded predictions in most areas \u2014 with SWE-bench being one of the <em>few<\/em> where progress fell slightly short of the most aggressive forecasts. A rare plateau signal, or perhaps just a speed bump. 
We&#8217;ll come back to that.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2.2 Anthropic&#8217;s Real-World Data (February 2026) \u2014 And It&#8217;s Kind of Wild<\/h3>\n\n\n\n<p>The most directly relevant dataset here is <a href=\"https:\/\/www.anthropic.com\/research\/measuring-agent-autonomy\">Anthropic&#8217;s February 2026 study <em>&#8220;Measuring AI Agent Autonomy in Practice&#8221;<\/em><\/a>, which dug through millions of real Claude Code sessions. The headline findings:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The <strong>99.9th percentile of autonomous session duration nearly doubled<\/strong> between October 2025 and January 2026 \u2014 from under 25 minutes to over 45 minutes<\/li>\n\n\n\n<li>This growth was <strong>gradual and model-agnostic<\/strong>\u2026 in layman&#8217;s terms it wasn&#8217;t just one new model release causing a jump \u2014 it&#8217;s trust accumulation, deployment maturation, people getting comfortable (so it might be due to a gradual shift in human mindset, too)<\/li>\n\n\n\n<li>Human interventions per session <strong>dropped from 5.4 to 3.3<\/strong> over the same period, <em>while task success rates on the hardest problems doubled<\/em> \u2014 so less oversight produced <em>better results<\/em>. (see where this is going?) Which is, depending on your perspective, either reassuring or deeply unsettling\u2026 take your pick.<\/li>\n\n\n\n<li>Among new Claude Code users, roughly <strong>20% of sessions use full auto-approve mode<\/strong>; among users with 750+ sessions, that&#8217;s <strong>over 40%<\/strong><\/li>\n<\/ul>\n\n\n\n<p>Anthropic&#8217;s own interpretation is worth noting. 
They argue that <a href=\"https:\/\/www.anthropic.com\/research\/measuring-agent-autonomy\"><em>&#8220;autonomy co-constructed by the model, the user, and the product&#8221;<\/em><\/a> is more appropriate than fixed HITL mandates, and that the AI&#8217;s own uncertainty detection \u2014 pausing to ask clarifying questions \u2014 already outperforms mandated human checkpoints at the same risk thresholds. Critically, <a href=\"https:\/\/gigazine.net\/gsc_news\/en\/20260219-anthropic-claude-ai-agent-report\">the AI asks clarifying questions more than twice as often as humans intervene<\/a> on the most complex tasks. Which somewhat inverts the entire conventional assumption about who initiates oversight.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2.3 Where Agentic AI Is Actually Being Used<\/h3>\n\n\n\n<p><a href=\"https:\/\/gigazine.net\/gsc_news\/en\/20260219-anthropic-claude-ai-agent-report\">Software engineering currently accounts for approximately 50% of all agentic tool-use actions in production APIs<\/a> \u2014 making it the leading domain by a significant margin. <a href=\"https:\/\/gigazine.net\/gsc_news\/en\/20260219-anthropic-claude-ai-agent-report\">About 80% of current API actions are subject to protective measures like permission restrictions and human approval, while irreversible actions constitute only 0.8% of all actions<\/a>. That sounds reassuring until you notice that a growing subset of frontier deployments involves financial transactions, medical record updates, and security privilege escalation \u2014 domains where that 0.8% irreversibility carries consequences bat-shit disproportionate to its volume.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2.4 The Rise of Execution Review (The Thing Everyone&#8217;s Ignoring)<\/h3>\n\n\n\n<p>As pre-execution code review retreats, something structurally different has emerged to partially replace it: <em>execution review<\/em>. 
<a href=\"https:\/\/www.datarobot.com\/blog\/production-ready-agentic-ai-evaluation-monitoring-governance\/\">Execution tracing exposes the sequence of reasoning steps an agent followed, the tools or functions it invoked, and the inputs and outputs at each stage of execution<\/a>, enabling root-cause analysis that static code inspection simply cannot provide. This matters because agentic AI systems are nondeterministic \u2014 the same codebase can produce wildly different execution paths depending on context, live data, and model state. As one framing captures it, <a href=\"https:\/\/www.linkedin.com\/posts\/arbaner_it-just-hit-me-that-execution-traces-in-agentic-activity-7395296661394190336-qf2J\"><em>&#8220;the &#8216;why&#8217; lives inside the trace \u2014 not the codebase&#8221;<\/em><\/a>.<\/p>\n\n\n\n<p>This has real compliance implications. In traditional software, you keep execution logs for 30\u201390 days and can reconstruct decisions from version-controlled source logic. That model collapses with agentic AI for very obvious reasons. When deterministic business logic is intertwined with nondeterministic agent behaviour, execution traces can&#8217;t just be short-term operational logs anymore \u2014 they have to become long-term <strong>accountability artefacts<\/strong>. The evidentiary basis for answering &#8220;why was this loan denied? why was this clinical recommendation made? why was this vulnerability introduced or did it already exist when the code was written?&#8221;<\/p>\n\n\n\n<p>The current state of affairs is, frankly, a bit of a charly-foxtrot. <a href=\"https:\/\/www.gravitee.io\/state-of-ai-agent-security\">As of early 2026, only 47.1% of organisations&#8217; AI agents are actively monitored or secured<\/a> \u2014 meaning more than half of all deployed agents are running without execution-level visibility. 
Emerging frameworks like the <a href=\"https:\/\/www.arxiv.org\/pdf\/2508.03858v2.pdf\">MI9 runtime governance architecture<\/a> are attempting to close this gap by integrating telemetry capture, authorisation monitoring, conformance checking, drift detection, and containment execution within a unified framework \u2014 but as of this writing, nothing has achieved widespread production adoption; and why the hell would it have, given that we&#8217;re talking about an emerging landscape?<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">3. The Four-Phase Model (A Timeline No One Will Stick To, But Here We Are)<\/h2>\n\n\n\n<p>Drawing from all of the above and observable industry trends, here&#8217;s a theorised four-phase model. Each phase describes the concurrent state of both code review <em>and<\/em> execution review, because the whole point is that these two aren&#8217;t the same thing and their trajectories diverge significantly.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Phase I \u2014 The Co-pilot Era (2022\u20132027)<\/h3>\n\n\n\n<p>In this phase, the human is the primary agent. AI is a subordinate tool. The workflow is unambiguous: human formulates intent \u2192 AI generates candidate code \u2192 human reviews and merges. <a href=\"https:\/\/senorit.de\/en\/blog\/ai-agents-software-development-2026\">AI agents in software development as of early 2026 handle boilerplate, test generation, and refactoring autonomously<\/a>, but strategic decisions, architecture, and production deployment remain firmly in human hands.<\/p>\n\n\n\n<p><strong>Code review:<\/strong> Universal. Culturally uncontested. Nobody ships AI-generated code without looking at it first.<\/p>\n\n\n\n<p><strong>Execution review:<\/strong> Nascent. Trace logs exist for debugging purposes but aren&#8217;t treated as governance artefacts. 
The oversight burden is front-loaded onto code review because the execution monitoring infrastructure isn&#8217;t there yet.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Phase II \u2014 Supervised Autonomy (2027\u20132030)<\/h3>\n\n\n\n<p>As SWE-bench-style performance approaches 90%+ and multi-file reasoning matures, AI systems will handle complete features, isolated bug fixes, and dependency management end-to-end without per-step human review. Human oversight retreats to a checkpoint model: architectural reviews, security audits, production gate approvals. <a href=\"https:\/\/san.com\/cc\/former-google-ceo-predicts-ai-will-replace-most-programmers-in-a-year\/\">Former Google CEO Eric Schmidt predicted in April 2025 that &#8220;the vast majority of programmers will be replaced by AI programmers&#8221;<\/a> within a year \u2014 which is probably overstating the speed, but the <em>direction<\/em> is roughly right (as Google Maps is) for this phase.<\/p>\n\n\n\n<p><strong>Code review:<\/strong> Selective rather than universal \u2014 reserved for security-sensitive modules, architectural boundaries, and production deployments.<\/p>\n\n\n\n<p><strong>Execution review:<\/strong> Formalised. Organisations start retaining execution traces as compliance records. <a href=\"https:\/\/www.nexastack.ai\/blueprints\/agentic-ai-traceability\/\">Audit trail engines capturing reasoning paths, tool calls, escalations, and handoffs<\/a> become standard architectural components.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Phase III \u2014 Exception-Based Oversight (2030\u20132035)<\/h3>\n\n\n\n<p>Human involvement shifts from reviewing code outputs to reviewing <em>decision boundaries<\/em> \u2014 the policies that define what the AI is permitted to execute without authorisation. 
<a href=\"https:\/\/sdh.global\/blog\/ai-ml\/will-ai-replace-software-engineers-heres-what-the-data-really-shows\/\">Gartner projects that 80% of software engineers will need reskilling for non-coding roles by 2040<\/a> (see? They won&#8217;t be out of a job after all), implying that engineering labour in this phase migrates from code production to AI governance, systems design, and exception handling. This is either a terrifying or exciting prediction, depending on who you&#8217;re sitting\u2026 I, for one, believe they&#8217;ll be able to be retrained.<\/p>\n\n\n\n<p><strong>Code review:<\/strong> Exists only at system architecture level. The pull request as a governance mechanism is largely obsolete for commodity domains.<\/p>\n\n\n\n<p><strong>Execution review:<\/strong> <em>This is<\/em> the primary form of HITL now. Humans review behavioural drift reports, anomaly alerts, and execution summaries. <a href=\"https:\/\/www.datarobot.com\/blog\/production-ready-agentic-ai-evaluation-monitoring-governance\/\">Continuous monitoring with real-time alerting and intervention capabilities when behaviour deviates from expectations<\/a> replaces the pre-merge pull request as the dominant oversight mechanism.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Phase IV \u2014 Governance-Mandated Residual (2035+)<\/h3>\n\n\n\n<p>Technical necessity for human code review in commodity software largely dissolves. The residual HITL persists not because AI can&#8217;t do the job reliably, but because no legal framework assigns liability to a non-human agent. 
<a href=\"https:\/\/www.linkedin.com\/pulse\/autonomous-ai-coding-where-human-developers-fit-rajni-singh-qes0c\">One February 2026 analysis captures this neatly: <em>&#8220;AI writes the code; humans own the system&#8221;<\/em><\/a> \u2014 a principle that will harden into regulation rather than dissipate.<\/p>\n\n\n\n<p><strong>Code review:<\/strong> Legally mandated only in high-stakes domains \u2014 critical infrastructure, medical, defence, finance. Absent everywhere else.<\/p>\n\n\n\n<p><strong>Execution review:<\/strong> Legally mandated as a universal audit trail requirement, even in domains where no human intervention occurs at the code generation level. Think of it as the technical equivalent of a flight data recorder \u2014 mandatory not because the pilot is incompetent, but because accountability requires evidence.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">4. What&#8217;s Actually Slowing This Down<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">4.1 Benchmarks Hit a Wall at the Edges<\/h3>\n\n\n\n<p>SWE-bench progress is not linear at the margins. <a href=\"https:\/\/epoch.ai\/gradient-updates\/how-well-did-forecasters-predict-2025-ai-progress\">Epoch AI&#8217;s January 2026 forecast accuracy analysis<\/a> identifies the final ~20% of hard, novel, multi-system problems as a genuine capability frontier \u2014 one where performance improvements are diminishing relative to compute investment. These &#8220;last-mile&#8221; problems are disproportionately the most important ones in production: security-critical components, distributed systems logic, race conditions, cross-cutting architectural concerns. An AI that solves 80% of benchmark tasks is not, it turns out, 80% ready for zero-oversight production deployment. 
Perhaps obvious in retrospect, but worth stating plainly; it is also an argument I am currently making in an analysis of the market consolidation that will follow if\/when the &#8220;AI bubble&#8221; bursts, out of which newer, more performant models will be born (think DeepSeek).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4.2 The Attack Surface Is Getting Bigger, Not Smaller<\/h3>\n\n\n\n<p>Autonomous coding agents introduce novel vulnerability classes that flat-out don&#8217;t exist in human-authored code workflows, which is making insurance companies shit the bed as we speak. <a href=\"https:\/\/www.linkedin.com\/pulse\/security-production-ai-agents-2026-iain-harper-dg71e\">Prompt injection appears in over 73% of production AI deployments<\/a>, and <a href=\"https:\/\/www.linkedin.com\/pulse\/security-production-ai-agents-2026-iain-harper-dg71e\">just five malicious documents can manipulate an AI agent 90% of the time via RAG poisoning<\/a>. The <a href=\"https:\/\/www.linkedin.com\/pulse\/security-production-ai-agents-2026-iain-harper-dg71e\">CVE-2025-53773 remote code execution vulnerability in GitHub Copilot<\/a>, assigned a CVSS score of 9.6, exemplifies precisely how AI coding agents operating with fewer human checkpoints can be weaponised through their own tooling interfaces. Reducing HITL without resolving these vulnerabilities doesn&#8217;t produce efficiency. 
It produces an expanded blast radius, if you&#8217;re pessimistic. Yet we can also turn this around and &#8211; at the same time &#8211; argue that OpenClaw has taught us about secure communication channels and the reduction of other attack surfaces\u2026 so delay that heart attack just a bit longer.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4.3 OWASP Has Formally Called This Out<\/h3>\n\n\n\n<p><a href=\"https:\/\/www.trydeepteam.com\/docs\/frameworks-owasp-top-10-for-llms\">The OWASP Top 10 for LLM Applications (2025)<\/a> formally codifies <strong>Excessive Agency (LLM06:2025)<\/strong> as a primary risk category: the condition where an LLM is granted too much autonomy, permissions, or functionality, leading to unintended actions beyond its intended scope (which don&#8217;t all have to be bad). <a href=\"https:\/\/www.pointguardai.com\/blog\/understanding-the-owasp-top-10-for-llms\">OWASP&#8217;s recommended mitigations<\/a> include applying the principle of least privilege, requiring human-in-the-loop oversight for high-risk actions, and implementing multi-step verification for automated high-impact decisions. The fact that a leading open-source security framework has codified HITL as a formal <em>security control<\/em> \u2014 not merely a quality heuristic \u2014 signals that its decline is not purely a technical question, but a thing we have to work on continuously\u2026 no, I&#8217;m not saying we&#8217;re going to build the plane while it&#8217;s flying, but we will be improvising along the way; VERY fast.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4.4 Nobody Knows Who&#8217;s Legally Responsible (This is TOTALLY FUN!)<\/h3>\n\n\n\n<p>No jurisdiction currently assigns actionable legal liability to an autonomous AI system for defective software output (besides the EU, scrambling like blind men speaking of colour). 
Until a legal infrastructure for AI-generated code accountability is established \u2014 through product liability reform, software assurance legislation, or AI-specific tort frameworks \u2014 an identifiable human must remain in the loop to serve as the party who gets sued. <a href=\"https:\/\/www.anthropic.com\/research\/measuring-agent-autonomy\">Anthropic itself noted in its February 2026 study<\/a> that policymakers should focus on &#8220;whether humans are in a position to intervene effectively&#8221; rather than mandating specific interaction patterns \u2014 an implicit acknowledgement that the HITL requirement is becoming a governance construct rather than a purely technical one\u2026 hey, someone has got to take one for the team and pick up that bar of soap, right? RIGHT? Well, not really, because wearing THIS hat is going to cause significant headaches: the speed at which AI generates code far exceeds human review capacity, which in turn will leave only those few who haven&#8217;t thought things through to the end available for such high-stakes positions\u2026 and boom, there&#8217;s the next speed bump.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4.5 You Can&#8217;t Remove Code Review Until Execution Review Is Actually Ready<\/h3>\n\n\n\n<p>This dependency is underappreciated and it&#8217;s kind of a really big fucking deal. Removing pre-execution human oversight before robust execution monitoring is in place doesn&#8217;t reduce governance burden \u2014 it displaces it <em>invisibly<\/em>. <a href=\"https:\/\/apiiro.com\/blog\/code-execution-risks-agentic-ai\/\">The top code execution risks in agentic AI systems in 2026 include privilege escalation through tool chaining, exfiltration via autonomous API calls, and undetected data poisoning mid-task<\/a> \u2014 none of which are visible in static code review. 
<a href=\"https:\/\/www.moxo.com\/blog\/agentic-ai-observability\">Moxo&#8217;s February 2026 agentic observability analysis<\/a> states the dependency directly: <em>&#8220;Orchestration fails when humans are removed. It works when they&#8217;re supported&#8221;<\/em> \u2014 supported, crucially, by trace logs that correlate agent actions with human intent across the full workflow.<\/p>\n\n\n\n<p>This creates a hard sequencing constraint: Phase II and Phase III transitions cannot proceed safely until execution tracing infrastructure is standardised, legally recognised, and operationally integrated. Organisations that reduce code review oversight ahead of that infrastructure maturity aren&#8217;t accelerating the transition \u2014 they&#8217;re operating in an ungoverned gap that expands their legal liability rather than reducing it.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">5. Discussion: The Bit That&#8217;s Not Going Away<\/h2>\n\n\n\n<p>The four-phase model converges on a conclusion that is probably less dramatic than the AI discourse usually wants: HITL won&#8217;t disappear. It will <em>metamorphose<\/em>. The trajectory from universal code review to exception-based oversight to governance-mandated accountability mirrors how other engineering disciplines matured. Aircraft are overwhelmingly flown by autopilot systems that technically require no human input for most of the flight (keep in mind how long that took, and what assistive tech was necessary). And yet, two licensed pilots are legally required in the cockpit of every commercial aircraft. Not because the autopilot can&#8217;t do it. But because when it fails, society needs a human face on the failure (which in the aviation industry is kinda ironic because they pilots die, too).<\/p>\n\n\n\n<p>Execution review deepens the analogy quite a bit. 
A modern commercial aircraft generates continuous flight data and cockpit voice recordings \u2014 the execution trace \u2014 that persist as mandatory legal artefacts regardless of how little the pilots actually did. The FAA doesn&#8217;t mandate those because it thinks pilots are incompetent. It mandates them because accountability requires evidence (even if the pilots also died in the crash). AI-generated code in production is heading toward the same institutional conclusion: the execution trace, not the code diff, will become the primary accountability document.<\/p>\n\n\n\n<p>For high-stakes software domains \u2014 think critical infrastructure, medical device firmware, nuclear facility management, weapons systems, large-scale financial settlement \u2014 a governance-mandated HITL will persist well past 2040, <a href=\"https:\/\/sdh.global\/blog\/ai-ml\/will-ai-replace-software-engineers-heres-what-the-data-really-shows\/\">as projections on AI capability timelines from bodies like Oak Ridge National Laboratory suggest<\/a>, not because AI lacks the capability but because democratic accountability structures require a human face (preferably alive, which the civilian aviation industry can&#8217;t always guarantee) on consequential decisions. That&#8217;s not a technical argument. It&#8217;s a political and ethical one, and those tend to be more durable.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">6. Conclusion: HITL Isn&#8217;t Dying, It&#8217;s Moulting to HITL&amp;ER (and eventually disappearing altogether?)<\/h2>\n\n\n\n<p>The &#8220;human in the loop&#8221; requirement in AI-generated code is not approaching a binary off-switch. It&#8217;s undergoing a <em>functional decomposition<\/em>: the <em>code review<\/em> component will become obsolete for most software domains by roughly 2030\u20132032, while the <em>execution review<\/em> component simultaneously matures into mandatory governance infrastructure. 
Both converge, eventually, on a residual <em>accountability layer<\/em> \u2014 legally and ethically mandated \u2014 that will persist indefinitely in high-stakes domains.<\/p>\n\n\n\n<p>The empirical evidence \u2014 <a href=\"https:\/\/ctse.aei.org\/the-ai-race-accelerates-key-insights-from-the-2025-ai-index-report\/\">SWE-bench&#8217;s 16x improvement in two years<\/a>, <a href=\"https:\/\/www.anthropic.com\/research\/measuring-agent-autonomy\">Anthropic&#8217;s data showing autonomous session durations doubling in three months<\/a>, <a href=\"https:\/\/www.trydeepteam.com\/docs\/frameworks-owasp-top-10-for-llms\">OWASP formally codifying excessive AI agency as a primary security risk<\/a>, and <a href=\"https:\/\/www.gravitee.io\/state-of-ai-agent-security\">only 47.1% of deployed AI agents currently being monitored<\/a> \u2014 collectively defines a transition point somewhere between 2028 and 2032. After that, HITL in software engineering shifts from an operational default to a deliberate policy choice (for better or for worse, depending on which jurisdiction you live in).<\/p>\n\n\n\n<p>The organisations that navigate this safely will be those that treat the decline of code review and the rise of execution review not as sequential events but as a single managed substitution. What remains after that point isn&#8217;t a legacy of limitation. It&#8217;s a feature of accountable governance. A distinction worth keeping hold of, probably.<\/p>\n<div class=\"ttr_end\"><\/div>","protected":false},"excerpt":{"rendered":"<p>The Slow, Inevitable Death of &#8220;Someone Needs to Check the AI&#8217;s Homework&#8221; Look, the whole &#8220;human in the loop&#8221; thing in AI-generated code? 
It&#8217;s dying a gory, horrific death\u2026 only not dramatically, not overnight \u2014 but measurably, and with increasing speed, driven by benchmark data that&#8217;s honestly kind of alarming, real-world deployment numbers, and the &hellip; <a href=\"https:\/\/lars-hilse.de\/lhx18\/2026\/03\/hitler-a-theoretical-framework-for-the-decline-of-human-oversight-in-ai-generated-code\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">HITL&amp;ER &#8211; A Theoretical Framework for the Decline of Human Oversight in AI-Generated Code<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":771,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[546],"tags":[539,533,534,538,543,545,536,537,532,542,531,535,540,541,544],"class_list":{"0":"post-770","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","6":"hentry","7":"category-ai","8":"tag-agentic-ai-observability","9":"tag-agentic-ai-oversight","10":"tag-ai-agent-autonomy","11":"tag-ai-agent-execution-tracing-compliance","12":"tag-ai-autonomy-legal-accountability","13":"tag-ai-coding-tools-developer-trust","14":"tag-ai-generated-code-without-human-review","15":"tag-autonomous-coding-agent-oversight-2026","16":"tag-execution-review-ai","17":"tag-future-of-code-review-ai","18":"tag-hitl-ai-governance","19":"tag-human-in-the-loop-ai","20":"tag-owasp-llm-excessive-agency","21":"tag-who-is-liable-for-ai-generated-code","22":"tag-will-ai-replace-software-engineers","24":"fallback-thumbnail"},"jetpack_featured_media_url":"https:\/\/i0.wp.com\/lars-hilse.de\/lhx18\/wp-content\/uploads\/2026\/03\/HITLER--A-Theoretical-Framework-
for-the-Decline-of-Human-Oversight-in-AIGenerated-Code.png?fit=960%2C640&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/paluiP-cq","jetpack_likes_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/posts\/770","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/comments?post=770"}],"version-history":[{"count":1,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/posts\/770\/revisions"}],"predecessor-version":[{"id":772,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/posts\/770\/revisions\/772"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/media\/771"}],"wp:attachment":[{"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/media?parent=770"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/categories?post=770"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lars-hilse.de\/lhx18\/wp-json\/wp\/v2\/tags?post=770"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}