Tech Digest – April 9, 2026

Cybersecurity Crosses the Superhuman Line

Anthropic’s Mythos Finds Thousands of Zero-Days — and OpenAI Is Preparing Its Own Cyber Model

Anthropic released a preview of Claude Mythos, a frontier model that reproduced and exploited software vulnerabilities on the first attempt in 83.1% of cases. The model has found thousands of high-severity flaws across every major operating system and browser. Anthropic will not release it publicly — instead, it launched Project Glasswing, a defensive coalition of roughly 40 partners including Microsoft, Apple, Amazon, CrowdStrike, and the Linux Foundation, backed by $100M in usage credits and $4M in direct donations to open-source security organisations.

Mythos is also the first model class trained at scale on Nvidia Blackwell GPUs, with Vera Rubin architecture next — a generational compute handoff happening while pre-training still has headroom and reinforcement learning is paying off. OpenAI is reportedly finalising its own cyber-capable model for staggered rollout to select partners through its “Trusted Access for Cyber” programme.

Note: An 83.1% first-attempt exploit rate inverts the economics of software security. Patching has always lagged discovery; now discovery is automated, and the lag compounds. Every institution running custom or legacy software just acquired a deadline it didn’t set.

Sources: Anthropic, TechCrunch, Axios, The Hacker News

The Frontier Race

xAI’s New President Admits the Lab Is “Clearly Behind” — While Running Seven Models Simultaneously

SpaceX executive Michael Nicolls, who leads Starlink, has taken over as xAI president and told staff the company is “clearly behind” rival frontier labs. The compute team’s training performance is “embarrassingly low,” according to an internal memo viewed by Business Insider. The reorganisation parachutes SpaceX engineering leadership into xAI ahead of SpaceX’s planned IPO, expected to value the combined entity at over $2 trillion.

The timing is striking. Elon Musk says Colossus 2 has seven models in training simultaneously — from Imagine V2 through twin 1T and 1.5T variants up to a 10T behemoth — each pre-training run lasting roughly two months. Nine of eleven original xAI co-founders have departed.

Note: Seven simultaneous training runs and an “embarrassingly low” efficiency rating in the same memo. Scale without execution is just an electricity bill.

Sources: Business Insider, Elon Musk / X

OpenAI Researchers Solve Five More Erdős Conjectures

A team including OpenAI researcher Mark Sellke published solutions to five previously open Erdős problems spanning combinatorics, probability, and number theory. Since October 2025, approximately 100 Erdős problems have moved into the “solved” column with AI assistance — though Fields Medalist Terence Tao notes the models function as advanced research assistants rather than autonomous mathematicians. The open conjectures of the 20th century are becoming closed tickets in the 21st.

Sources: arXiv, Scientific American

The Compute Squeeze

Inference Demand Grows 10× Per Year While Chinese Labs Train on 10× Less Compute

Cognition CEO Scott Wu estimates global GPU FLOPs are growing approximately 3× annually, while inference demand is growing roughly 10×. That widening gap between supply and demand points to rising compute prices and a structural shift toward smaller, more efficient models. Separately, Epoch AI calculates that Chinese and open-source labs are training on roughly one-tenth the compute of frontier Western labs: combined US AI capex exceeded $350 billion in 2025, versus under $40 billion from China’s major cloud providers.
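Those two growth rates compound quickly. A minimal sketch of the resulting gap, using only the cited 3× and 10× annual figures as inputs (the projection itself is illustrative, not from the source):

```python
# Illustrative compounding of the cited growth rates: GPU supply at ~3x/year,
# inference demand at ~10x/year, both indexed to 1.0 today.
def projected_gap(years: int, supply_growth: float = 3.0, demand_growth: float = 10.0) -> float:
    """Ratio of demand to supply after `years` of compound growth."""
    return (demand_growth ** years) / (supply_growth ** years)

for y in (1, 2, 3):
    print(f"Year {y}: demand outruns supply by ~{projected_gap(y):.1f}x")
```

Even at these rough rates the shortfall roughly triples every year, which is the pressure behind both rising prices and the flight to smaller models.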

Note: The 10× demand-supply gap and the 10× East-West compute gap are the same number with very different consequences. One drives prices up. The other drives architectural creativity. Institutions planning AI procurement should expect both effects — higher costs and surprisingly capable alternatives from constrained environments.

Sources: Scott Wu / X, Epoch AI

TSMC’s Advanced Packaging Grows 80% Annually as Meta Commits Another $21 Billion to CoreWeave

TSMC’s CoWoS advanced chip packaging — the critical bottleneck for AI accelerators — is compounding at 80% annually, with Nvidia securing the majority of capacity through 2027. TSMC is scaling from 35,000 wafers per month in late 2024 to a projected 130,000 by the end of 2026, while also building two new packaging facilities in Arizona.
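As a sanity check, the growth rate implied by the wafer ramp itself can be computed directly (the capacity figures and the roughly two-year window come from the report; the calculation is ours):

```python
# Implied compound annual growth from the cited CoWoS ramp:
# 35,000 wafers/month in late 2024 -> 130,000 by end of 2026 (~2 years).
def implied_cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two capacity levels."""
    return (end / start) ** (1 / years) - 1

rate = implied_cagr(35_000, 130_000, 2)
print(f"Implied annual growth: {rate:.0%}")  # ~93%, above the 80% headline rate
```

The ramp implies a rate somewhat above the 80% headline figure, consistent with the bottleneck still tightening rather than easing.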

Meta committed an additional $21 billion to CoreWeave for AI cloud capacity through December 2032, on top of a prior $14.2 billion agreement. The expanded deal focuses on inference rather than training, and will include some of the first deployments of Nvidia’s Vera Rubin platform across multiple data centre locations.

Note: $35.2 billion from a single customer to a single cloud provider, running through 2032. These are not annual budgets — they’re infrastructure commitments with multi-year lock-ins. The supply chain for AI compute is being pre-purchased at a pace that narrows options for everyone else.

Sources: CNBC, CNBC, CoreWeave

Energy & Infrastructure

OpenAI Pauses UK Stargate Buildout Over Energy Costs and Regulation

OpenAI paused its Stargate data centre project in the UK, citing energy costs and the broader regulatory environment. The site at Cobalt Park, Tyneside, was planned to house roughly 8,000 Nvidia processors in partnership with Nscale. UK industrial electricity prices are among the highest globally. OpenAI said it would proceed once conditions supported “sustained, long-term investment,” but the pause comes as the company reins in spending ahead of its anticipated public listing.

Note: The UK offered fast planning approvals and a willing energy secretary. OpenAI still walked. For any European government competing for AI infrastructure investment, the message is blunt: grid capacity and energy pricing are now site-selection criteria that outrank regulation.

Sources: Bloomberg, IT Pro

Germany Builds the World’s Tallest Wind Turbine — 364 Metres, Inside a Coal Mine

Dresden-based engineering firm GICON and partner Beventum have begun construction of a 364-metre wind turbine in Schipkau, Brandenburg, on the site of a former lignite mine. The hub sits at 300 metres, accessing low-level jet streams previously reachable only by offshore installations. Expected annual output is 30–33 GWh — 220% more than conventional turbines nearby — at under five cents per kilowatt-hour. If connected to the grid by late 2026 as planned, it will be the tallest wind energy structure ever built.
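To put the output figure in household terms (the 30–33 GWh range is from the report; the per-household consumption is an assumed illustrative value, not a sourced one):

```python
# Rough scale of one turbine's cited annual output in household terms.
annual_output_gwh = 31.5      # midpoint of the cited 30-33 GWh range
household_kwh = 3_500         # ASSUMED average annual household consumption
homes_supplied = annual_output_gwh * 1e6 / household_kwh
print(f"One turbine supplies roughly {homes_supplied:,.0f} households per year")
```

At the assumed consumption figure, a single 364-metre turbine covers on the order of nine thousand households, which is what makes the 1,000-turbine target a grid-scale proposition rather than a demonstration project.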

GICON aims to install 1,000 similar turbines across Germany by 2030, focusing on former open-cast coal mines — starting with Bavaria.

Note: Not a prototype — a business model. Repurposing industrial brownfield sites to reach wind speeds the industry had written off as offshore-only. If the economics hold at scale, Germany’s coal legacy becomes its renewable infrastructure advantage.

Sources: Euronews, Clean Energy Wire

Capital at Escape Velocity

UBS Model Values Nvidia at $22 Trillion — OpenAI Confirms Retail IPO Access

UBS’s HOLT framework — a conservative, cash-flow-based valuation tool — now places Nvidia’s fair value at $22 trillion, roughly half the entire S&P 500. The calculation rests on a 73% cash-flow return on investment, versus 6% for the average non-financial company — placing Nvidia in the top 0.1% of all companies ever measured in the HOLT database. Investor reaction tends toward disbelief, but the output is a function of the underlying data, not a discretionary call.

Separately, OpenAI CFO Sarah Friar confirmed retail investors will “for sure” receive shares in the company’s IPO, following strong individual demand during its latest funding round. Capital markets are attempting to buy in while the Singularity is still priced in dollars.

Note: When an old-school cash-flow model built for steady-state businesses outputs “half the S&P 500 in one company,” either the model is broken or the input is unprecedented. Nvidia’s CFROI says it’s the latter. Public markets are about to price two of the three leading frontier AI providers — Nvidia and OpenAI — in real time, side by side.

Sources: The Information, CNBC


Today’s developments converge on a single tension: AI capability is accelerating while the infrastructure to support it — chips, energy, data centres, financial frameworks — strains to keep pace. Mythos rewrites cybersecurity assumptions overnight. Meta locks in $35 billion of compute through 2032. OpenAI pauses a UK data centre because the energy economics don’t work. And Germany begins building the future on top of the buried past. The institutions that will navigate this well are the ones treating infrastructure — energy, compute, workforce, governance — as the binding constraint, not the AI itself.
