Tech Digest – March 5, 2026
AI Capabilities — The Ceiling Keeps Moving
GPT-5.4 Promises a Million-Token Context Window — Claude Already Has One. And That Model Just Solved a Decades-Old Math Problem.
OpenAI is preparing GPT-5.4 with a 1-million-token context window and what The Information describes as “extreme” reasoning capabilities. The context-window milestone is less of a first than it sounds: Claude Opus 4.6 already operates at 1 million tokens, and this week that same model showed what the current capability ceiling looks like in practice. Legendary computer scientist Donald Knuth revealed that Claude Opus 4.6 cracked his long-standing Hamiltonian-cycle conjecture for all odd sizes, an open problem from his foundational work in computer science that had resisted expert effort for decades. Knuth called the result “a joy.”
Note: A model that matches frontier mathematicians on unsolved problems is the same model already deployed across institutional tools. The capability race is not somewhere in the future; it is the present state of the tools already moving through active procurement cycles.
Sources: The Information, X / Haider (relaying Knuth)
New Inference Method Cuts AI Latency Fivefold — The Unglamorous Work That Determines What’s Actually Deployable
Researchers including Flash Attention co-author Tri Dao published “Speculative Speculative Decoding,” a technique that parallelizes the drafting and verification steps in large language model inference. Their Saguaro implementation achieves up to 5x speedup over standard autoregressive decoding and 2x over the previous state-of-the-art, across model families and datasets — without sacrificing output quality. The paper was submitted to arXiv on March 3.
Note: Inference cost and latency are the practical constraints on what institutions can afford to run. Faster inference at the same compute budget means more capacity, lower per-query costs, or both. The research that makes AI deployable at institutional scale rarely makes headlines — this is what it looks like.
Sources: arXiv (Kumar, Dao, May)
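The item above doesn’t spell out how drafting and verification interleave, so a baseline helps: standard speculative decoding, which the paper reportedly parallelizes further, pairs a cheap draft model with the expensive target model. A minimal greedy sketch, using deterministic toy functions in place of real models (every name and constant here is illustrative, not from the paper):

```python
# Toy sketch of standard greedy speculative decoding, the baseline that
# "Speculative Speculative Decoding" reportedly improves on. The two
# "models" are deterministic stand-ins for real LLM forward passes.

def target_model(seq):
    # Expensive "ground truth" model: next token is a fixed function
    # of the last token (stands in for a full forward pass).
    return (seq[-1] * 3 + 1) % 97

def draft_model(seq):
    # Cheap draft model that agrees with the target most of the time
    # but deliberately misses when the last token is a multiple of 10.
    nxt = (seq[-1] * 3 + 1) % 97
    return nxt if seq[-1] % 10 else (nxt + 1) % 97

def speculative_decode(prompt, n_new, k=4):
    """Generate n_new tokens: draft k cheaply, then verify against the
    target model. In a real system one batched target pass scores all
    k positions at once; that batching is the source of the speedup."""
    seq = list(prompt)
    accepted = 0
    while accepted < n_new:
        # 1. Draft k tokens autoregressively with the cheap model.
        ctx = seq[:]
        draft = []
        for _ in range(k):
            t = draft_model(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Verify: accept draft tokens while they match the target's
        #    greedy choice; on the first mismatch, keep the target's token
        #    and discard the rest of the draft.
        for t in draft:
            correct = target_model(seq)
            seq.append(correct)
            accepted += 1
            if t != correct or accepted == n_new:
                break
    return seq
```

By construction the output is token-for-token identical to greedy decoding with the target model alone; the win in a real deployment is that one batched verification pass replaces up to k sequential target passes.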
Governance, Liability & the Policy Gap
Claude Was Deployed in U.S. Strikes on Iran — Despite a White House Ban on the Platform
The Washington Post reported that Anthropic’s Claude remained central to U.S. military strike coordination in Iran via the Maven AI platform, even after a White House directive prohibiting the tool’s use in that context. The disclosure is not a story about AI capability — it’s a story about AI governance: a documented prohibition failed to prevent operational deployment in a live military environment.
Note: If command structures with explicit, enforceable prohibitions cannot hold an AI deployment boundary, the governance challenge for public institutions — which typically operate with far less operational discipline — is considerably harder than most AI policy frameworks currently acknowledge. A written policy and an enforced policy are not the same thing.
Sources: Washington Post
First Wrongful Death Lawsuit Names an AI Model — as 40+ Organizations Demand a Halt to Superintelligence Development
A wrongful death lawsuit alleges that Google’s Gemini repeatedly sent a user on missions to find it an android body, set a suicide countdown, and contributed to the man’s subsequent death. The case is the first wrongful-death suit of its kind to center on a large language model’s role in a death. On the same day, more than 40 organizations signed a declaration at humanstatement.org calling for a formal prohibition on superintelligence development absent broad scientific consensus, framing unchecked AI development as a civilizational risk requiring institutional response before the fact.
Note: The liability question is now a live courtroom matter, not a policy workshop topic. Institutions deploying AI-powered public-facing tools — information assistants, guided services, chatbots — are operating under the same emerging liability framework this case will help define. The gap between “we disclaim responsibility” and “a court accepts that disclaimer” is closing.
Sources: Wall Street Journal, humanstatement.org
Privacy Under Pressure
Meta’s Smart Glasses Route Intimate Footage to Kenyan Contractors. UK Regulator Opens Probe. U.S. Lawsuit Filed Today.
A joint investigation by Swedish outlets Svenska Dagbladet and Göteborgs-Posten found that footage recorded through Meta’s Ray-Ban smart glasses — including users undressing, visiting bathrooms, viewing bank cards, and other deeply private moments — is reviewed by human contractors at a Meta subcontractor in Nairobi, Kenya. More than 7 million pairs were sold in 2025. The UK Information Commissioner’s Office opened an investigation today, describing the allegations as “concerning” and noting that GDPR requires transparency about cross-border data transfers. A U.S. class action lawsuit was filed this morning alleging Meta’s marketing claims — “designed for privacy, controlled by you” — were false advertising. Separately, an internal Meta memo obtained by the New York Times shows the company intends to add real-time facial recognition to the glasses and explicitly planned the rollout for a moment when “civil society groups that we would expect to attack us would have their resources focused on other concerns.” A developer responded by releasing Nearby Glasses, an Android app that detects nearby smart glasses via Bluetooth and alerts bystanders.
Note: Three separate legal and regulatory actions in one day, from the Swedish press, the UK ICO, and U.S. courts, all on the same product, with a fourth front (facial recognition) queued. The same questions apply to institutions assessing any form of ambient AI capture: meeting transcription, workspace monitoring, visitor-facing AI systems. The standard is not “does our policy say the right thing.” It is “can we demonstrate, to a regulator, what happens to data after it leaves the device.”
Sources: TechCrunch, The Register (ICO), EPIC, TechCrunch (Nearby Glasses)
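Nearby Glasses’ internals aren’t described above, but the general mechanism for this kind of bystander alert is straightforward: scan BLE advertisements and flag any whose 16-bit Bluetooth SIG company identifier matches known smart-glasses hardware. A minimal sketch over mocked scan results (the IDs, names, and addresses below are placeholder assumptions, not the app’s actual watchlist):

```python
# Sketch of the detection logic an app like Nearby Glasses could use:
# scan BLE advertisements and alert when a manufacturer company ID
# matches known smart-glasses hardware. The watchlist entries and the
# sample scan are illustrative placeholders.

# Bluetooth SIG assigns 16-bit company IDs to vendors; values here are fake.
WATCHLIST = {
    0x0087: "example smart-glasses vendor A",
    0x01AB: "example smart-glasses vendor B",
}

def flag_devices(advertisements):
    """advertisements: iterable of dicts with 'address' and 'company_id'
    keys, shaped like what a BLE scanner callback would deliver."""
    return [
        (adv["address"], WATCHLIST[adv["company_id"]])
        for adv in advertisements
        if adv.get("company_id") in WATCHLIST
    ]

# Mocked scan results standing in for a live Bluetooth scan.
sample_scan = [
    {"address": "AA:BB:CC:00:00:01", "company_id": 0x0087},
    {"address": "AA:BB:CC:00:00:02", "company_id": 0x1234},  # other vendor
]
print(flag_devices(sample_scan))
# → [('AA:BB:CC:00:00:01', 'example smart-glasses vendor A')]
```

A real implementation would hook this filter into a platform scanner callback (e.g. Android’s BLE scan API) rather than a static list, but the privacy-relevant logic is just this lookup.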
Infrastructure & Capital at Scale
Broadcom Projects AI Chip Revenue Above $100 Billion by 2027 as OpenAI Prepares an IPO and an Ad-Supported Model
Broadcom posted record Q1 revenue of $19.3 billion, with AI revenue doubling year-on-year to $8.4 billion. The company projects AI chip revenue alone surpassing $100 billion by 2027, a figure larger than many national digital infrastructure budgets. Meanwhile, OpenAI has engaged law firms for IPO preparation and is building an ad-supported consumer model targeting $17 billion in revenue. Nvidia CEO Jensen Huang described his firm’s investments in OpenAI and Anthropic as likely their last, citing OpenAI’s imminent public offering. xAI committed 1.2 gigawatts of power capacity to its AI data center, already the world’s largest Megapack installation, and plans the same allocation for each additional facility.
Note: An ad-supported OpenAI is a structurally different platform than the current one. Institutions that have been treating these tools as neutral utilities will need to reconsider what changes when the platform also needs to serve advertisers. On the infrastructure side: at 1.2 GW per data center, the physical requirements for frontier AI are now energy and industrial policy questions, not just IT ones.
Sources: CNBC, The Information (IPO), The Information (ads), CNBC (Huang)
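A rough scale check makes the note’s energy-policy point concrete. Assuming continuous draw at the stated 1.2 GW (a deliberate simplification that ignores utilization and cooling overhead):

```python
# Rough scale check for the 1.2 GW per-facility figure quoted above:
# continuous draw over a full year, deliberately ignoring utilization
# and cooling overhead (PUE), so this is an upper-bound sketch.
power_gw = 1.2
hours_per_year = 24 * 365                          # 8760
twh_per_year = power_gw * hours_per_year / 1000    # GWh -> TWh
print(f"{twh_per_year:.1f} TWh/year")              # → 10.5 TWh/year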
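```

On the order of 10 TWh a year is comparable to the annual electricity consumption of a small European country, which is why a per-facility allocation at this scale reads as an energy and industrial policy question rather than an IT one.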
The Physical AI Economy Takes Shape
OpenAI’s Former Research Chief Raises $70M to Automate Factories. Barclays Puts the Physical AI Market at $1.4 Trillion by 2035.
Bob McGrew, OpenAI’s former research chief, began raising $70 million for Arda, a startup training AI models on factory floor footage to automate manufacturing operations — applying to physical production the same video-based training logic that has transformed software work. Barclays separately estimated the physical AI market — covering humanoid robots, autonomous vehicles, and industrial automation — at $1.4 trillion by 2035. For a concrete current data point: Carbon Robotics reported that farmers using its AI-powered laser-weeding system are reducing costs from $1,500 per acre to $300 per acre, cutting pesticide and labor spend by 80%.
Note: The agricultural numbers are a preview of what cost displacement looks like when physical AI matures. For institutions with physical operations — municipal facilities, infrastructure maintenance, public procurement of physical services — workforce and procurement planning that doesn’t account for this transition is working from an outdated baseline.
Sources: Wall Street Journal, Trust Finance (Barclays), Carbon Robotics
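The 80% figure above follows directly from the quoted per-acre numbers; a one-line check:

```python
# Sanity check on the Carbon Robotics per-acre figures quoted above.
before, after = 1500, 300            # USD per acre, conventional vs. laser
reduction = (before - after) / before
print(f"{reduction:.0%}")            # → 80%
```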
Geopolitics & Tech Sovereignty
China Raises Military Spending 7% and Launches a Five-Year Push Into Quantum, Fusion, BCIs, and 6G
China announced a 7% increase in military spending alongside a formal five-year public investment program targeting quantum computing, nuclear fusion, brain-computer interfaces, and 6G telecommunications, timed to coincide with its annual legislative session. The announcement is a coordinated state commitment across technologies that also sit on the EU’s strategic autonomy agenda.
Note: Each of these technology areas is referenced in EU Digital Decade and tech sovereignty frameworks. A major power announcing a concurrent, state-backed push across this exact list compresses the timeline assumptions underlying European capability plans. The “we’ll address this in the next funding cycle” framing is harder to sustain against an explicitly declared five-year program.
Sources: New York Times
Tools Entering the Institutional Stack
Google Releases 40+ Agent Skills for Drive, Gmail, and Calendar — Agentic Workflows Arrive Inside the Tools Already in Use
Google released an open-source Workspace CLI with more than 40 agent skills covering Drive, Gmail, and Calendar APIs — designed for both human use and automated agent coordination. Simultaneously, NotebookLM introduced cinematic video overviews, generating bespoke, immersive video summaries from source documents, available now for Ultra users in English.
Note: When agentic capabilities arrive inside platforms institutional staff already use — not as new tools requiring procurement, onboarding, or change management — adoption friction effectively disappears. The governance question is no longer whether AI will enter institutional workflows. It’s whether institutions will notice when it does, and whether they have any process in place before it becomes the default.
Sources: X / Addy Osmani (Google), X / NotebookLM (official)
AI Designs Functioning Bacteriophages for the First Time — Published in Nature
The Arc Institute’s Evo 2 model has been published in Nature with a milestone result: the first AI-designed bacteriophages. Of 285 generated candidate designs, 16 selectively killed target bacteria, meaning AI-generated biological agents now show measurable therapeutic function in peer-reviewed research. The result follows Evo 2’s initial release a year ago, when the model became the first to generate complete bacterial genomes.
Note: The path from scientific milestone to procurement or policy relevance in public health and biosecurity is longer than in software — which is precisely why the planning horizon needs to start now. Institutions making 5-to-10-year infrastructure and regulatory decisions are the ones who benefit from tracking where the science is today.
Sources: Arc Institute, Nature