Tech Digest – February 11, 2026

AI Capabilities & Competition

xAI Co-Founder Resigns, Says Recursive Self-Improvement Goes Live Within 12 Months

Jimmy Ba, co-founder of xAI, announced his departure from the company. In his farewell post, Ba stated that “recursive self-improvement loops likely go live in the next 12 months” and called 2026 “the most consequential year for the species.” Ba co-founded xAI with Elon Musk and is a prominent AI researcher known for foundational work on layer normalization.

Note: This isn’t a pundit making a prediction from the outside. It’s someone who helped build a frontier lab saying, on his way out, that the timeline is shorter than most institutional planning cycles.

Sources: Jimmy Ba on X

Multi-Model Orchestration Hits 55% on Humanity’s Last Exam — No Single Model Comes Close

Poetiq, a startup founded by former Google DeepMind researchers, achieved a new state-of-the-art 55% on Humanity’s Last Exam (HLE) by orchestrating a combination of Gemini, GPT, and Claude models. No individual frontier model has crossed 35% on HLE. Poetiq’s approach dynamically routes questions to whichever model is strongest for that problem type, then synthesizes a confidence-weighted answer. The company recently raised $45.8 million in seed funding.
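Poetiq has not published the details of its orchestration method, but the confidence-weighted synthesis it describes can be sketched. A minimal illustration, assuming each model returns its answer along with a self-reported confidence score (the function and sample responses below are hypothetical):

```python
from collections import defaultdict

def weighted_vote(responses):
    """Aggregate (answer, confidence) pairs from several models by
    summing confidence per distinct answer and returning the answer
    with the highest total weight."""
    scores = defaultdict(float)
    for answer, confidence in responses:
        scores[answer] += confidence
    return max(scores, key=scores.get)

# Hypothetical responses from three models on one question:
responses = [
    ("42", 0.9),  # model A, high confidence
    ("41", 0.6),  # model B disagrees
    ("42", 0.5),  # model C agrees with A
]
print(weighted_vote(responses))  # "42" wins with total weight 1.4 vs 0.6
```

In a real router there would also be a dispatch step in front of this, sending each question only to the models historically strongest on that problem type; the sketch shows only the aggregation.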

Note: The result that matters here isn’t the benchmark number — it’s the method. The best performance doesn’t come from any single vendor’s model. It comes from combining them. For anyone writing procurement specs that lock into a single AI provider, this is an uncomfortable data point.

Sources: Poetiq

Nvidia Endorses 12x Training Speed Breakthrough at 35% Less Memory

Unsloth AI released open-source Triton kernels that enable 12x faster AI model fine-tuning while using 35% less VRAM — with no loss in accuracy. Nvidia’s official AI developer account called the result “incredible,” noting it enables fine-tuning of large mixture-of-experts models on just 16 GB of VRAM — hardware that costs a few hundred euros.

Sources: Nvidia AI Dev on X

OpenAI Ships Deep Research Upgrade Powered by GPT-5.2

OpenAI released an updated version of its Deep Research tool, now powered by GPT-5.2. The tool can connect to external apps, search specific sites, track progress in real time, and deliver fullscreen reports. Deep Research is positioned as an autonomous research agent capable of sustained multi-step investigation with minimal user input.

Note: Research tasks that currently take a junior analyst days — gathering sources, cross-referencing, producing a structured summary — are now available as a feature toggle. The gap between what AI tools can do and what most organizations ask of them keeps widening.

Sources: OpenAI on X

When the Model Knows Too Much

ByteDance Suspends AI Video Model After It Cloned Voices From Facial Photos Alone

ByteDance urgently suspended a feature of its Seedance 2.0 AI video generation model after it was shown to generate highly accurate voice clones from nothing more than a facial photograph, with no voice samples and no user consent. The discovery was made by Pan Tianhong, founder of the tech outlet MediaStorm, who uploaded a personal photo and received audio nearly identical to his actual voice. ByteDance now requires live verification (recording your own face and voice) before avatar creation and has blocked realistic human likenesses as reference inputs.

Note: A model inferring voice characteristics from facial geometry is a new category of biometric risk. Identity verification systems that treat face and voice as independent factors may need rethinking. For any institution handling citizen identity, this is a compliance surface that didn’t exist last week.

Sources: TechNode, Azat TV

Science Becoming Automated

Drug Discovery, Materials Science, and DNA Design — All Accelerating in the Same Week

Three developments landed in the same week. Isomorphic Labs (Alphabet/DeepMind) unveiled IsoDDE, a drug design engine that more than doubles AlphaFold 3's accuracy on the hardest protein-ligand prediction tasks and identifies binding pockets from amino acid sequences alone, compressing months of lab work into seconds. In China, a team published results in Matter, a Cell Press journal, showing that 19 coordinated LLM agents optimized perovskite synthesis in 3.5 hours, a process that normally takes months of manual experimentation. And the Allen Institute announced a collaboration with Anthropic to design custom DNA sequences for disease research, moving AI from analyzing biology to architecting it.

Note: Each of these would be a standalone story in a normal month. Together, they mark a shift: AI is no longer just reading science — it’s doing it. The timeline from research breakthrough to institutional procurement impact is compressing in ways that budget cycles aren’t designed for.

Sources: Isomorphic Labs, Cell/Matter, Allen Institute

Agents Cross Into Physical Operations

Anthropic Is Using Claude to Run Its Office Vending Machines — As a Dress Rehearsal

A New Yorker profile of Anthropic revealed “Project Vend,” in which the company uses Claude to autonomously manage its office vending machines — handling inventory, restocking decisions, and operational logistics. Anthropic frames this as a dress rehearsal for AI agents running small businesses. The project is described as an internal testbed for real-world agent autonomy beyond digital tasks.

Note: Vending machines are simple. That’s the point. The progression from “summarize this document” to “run this physical operation” is happening inside the labs right now. The question for institutional planners: how many operational workflows sit at a similar complexity level?

Sources: The New Yorker

Infrastructure & Capital

Alphabet Raises $32 Billion in 24 Hours While Cisco Ships 102 Tbps Switches for AI

Alphabet completed a $32 billion debt sale in a single day, reported as a record corporate bond offering in some markets, to fund its AI infrastructure buildout. Separately, Cisco unveiled the Silicon One G300, a 102.4 terabit-per-second networking switch designed specifically for massive AI data center clusters. The capital and the hardware are arriving in lockstep.

Note: $32 billion in debt in 24 hours isn’t an investment round — it’s an industrial mobilization. When this much capital enters infrastructure this fast, it reshapes supply chains, energy demand, and construction timelines for everything downstream, including public-sector digital infrastructure.

Sources: Bloomberg, Cisco

DOE Clears First Full-Power Microreactor Test for This Summer

The U.S. Department of Energy approved Radiant Nuclear’s Preliminary Documented Safety Analysis — the first such approval granted under the DOE’s new authorization pathway. This clears Radiant for a full-power test of its microreactor at Idaho National Laboratory’s DOME facility this summer. The company targets portable nuclear power for remote and off-grid applications.

Note: Microreactors that run for years without refueling open a path for distributed power that doesn’t depend on grid expansion. For regions struggling to power new data centers or remote infrastructure, this is a timeline worth tracking.

Sources: Radiant Nuclear on X

The Workforce Equation

Nvidia Is Worth 20x More Than 1985 IBM — With One-Tenth the Workforce

The Wall Street Journal reported that Nvidia is now roughly 20 times more valuable than IBM was at its peak in 1985, adjusted for context — while employing approximately one-tenth the people. The comparison lands as Bloomberg reports the U.S. population is expected to decline for the first time, driven by immigration policy changes, just as AI reduces the need for human labor in growing categories of work.
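The two ratios quoted above imply a striking figure for value created per employee. A quick back-of-envelope check, using only the numbers in the WSJ framing:

```python
# Back-of-envelope: if Nvidia's valuation is ~20x peak-1985 IBM's
# and its headcount is ~1/10th of IBM's then, value per employee
# scales as the ratio of the two ratios.
valuation_ratio = 20       # Nvidia value / IBM 1985 value
headcount_ratio = 1 / 10   # Nvidia employees / IBM 1985 employees

value_per_employee_ratio = valuation_ratio / headcount_ratio
print(value_per_employee_ratio)  # 200.0 -> roughly 200x more value per employee
```

Both inputs are rough, so the 200x figure is an order-of-magnitude illustration, not a precise measurement.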

Note: The world’s most valuable companies need fewer people. The population is shrinking. AI is absorbing the gap. These three facts are converging at the same time, and they challenge every workforce planning assumption built on the premise that growth requires headcount.

Sources: Wall Street Journal, Bloomberg
