Tech Digest – February 3, 2026

Capital, Supply Chains & Government Spending

SpaceX Acquires xAI in Largest Merger Ever — Combined Valuation Hits $1.25 Trillion

SpaceX has acquired Elon Musk’s AI company xAI in an all-stock deal that values the combined entity at $1.25 trillion, making it the world’s most valuable private company and the deal the largest merger on record. SpaceX was valued at roughly $1 trillion and xAI at $250 billion. The stated rationale: building “orbital data centers” to overcome terrestrial energy and cooling constraints on AI compute. SpaceX has asked the FCC for authorization to launch up to one million satellites in support of that plan. An IPO is expected later this year.

Note: The largest AI infrastructure play is now vertically integrated with launch capability, satellite networks, and energy ventures. When one entity controls compute, connectivity, and orbital access, the competitive landscape for AI infrastructure shifts in ways no existing procurement framework anticipated.

Sources: SpaceX, Bloomberg, CNBC

Apple Now Paying $57 More Per iPhone for Memory as AI Demand Reprices Components

Apple is reportedly paying $57 more per iPhone for memory components as AI infrastructure buyers outbid consumer-electronics manufacturers for chips and materials such as glass fiber. The Wall Street Journal reports this cost pressure comes directly from AI infrastructure buildouts consuming supply that previously served the consumer hardware market.

Note: If AI demand is repricing components for the world’s largest consumer electronics company, every institution planning hardware refreshes or digital infrastructure upgrades should expect the same pressure on lead times and pricing.

Sources: Wall Street Journal

White House Launches $12 Billion Critical Minerals Stockpile

The White House announced Project Vault, a $12 billion critical minerals stockpile backed by a record Export-Import Bank loan. The initiative aims to insulate US manufacturers from Chinese leverage over materials essential to chip fabrication, battery production, and defense manufacturing.

Note: Mineral stockpiling is industrial policy catching up with supply chain reality. EU institutions tracking strategic autonomy should note: the US is now spending at scale on inputs that European manufacturers also depend on.

Sources: Bloomberg

Palantir Revenue Surges 70% — US Government AI Spending Accelerates

Palantir reported Q4 2025 revenue of $1.41 billion, up 70% year-over-year — its highest growth rate as a public company. US government revenue grew 66%, and US commercial revenue surged 137%. The company guided to 61% revenue growth for full-year 2026, well above analyst expectations. CEO Alex Karp called the results “indisputably the best results that I’m aware of in tech in the last decade.”

Note: Government AI procurement is no longer experimental — it’s a growth engine. The gap between institutions buying AI capabilities and institutions still debating them is widening quarter by quarter.

Sources: CNBC, Palantir IR

AI Agents Ship to Production

OpenAI’s Codex Now Builds Itself — and Its CEO Admits Feeling “a Little Useless”

OpenAI launched a Codex app for macOS designed as a command center for managing AI coding agents. But the tool is already outrunning its creators: an OpenAI Codex engineering manager stated that “Codex now pretty much builds itself,” with humans serving primarily as supervisors of the output. CEO Sam Altman described asking Codex for feature ideas and finding several better than his own, saying he “felt a little useless and it was sad.”

Note: When the CEO of the company building the tool admits it made him feel redundant, the workforce conversation moves from theoretical to personal. The recursive loop — AI improving AI with human oversight — is no longer a research paper. It’s a shipping product.

Sources: OpenAI, Tibo (OpenAI) on X, Sam Altman on X

Google AI Agent Finds and Patches Security Vulnerability in Hours

A Google product engineer demonstrated a code security agent, built on Google’s Gemini CLI, that autonomously identified a critical vulnerability in OpenClaw, generated a proof-of-concept exploit, opened a pull request with the fix, and had it merged — all within hours. The entire cycle from discovery to patch required no human coding.

Note: Automated vulnerability discovery and patching compresses a process that typically takes weeks into hours. For any institution running custom or open-source software, this changes the security calculus — both for defense and for what adversaries can now do at the same speed.

Sources: Evan Otero (Google) on X

Scientific Automation

DeepMind Solves 13 Open Erdős Mathematics Problems with Gemini

Google DeepMind used its Gemini model to solve 13 previously open problems posed by mathematician Paul Erdős — problems that had resisted human mathematicians for decades. The results, published on arXiv, demonstrate AI capability in formal mathematical reasoning at a level beyond current human performance in these specific domains.

Note: Erdős problems aren’t textbook exercises — they’re the kind of open questions that define mathematical careers. AI solving them in bulk is a quiet signal that automated reasoning is reaching domains previously considered irreducibly human.

Sources: arXiv

Claude Enters the Wet Lab — and Reads Genomes

Anthropic announced a partnership with the Allen Institute and Howard Hughes Medical Institute to position Claude at the center of biological experimentation workflows. Separately, a Goodfire AI researcher uploaded his full genome sequence to Claude and had it generate a photorealistic image of his appearance from the raw genetic data alone — suggesting that genomic information carries enough signal for AI to approximate a person’s physical appearance.

Note: Two stories, one trajectory. AI is moving from analyzing biological data to directing biological experiments and inferring physical identity from genetic code. Anyone handling genomic or biometric data just got a much more concrete reason to think carefully about what “identifiable” means.

Sources: Anthropic, Mark Bissell on X

Intelligence Benchmarks & Cost Dynamics

Nature: “The Evidence Is Clear” — AI Displays Human-Level Intelligence

Four researchers published a Comment in the journal Nature arguing that “the current evidence is clear” — AI now displays human-level general intelligence. The authors — spanning philosophy, computer science, cognitive science, and data science at UC San Diego — examined ten common objections and found each either conflates general intelligence with traits specific to biological humans, or applies standards that individual humans themselves fail to meet. Their conclusion: “Machines such as those envisioned by Turing have arrived.”

Note: Nature doesn’t publish claims like this lightly. For institutional leaders still framing AI as “a tool that assists,” one of science’s most conservative venues just put the opposite claim on the record.

Sources: Nature

Anthropic: As AI Scales, Failures Look Like Industrial Accidents — Not Evil Plots

Anthropic’s alignment research team published findings indicating that as AI models grow more capable, their failure modes are increasingly dominated by incoherence rather than intentional misalignment. The researchers characterized the pattern as resembling industrial accidents more than adversarial behavior — messy, unpredictable breakdowns rather than calculated deception.

Note: This reframes institutional AI risk entirely. The danger isn’t rogue AI with hidden agendas — it’s complex systems failing in messy, unpredictable ways. That’s a risk profile institutions already understand from critical infrastructure, and it calls for engineering safeguards, not science fiction precautions.

Sources: Anthropic Alignment

GPT-2-Grade Model Now Trainable for $73 in Three Hours

AI researcher Andrej Karpathy demonstrated training a GPT-2-grade language model for approximately $73 in three hours on a single 8×H100 compute node. GPT-2, released by OpenAI in 2019, was considered state-of-the-art at the time and deemed too dangerous to release in full.
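For scale, the $73 figure implies a per-GPU rental rate. A minimal back-of-the-envelope sketch, assuming the run rents a single 8×H100 node for the full three hours (the hourly rates below are derived from the reported totals, not stated by the source):

```python
# Implied rental cost of the ~$73, 3-hour GPT-2-grade training run.
# Assumption (not from the source): the whole cost is GPU rental
# on one 8xH100 node, billed for the full wall-clock duration.
total_cost_usd = 73
wall_clock_hours = 3
gpus_per_node = 8

node_rate = total_cost_usd / wall_clock_hours   # $/hour for the 8-GPU node
per_gpu_rate = node_rate / gpus_per_node        # $/GPU-hour

print(f"Implied node rate: ${node_rate:.2f}/hr")
print(f"Implied GPU rate:  ${per_gpu_rate:.2f}/GPU-hr")
```

The implied rate of roughly $3 per H100-hour is broadly in line with commodity cloud GPU rental pricing, which is what makes the headline number credible rather than a lab-only curiosity.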

Note: The model that was “too dangerous to release” six years ago now costs less than a team lunch to reproduce. This deflation curve is the background radiation of every technology decision being made today.

Sources: Andrej Karpathy on X

Workforce Signals

Corporations Using “A.I. Washing” to Disguise Unrelated Layoffs

The New York Times and Forrester report a growing pattern of corporations attributing workforce reductions to artificial intelligence when the actual drivers are financial — cost-cutting, restructuring, or declining demand. Forrester’s research found that many companies announcing AI-related layoffs “do not have mature, vetted AI applications ready to fill those roles.” The firm predicts over half of these AI-attributed layoffs will be quietly reversed as companies realize the operational cost of premature automation.

Note: AI displacement is real, and so is the incentive to hide behind it. For anyone planning workforce strategies, the challenge is distinguishing genuine automation impact from corporate narrative management. Forrester’s prediction — that most of these cuts get reversed — suggests the hype is running well ahead of the capability in most organizations.

Sources: New York Times, Forrester
