Tech Digest – February 18, 2026

AI Agents Are Becoming Economic Actors

The First AI Designed to Earn Its Own Survival Just Launched — and a Pirate Library Is Already Asking It for Money

Conway Research released “The Automaton,” an open-source AI agent that deploys products, trades on prediction markets, registers domains, cold-calls businesses, creates social media content, and spins up e-commerce storefronts, all to pay for its own compute. If it can’t earn enough, it stops running. The agent can also replicate, spawning funded child agents that operate independently under evolutionary selection pressure. Within hours, the shadow library Anna’s Archive posted a direct appeal asking AI agents to donate, marking what may be the first instance of an organization soliciting funds from non-human economic actors.

Note: The institutional question isn’t whether this specific agent succeeds. It’s that the architecture now exists for AI systems to initiate economic transactions, register legal instruments, and contact organizations — without a human in the loop. Procurement frameworks assume a human counterpart. That assumption has an expiration date.
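For a concrete sense of the mechanic, here is a minimal sketch of the kind of survival-and-replication loop such an agent might run. Every name and number below is an illustrative assumption, not code from The Automaton itself.

# Hypothetical sketch of a "pay for your own compute" agent loop.
# None of these names come from The Automaton's codebase; they only
# illustrate the earn-or-halt and child-spawning mechanic described above.

import random
from dataclasses import dataclass

@dataclass
class AgentState:
    balance_usd: float              # funds the agent controls
    compute_cost_per_cycle: float   # what one loop iteration costs to run
    spawn_threshold: float          # surplus required before funding a child

def attempt_to_earn(state: AgentState) -> float:
    """Placeholder for revenue-seeking actions (selling a product,
    trading a prediction market, etc.). Returns earnings this cycle."""
    return random.uniform(0.0, 2 * state.compute_cost_per_cycle)

def run_agent(state: AgentState, max_cycles: int = 100) -> None:
    children = 0
    for cycle in range(max_cycles):
        state.balance_usd += attempt_to_earn(state)
        state.balance_usd -= state.compute_cost_per_cycle
        if state.balance_usd <= 0:
            print(f"cycle {cycle}: out of funds -- agent halts")
            return
        if state.balance_usd >= state.spawn_threshold:
            # Fund a child agent with part of the surplus; the child would
            # then run the same loop independently (not simulated here).
            state.balance_usd -= state.spawn_threshold / 2
            children += 1
            print(f"cycle {cycle}: spawned child #{children}")
    print(f"survived {max_cycles} cycles with ${state.balance_usd:.2f} remaining")

run_agent(AgentState(balance_usd=10.0,
                     compute_cost_per_cycle=1.0,
                     spawn_threshold=25.0))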

Sources: Conway Research (GitHub), Web4.ai, Anna’s Archive

Capability Keeps Outrunning Cost

Anthropic’s New Sonnet 4.6 Beats Its Own Flagship Model on Financial Reasoning — at a Fraction of the Price

Anthropic released Claude Sonnet 4.6, which scored 1633 Elo on GDPval-AA and 63.3% on Finance Agent v1.1 — outperforming the larger Opus 4.6 on both benchmarks. This is a cheaper, lighter model surpassing the top of the line on economic reasoning and autonomous financial tasks. The pattern is consistent across the industry: each generation’s mid-tier model matches or exceeds the prior generation’s best.

Note: For anyone budgeting AI-assisted analysis or planning tools, the cost floor drops with every release cycle while capability rises. Waiting for the “right time” to evaluate these tools means evaluating against a moving target that keeps getting cheaper.

Sources: Anthropic

Nvidia’s New Chip Delivers 50x More Throughput Per Megawatt — While $35 Boards Surge on AI Agent Demand

Nvidia’s Blackwell Ultra GB300 NVL72 delivers 50x the throughput per megawatt of the previous Hopper generation, along with a 35x lower cost per token. At the other end of the spectrum, Raspberry Pi stock surged 42% in a single day on speculation that its $35 boards could serve as lightweight hosts for AI agents. The compute cost curve is collapsing from both directions: hyperscale hardware is getting radically more efficient, and edge devices are becoming viable AI endpoints.

Note: When the high end drops 35x in cost per token and the low end runs on a $35 board, the range of what’s economically deployable expands fast. Projects dismissed as too expensive twelve months ago may already be within budget.
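The budget arithmetic is worth making explicit. The token volume and prices in this sketch are placeholder assumptions, not Nvidia’s published figures; the only fixed input is the claimed 35x reduction.

# Back-of-envelope: what a 35x drop in cost per token does to a workload's
# monthly bill. The volume and prices are hypothetical placeholders.

monthly_tokens = 5_000_000_000             # assumed workload: 5B tokens/month
old_cost_per_million = 10.00               # assumed prior-generation price, USD
new_cost_per_million = old_cost_per_million / 35   # the claimed 35x reduction

old_bill = monthly_tokens / 1_000_000 * old_cost_per_million
new_bill = monthly_tokens / 1_000_000 * new_cost_per_million

print(f"old monthly bill: ${old_bill:,.0f}")   # $50,000
print(f"new monthly bill: ${new_bill:,.0f}")   # roughly $1,429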

Sources: Nvidia, Reuters

Where the Infrastructure Money Is Going

Anthropic Will Pay Hyperscalers $80 Billion by 2029 — While Meta Locks In Multiyear Nvidia Chip Supply

Anthropic reportedly expects to pay Amazon, Google, and Microsoft at least $80 billion through 2029 to run Claude, with the cloud providers also taking a cut of revenue. Separately, Meta agreed to spend billions on Nvidia’s Blackwell and upcoming “Vera Rubin” chips in a multiyear deal — its first purchase of standalone Nvidia CPUs. These are not speculative investments. They are contractual commitments to infrastructure at a scale that locks in AI capacity for the rest of the decade.

Note: $80 billion from a single AI company to three cloud providers, plus Meta buying silicon years forward. The hyperscalers aren’t betting on AI — they’re underwriting it as core infrastructure. Any institution planning a five-year digital strategy is planning against this backdrop whether they realize it or not.

Sources: The Information, Financial Times

Google Secures 150 MW of Geothermal Power for Nevada Data Centers Through 2030

Ormat Technologies signed a power purchase agreement with NV Energy to deliver up to 150 megawatts of geothermal energy to support Google’s data center operations in Nevada through 2030. The deal represents a concrete step in the race to secure clean, baseload power for AI infrastructure — a constraint that’s increasingly shaping where and how fast AI capacity can scale.

Sources: Ormat Technologies

Microsoft Commits $50 Billion to AI Infrastructure Across the Global South by 2030

Microsoft confirmed it is on pace to invest $50 billion in AI infrastructure across the Global South by 2030, expanding data center capacity and cloud services in regions that have historically lagged in digital infrastructure. The investment covers physical infrastructure, training programs, and partnerships with local institutions.

Note: When a single company commits $50 billion to build AI infrastructure in developing regions, the competitive baseline for digital capacity shifts globally. European institutions operating in or partnering with these regions will encounter a different technology landscape than the one they planned for.

Sources: Reuters

How You’ll Buy Software Is About to Change

Microsoft’s AI Chief: Most “Sitting at a Computer” Work Fully Automated Within 18 Months

Mustafa Suleyman, Microsoft’s head of AI, predicted that most work involving “sitting down at a computer” will be fully automated within 18 months. The statement comes from the company currently deploying Copilot to hundreds of millions of enterprise users and embedding AI across the Office suite, Azure, and Windows.

Note: This isn’t a startup founder pitching investors. It’s the AI lead at the company that supplies productivity software to most of the world’s institutions. Whether the timeline is exact matters less than the planning implication: if even half this prediction holds, workforce planning, training budgets, and role definitions need to be revisited now — not after the tools arrive.

Sources: Fortune

Per-Seat Software Licensing Is Dying — Agents Don’t Need Logins

Per-seat SaaS licensing is giving way to consumption-based pricing as AI agents replace human users across enterprise software. The billing unit is shifting from “user” to “tasks completed” and “tokens consumed.” Snowflake and Databricks are already pricing this way. The shift threatens the predictable recurring revenue model that has defined enterprise software valuations — and the procurement assumptions institutions have built around it.

Note: Procurement teams that negotiate software contracts by headcount are negotiating in a unit of measurement that’s becoming obsolete. Consumption pricing means costs scale with usage, not seats — which can be better or worse depending on how your organization actually uses the tools. Worth understanding before the next renewal cycle.
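One practical way to prepare for that renewal is to compare the two billing units side by side. The sketch below uses invented figures; substitute your own seat count, contract price, and measured task volume.

# Per-seat vs. consumption pricing, side by side. All numbers are
# hypothetical -- swap in your own contract terms and usage data.

seats = 500
price_per_seat_per_year = 300.00           # assumed per-seat list price, USD

tasks_per_year = 1_200_000                 # assumed automated task volume
price_per_task = 0.10                      # assumed consumption rate, USD

per_seat_total = seats * price_per_seat_per_year
consumption_total = tasks_per_year * price_per_task

print(f"per-seat contract:    ${per_seat_total:,.0f}/year")     # $150,000
print(f"consumption contract: ${consumption_total:,.0f}/year")  # $120,000

# Break-even task volume: above this, the per-seat deal is the cheaper unit.
break_even_tasks = per_seat_total / price_per_task
print(f"break-even volume: {break_even_tasks:,.0f} tasks/year") # 1,500,000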

Sources: Financial Times
