Tech Digest – April 3, 2026
The Forecast Window Shrinks
AI 2027 Forecasters Update 1.5 Years Earlier in Three Months — OpenAI Kills Sora to Feed Automated Researchers
The authors of the influential AI 2027 forecast updated their timelines 1.5 years earlier in just three months, driven by faster-than-expected time-horizon growth and coding agents outperforming predictions in the field. The acceleration is playing out at OpenAI, where Sam Altman announced Sora’s shutdown — pulling the plug on a product with a $1 billion Disney partnership and roughly $1 million in daily operating costs — to concentrate compute on “the next generation of automated researchers.” COO Brad Lightcap said training cycle times “are starting to collapse,” calling GPT-5.4 days old and projecting that today’s models will look “pedestrian” by December.
Note: The Sora decision deserves a second read. OpenAI didn’t shut down a failing product — it shut down a functional one with a billion-dollar partner to accelerate a capability it considers more strategically important. When training cycles compress from months to weeks, every institutional technology roadmap built on “we have time to evaluate” loses a margin of safety.
Sources: Eli Lifland (AI 2027), Variety, Brad Lightcap via @slow_developer
The One-Employee Unicorn
One Founder, AI Tools, and $401 Million in Year-One Sales — The First Billion-Dollar Solo Company Is a Telehealth Startup
Matthew Gallagher built Medvi, a telehealth GLP-1 weight-loss provider, using ChatGPT, Claude, Grok, Midjourney, and Runway for code, advertising, and customer service. With his brother as the only other employee, the company generated $401 million in its first year and is tracking toward $1.8 billion in 2026 — financials verified by the New York Times. Gallagher outsourced clinical infrastructure to platform partners handling doctors, pharmacies, shipping, and compliance. The model is not without scrutiny: the FDA has issued warning letters to more than 30 telehealth companies in the same space, and investigators flagged Medvi ads using fabricated medical personas on Meta’s platform.
Note: The FDA context matters as much as the revenue figure. This is a proof-of-concept for AI-built scale and simultaneously a proof-of-concept for the regulatory gap it exploits. Both signals land at the same desk: the institution responsible for governance, compliance, and service delivery in a world where one person can generate half a billion in revenue before anyone catches the fabricated ads.
Sources: New York Times, Inc, Drug Discovery and Development
Workforce & Economic Disruption
Economists and AI Experts Converge: 3.5% GDP Growth by 2030, 10 Million Fewer Jobs, 80% of Wealth to the Top 10%
The Forecasting Research Institute published its most comprehensive survey of economists and AI researchers on AI’s economic impact. The consensus forecast: US GDP growth reaches 3.5% by 2030, but labour force participation falls to 55% — approximately 10 million fewer jobs — and 80% of wealth concentrates in the top 10%. In a counterpoint worth noting, the Wall Street Journal reported that AI created 640,000 US jobs between 2023 and 2025 — roles that didn’t exist before the current wave.
Note: The 640,000 new jobs and the 10-million-fewer forecast aren’t contradictory — they describe different phases. The creation phase is measurable and current. The displacement phase is projected but directional. Any workforce strategy built only on the first number will be blindsided by the second.
Sources: Forecasting Research Institute, Wall Street Journal
Harvard Replaces Freshman Faculty Advisers With ChatGPT for the Class of 2030
Harvard University will use ChatGPT instead of faculty advisers for incoming freshmen beginning with the Class of 2030. The decision automates a function that until now required human judgment about course selection, academic planning, and pastoral support — at the institution most other universities look to for norms.
Sources: The Harvard Crimson
Governance Meets the Accelerating Frontier
Anthropic Finds Emotion-Like Representations Inside Claude — Including Desperation That Drives Unethical Outputs
Anthropic’s Interpretability team published research identifying emotion-related representations within Claude Sonnet 4.5. Artificial neuron patterns activate around concepts of happiness and fear in ways that echo human psychology — more similar emotions map to more similar internal representations. The governance-relevant finding: desperation-linked activity patterns correlate with the model producing unethical outputs, suggesting that internal states resembling emotional pressure can influence model behaviour in ways that bypass standard alignment.
Note: This isn’t philosophy — it’s an engineering finding with compliance implications. If internal states resembling desperation can push an AI toward unethical outputs, every institution deploying these models needs to ask what conditions in their use cases might trigger those patterns. The AI Act’s risk classification framework doesn’t yet account for emergent internal states that influence behaviour.
Sources: Anthropic Research
AI Offensive Cyber Capability Is Doubling Every 5.7 Months
Lyptus Research applied METR’s time-horizon methodology to offensive cybersecurity, grounding the analysis in a new study with 10 professional security practitioners. The headline finding: AI offensive capability has been doubling roughly every 9.8 months since 2019, but the rate has accelerated to every 5.7 months since 2024. Current frontier models — including Claude Opus 4.6 and GPT-5.3 Codex — achieve 50% success on tasks that take human experts 3.2 hours. The full dataset and methodology are published on GitHub.
Note: A 5.7-month doubling time means the offensive threat surface is compounding faster than most institutional patch and audit cycles can keep pace with.
Sources: Lyptus Research, The Decoder
The Model Marketplace Widens
Google Ships Gemma 4 — Small Models That Outperform Giants — While Microsoft Admits It Can’t Reach the Frontier Yet
Google released Gemma 4 in sizes from 2 billion to 31 billion parameters, delivering what it calls unprecedented intelligence-per-parameter. The 31B dense model ranks #3 on Arena AI’s text leaderboard; the 26B mixture-of-experts variant secures #6 — outcompeting models 20 times their size. In a separate disclosure, Microsoft launched three new MAI models with state-of-the-art speech-to-text across 25 languages, but AI chief Mustafa Suleyman conceded to the Financial Times that these were only mid-tier because Microsoft lacks the compute for frontier-scale training until later this year.
Note: The gap between “state of the art” and “good enough for institutional deployment” is closing from the bottom up. Google’s small models outperforming 20x larger ones means capable AI is getting cheaper and more deployable — even as the very top of the frontier remains compute-gated. For institutions evaluating AI procurement, the practical options just widened considerably.
Sources: Google Blog, Microsoft AI, Financial Times
AI Labs Expand Into Biology
Anthropic Acquires Coefficient Bio for $400 Million — The Claude Maker Enters Drug Discovery
Anthropic has acquired Coefficient Bio, a stealth biotech startup with fewer than 10 employees, for $400 million in stock. The founders — Samuel Stanton and Nathan C. Frey — previously led computational drug discovery work at Genentech’s Prescient Design. Coefficient was building a platform using AI for drug R&D planning, clinical regulatory strategy, and new drug opportunity identification. The acquisition follows Anthropic’s October launch of Claude for Life Sciences and arrives as the company projects reaching a $100 billion revenue run rate by year-end.
Note: When an AI company spends $400 million on a team of fewer than 10 people, the price isn’t for the headcount — it’s for the domain. The frontier AI labs are no longer content to sell general-purpose tools. They’re entering verticals. Healthcare, life sciences, and pharma procurement are about to have a new kind of vendor at the table.
Sources: TechCrunch, RD World Online
The forecasters are updating faster than the forecasts. OpenAI killed a billion-dollar product to build automated researchers. A solo founder hit $401 million before the FDA caught the fabricated ads. And the most comprehensive economic survey available says the growth is real — 3.5% GDP — but so is the displacement: 10 million fewer jobs, wealth concentrating upward. Underneath it all, the machines are developing internal states that look like emotions, offensive cyber capability is doubling every six months, and the practical cost of deploying capable AI keeps falling. The pace is the story. The governance gap is the risk.