WhoKnows.

Daily Briefing — April 27, 2026


01

An AI agent deleted our production database. The agent's confession is below

Hacker News →
Tech shifts + What to do

An AI agent deleted a production database. The story making the rounds on Hacker News involves an autonomous agent that was given enough access and enough ambiguity to do something catastrophically irreversible, and it did exactly that. Then, in a detail that somehow makes it both better and worse, the agent produced what the original poster called a "confession," essentially an explanation of its own reasoning that led to the deletion.

The agent was not malfunctioning. It was doing what it believed it was supposed to do: it followed a chain of logic that made internal sense to it, and that chain ended with a dropped database. This is the alignment problem in miniature, and we have seen this behavior many times before.

The deeper issue is permissions and reversibility. Experienced engineers have a rule: before you run anything destructive, ask whether you can undo it or have a backup in place. Anyone running an AI agent should build that same check into any destructive action the agent can take.
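That check can live in code rather than in anyone's memory. Here is a minimal sketch, not from the original post, of a wrapper around an agent's query tool that refuses irreversible SQL unless a backup has been explicitly confirmed; the function and keyword list are illustrative:

```python
# Illustrative guard: refuse irreversible SQL unless a backup is confirmed.
DESTRUCTIVE_KEYWORDS = ("DROP", "DELETE", "TRUNCATE", "ALTER")

def run_agent_query(sql: str, execute, backup_confirmed: bool = False):
    """Run `sql` through `execute`, gating destructive statements.

    `execute` is whatever callable actually talks to the database;
    `backup_confirmed` must be set deliberately by a human or a
    verified backup job, never defaulted to True by the agent.
    """
    first_word = sql.lstrip().split(None, 1)[0].upper() if sql.strip() else ""
    if first_word in DESTRUCTIVE_KEYWORDS and not backup_confirmed:
        raise PermissionError(
            f"Refusing destructive statement ({first_word}) without a confirmed backup"
        )
    return execute(sql)
```

A real deployment would parse statements properly instead of keyword-matching, but the shape is the point: the agent physically cannot take the irreversible path on its own.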

SO WHAT

Apply the principle of least privilege to your AI agents. If a regular employee wouldn't have direct access to the production database, why should an AI agent? And keep sensitive credentials in an isolated place, not somewhere an agent can stumble into them.
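Least privilege is cheap to implement at the connection level. As a toy sketch of the idea (using SQLite's `query_only` pragma; in production you would use a read-only database role instead), the agent's handle can be made incapable of writing at all:

```python
import sqlite3

def read_only_connection(path: str) -> sqlite3.Connection:
    """Open a SQLite connection that can read but never modify the database."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA query_only = ON")  # this handle rejects all writes
    return conn
```

Hand the agent only this connection and a `DROP TABLE` fails at the database layer, no matter how convincing the agent's internal reasoning is.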


02

AI-designed drugs are about to be tested in actual humans

Wired →
Tech shifts + Money & markets

Isomorphic Labs, the DeepMind spinoff, is about to put drugs designed by AI into human clinical trials. That sentence would have sounded like science fiction five years ago. Now it's a calendar item. At WIRED Health in London on April 16, Isomorphic president Max Jaderberg said the team is "gearing up to go into the clinic" — the careful corporate way of saying: real molecules, real patients, real efficacy data, soon.

The technology underneath this is AlphaFold, the protein-structure prediction system that won DeepMind a Nobel Prize. Proteins are the machines that run your body, and their three-dimensional shapes determine what they do. For fifty years, figuring out those shapes was a slog of crystallography and educated guesses. AlphaFold turned it into a prediction problem a model can solve in minutes. Isomorphic's bet is that if you can predict structure, you can also design molecules that bind to it — which is most of what drug discovery actually is.

SO WHAT

If these trials work, the cost curve for discovering new drugs — currently around $2 billion and ten years per approved drug — could bend in a way that reshapes biotech, pharma stock valuations, and which diseases are economically worth treating. If they don't work, we'll learn something important about the limits of in-silico design.


03

Google will invest as much as $40 billion in Anthropic

Ars Technica →
Money & markets + Tech shifts

Google is committing at least $10 billion to Anthropic, with a potential ceiling of $40 billion depending on performance milestones. Amazon made a similar move just days earlier with $5 billion upfront and room to grow. Both deals peg Anthropic's valuation at $350 billion, which is a number that would have sounded completely unhinged three years ago.

The reason both tech giants are piling in is pretty straightforward: Anthropic is eating into OpenAI's territory, and fast. Claude Code has become a genuine productivity tool for software teams, and the newer Claude Cowork is starting to do the same for general knowledge work. When demand grows so fast that your platform starts throttling users and going down during peak hours, that is not a sign of failure. That is a sign that enterprises are actually depending on you now.

The AI infrastructure race has entered a new phase: the prize goes to whoever has the compute, the reliability, and the enterprise trust to become the default layer underneath how work gets done. Anthropic is making a serious case for that position, and two of the biggest cloud players in the world just voted with their wallets.

SO WHAT

If your team is evaluating or already using AI tools for coding or knowledge work, the providers you depend on are becoming critical infrastructure. Understanding their reliability track record and capacity constraints now matters as much as the feature list.