WhoKnows.

Daily Briefing — April 23, 2026


01

Anthropic tested removing Claude Code from the Pro plan

Ars Technica →
Money & markets + Career & skills

Anthropic quietly tested pulling Claude Code from its $20 per month Pro plan, and developers noticed immediately. New signups found themselves locked out of the agentic coding tool, while existing subscribers kept access without any disruption. Cue the Reddit threads, cue the X posts, cue the general sense that something sketchy was happening.

Anthropic's head of growth eventually surfaced to explain what was going on, and his reasoning was actually pretty honest. The Pro plan launched as a heavy chat tier, full stop. Then Claude Code got bundled in, Opus 4 made it genuinely useful, long-running async agents became a real workflow for a lot of people, and suddenly the economics of what a $20 subscriber actually consumes look nothing like what Anthropic originally modelled. Usage per subscriber is way up. The math stopped working.

This is a pattern you are going to see more of across AI tooling. These companies built their pricing on assumptions about how people would use their products, and reality turned out to be much more demanding. Agentic workflows especially are nowhere near as cheap to run as a chat session. Something had to give, and the question now is just how far the repricing goes.

SO WHAT

If your day-to-day work depends on AI coding tools, the floor on what these cost is shifting, and "it was included in my plan" is not a guarantee you can build a workflow around.

ACTION ITEM

Audit which AI tools your team actually relies on daily and check whether they sit inside a subscription with any history of mid-cycle changes, so you are not caught flat-footed when the next test rolls out.


02

SpaceX doubles down on AI with its potential $60 billion Cursor buy

Fast Company Tech →
Money & markets + Tech shifts

SpaceX just announced it has entered a working relationship with Cursor, the AI coding startup, and has the option to acquire it for $60 billion. If SpaceX decides to walk away from the deal, it still pays Cursor $10 billion for its work. Either way, a whole lot of money is moving toward AI that writes and debugs code.

SpaceX acquired Elon Musk's xAI, the company behind the Grok chatbot, just two months ago. Now it is layering in Cursor, which competes directly with tools like Anthropic's Claude Code and OpenAI's Codex. SpaceX also has what it calls the Colossus supercomputer, roughly equivalent to a million H100 GPUs. The pitch is basically: take Cursor's product and its existing user base of serious software engineers, plug it into that compute firepower, and build something genuinely dominant in the AI coding space.

Whether or not the Cursor deal closes at full price, SpaceX is clearly signalling that it wants to be taken seriously as an AI company, not just a rocket company, and that positioning matters for any eventual IPO.

SO WHAT

If you work in software development or manage a team that does, the AI coding tool landscape is consolidating fast around players with serious compute backing, which means the tools your team uses today could look very different within 12 months.


03

Google unveils two new TPUs designed for the "agentic era"

Ars Technica →
Tech shifts + Money & markets

Google just announced its eighth generation of Tensor Processing Units, and this time the lineup splits into two distinct chips: the TPU 8t for training and the TPU 8i for inference. The training chips can be linked, up to a million of them, into a single logical cluster, and Google says that scale can cut frontier model training time from months down to weeks.

The framing Google is leaning into is the "agentic era," which is their way of saying that AI workloads are changing shape. Inference, meaning the part where you actually run a model and get outputs, looks very different from training. Splitting the hardware to match those two jobs is a sensible engineering call, and it signals that Google sees these as genuinely different problems now rather than just two points on the same pipeline.
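The split is visible even in a toy model: training runs a forward pass, a backward pass, and a weight update on every step, while inference is a read-only forward pass. A minimal NumPy sketch of that difference (purely illustrative; no relation to TPU internals or Google's actual stack):

```python
import numpy as np

# Toy linear "model": one weight matrix. Illustrative only.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1

def forward(x, W):
    # Inference is essentially just this: a read-only pass through the weights.
    return x @ W

def train_step(x, y, W, lr=0.01):
    # Training adds a backward pass and a weight update on top of the forward
    # pass, which is why it demands far more memory traffic and chip-to-chip
    # communication than serving the same model.
    pred = forward(x, W)
    grad = x.T @ (pred - y) / len(x)   # gradient of mean-squared error w.r.t. W
    return W - lr * grad

x = rng.standard_normal((8, 4))
y = rng.standard_normal((8, 3))

W_trained = train_step(x, y, W)   # training: weights change every step
outputs = forward(x, W_trained)   # inference: weights stay fixed
```

Hardware tuned for the first loop (huge interconnects, gradient traffic) looks very different from hardware tuned for the second (cheap, low-latency forward passes), which is the bet behind shipping two chips.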

Google is essentially building a parallel universe to Nvidia's dominance in AI accelerators. Most of the industry is in a scramble for H100s and B200s, but Google is quietly running its own stack at massive scale. That changes the competitive dynamics for every cloud provider, every AI startup, and frankly every team that has to decide where to run their models.

SO WHAT

If your team is evaluating cloud infrastructure for AI workloads, Google's decision to split training and inference into dedicated hardware is about to make those conversations more complicated and more interesting.