WhoKnows.
TECH · CAREER · MONEY · 4 stories

Daily Briefing — April 1, 2026


01

If OpenAI is to float on the stock market this year, it needs to start turning a profit

The Guardian Tech →
Money & markets

OpenAI, the company behind ChatGPT and currently valued at an eye-watering $850 billion, is gearing up for a potential stock market float later this year. The catch? It is nowhere near profitable, and it is reportedly burning through cash at a pace that would make even the most optimistic investor's eye twitch, with infrastructure spending alone projected at $600 billion through 2030. For context, Uber was considered a profligate spender before it turned a profit, and it burned through about $30 billion. OpenAI is on a different planet entirely.

To get its house in order before going public, the company has been making some fast and fairly brutal calls. Three business areas have been cut in the past month. A fourth has been quietly acknowledged as underwhelming. Sam Altman and the leadership team are clearly trying to show the market they can exercise some strategic discipline, not just cast the widest net imaginable and hope for the best.

SO WHAT

The tools, integrations, and workflows your team has built around OpenAI products could shift significantly as the company ruthlessly prioritises what actually makes money ahead of a public listing.

ACTION ITEM

Spend 20 minutes tomorrow mapping out which OpenAI features or products your work currently depends on, and identify one open source or alternative tool you could lean on if that feature gets deprioritised or paywalled.


02

Oracle reportedly undergoing round of layoffs amid push toward AI

USA TODAY →
Career & skills + Tech shifts

Oracle quietly cut what appears to be thousands of jobs at the end of March, with affected employees surfacing on LinkedIn almost simultaneously on March 31. The roles hit include software engineers and cybersecurity professionals, and the cuts reportedly touched workers in the US, Canada, and India. Oracle has said nothing officially, and as of the day the news broke, no notice had been filed under the WARN Act, the law that requires employers to give workers 60 days of advance notice before a mass layoff. That silence is doing a lot of heavy lifting right now.

The framing here is familiar: reduce costs, invest in AI. You've heard this story before. What makes Oracle's version worth paying attention to is that they're cutting cybersecurity and engineering talent specifically, two areas that were supposed to be somewhat insulated from the AI displacement wave. That assumption is starting to look shakier by the week.

The broader pattern is that large enterprise tech companies are essentially running a substitution play. They are moving budget away from headcount and toward AI infrastructure. Oracle is building out its cloud and AI business aggressively, and the people who built the old systems are, in a lot of cases, being shown the door while that happens.

SO WHAT

If you work in enterprise software, cloud infrastructure, or cybersecurity at a large tech company, this is a direct signal that no role category is off the table when AI spending is on the line.

ACTION ITEM

This week, map out which parts of your current role could realistically be augmented or replaced by AI tools, and start building visible skills in at least one of those areas before someone else makes that decision for you.


03

Tesla Admits Its Robotaxis Are Sometimes Driven by Remote Humans

Wired →
Tech shifts

Tesla and a handful of other autonomous vehicle companies have admitted what a lot of people in the industry already suspected: there are humans in the loop. Letters submitted to Senator Ed Markey as part of a federal investigation revealed that Tesla, Zoox, Nuro, and others all use what they call "remote assistants" — real people who take over or guide vehicles when the AI gets confused, stuck, or faces something it cannot handle. So the fully autonomous dream has a very human safety net stitched underneath it.

The uncomfortable part is not that the safety net exists. The uncomfortable part is that nobody would say how often it gets used. Every single company refused to disclose how frequently their vehicles need human intervention. That is a pretty significant gap between the marketing story and the operational reality.

For anyone working in AI, product, or any tech-adjacent role, this is a useful reminder that "autonomous" is almost never a binary. It is a spectrum, and right now a lot of companies are selling the top of that spectrum while quietly operating somewhere in the middle.

SO WHAT

If your work touches AI products, automation claims, or anything that gets marketed as "intelligent," understanding where human oversight is actually still required puts you ahead of the next wave of scrutiny that is coming for this entire category.

ACTION ITEM

Read up on the SAE levels of driving automation (levels 0 through 5) so you can cut through autonomy marketing claims in your own industry and ask smarter questions about where the human handoffs actually live.


04

Anthropic Accidentally Leaks Claude Code's Entire Source Code via npm

Tech shifts

Security researcher Chaofan Shou discovered that Anthropic shipped a source map file (`.map`) inside Claude Code's npm package (v2.1.88) — a 57MB file that mapped back to the full, unobfuscated TypeScript source. The file pointed to a zip archive on Anthropic's Cloudflare R2 bucket, exposing 1,900 files and 512,000 lines of code. Within hours, the codebase was mirrored to a public GitHub repo that racked up 1,100+ stars and 1,900+ forks.

What the code revealed is arguably more interesting than the leak itself. Claude Code isn't just an API wrapper — it's a production-grade agentic system: ~40 permission-gated tools, a 46,000-line query engine handling all LLM calls, multi-agent orchestration that spawns sub-agents with isolated contexts, and a bidirectional JWT-authenticated bridge to VS Code and JetBrains. It runs on Bun (not Node.js), uses React with Ink for terminal rendering, and Zod v4 for schema validation. The architecture reveals just how much engineering goes into making "AI that writes code" actually work safely.

Anthropic's response: human error in a release packaging step, not a security breach. No customer data or credentials were exposed. The fix is trivial (exclude `.map` files from npm publishes), but the damage is done. Competitors now have a detailed blueprint of how a leading AI coding tool is built.
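A minimal sketch of the allowlist approach: rather than relying on `.npmignore` to block files, declare an explicit `files` field in `package.json` so only named artifacts ever reach the registry. The package name and paths below are hypothetical; note that npm always includes `package.json`, the README, and the LICENSE regardless of this list.

```json
{
  "name": "my-cli",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/**/*.js",
    "bin/"
  ]
}
```

Because `dist/**/*.js` does not match `.map` files, source maps sitting next to the bundles stay out of the tarball even when they exist on disk. An allowlist fails closed; an ignore file fails open.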

SO WHAT

This is a masterclass in why build pipelines matter. One misconfigured `.npmignore` or `package.json` `files` field and your entire proprietary codebase is public. If you ship anything to npm, run `npm pack --dry-run` before every publish. If you're an AI startup, your architecture is now the product, and Anthropic just open-sourced theirs by accident.

ACTION ITEM

If you maintain npm packages at work, audit your publish pipeline today. Check that source maps, internal configs, and test fixtures aren't shipping to the registry. It takes 5 minutes and could save your company from this exact headline.
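That audit can be scripted. Here is a sketch, assuming `npm` is on the PATH; the demo package name and the grep patterns for "sensitive" files (`.map`, `.env`, `fixtures/`) are illustrative, not a complete list. `npm pack --dry-run` prints the full tarball contents without publishing anything, which makes it easy to gate on:

```shell
#!/bin/sh
# Sketch of a pre-publish leak check (package name and patterns are illustrative).
# Build a throwaway package containing a stray source map to show the check firing.
set -eu
dir=$(mktemp -d)
cd "$dir"
printf '{"name":"demo-pkg","version":"0.0.1"}\n' > package.json
echo 'console.log("hi")' > index.js
echo '{}' > index.js.map   # the kind of file Anthropic shipped by accident

# --dry-run lists everything that would go into the tarball, publishes nothing.
files=$(npm pack --dry-run 2>&1)

# Fail loudly if anything sensitive would ship to the registry.
if echo "$files" | grep -Eq '\.map$|\.env|fixtures/'; then
  echo "LEAK: sensitive files would be published"
else
  echo "clean"
fi
```

Wiring the same check into a `prepublishOnly` script in CI means a leak fails the build instead of shipping, which is exactly the release-packaging step where Anthropic's process broke down.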