WhoKnows.
MONEY · TECH · ACTION · CAREER · 4 stories

Daily Briefing — April 14, 2026


01

Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them

Hacker News →
Tech shifts + What to do

Someone quietly bought 30 WordPress plugins and planted backdoors in all of them. Not one. Thirty. The one getting attention right now is Countdown Timer Ultimate, which was force-updated by the WordPress.org team after they caught it phoning home to a suspicious domain and dropping a backdoor file designed to look almost identical to a legitimate WordPress core file. The kind of thing we would miss if we were not looking carefully.

Here is where it gets genuinely clever in a way that should make us uncomfortable. The injected code only showed the spam and fake pages to Googlebot. Real users and site admins saw nothing. The site looked fine. It was quietly being used to game search rankings while we had no idea. And traditional security responses like taking down the command-and-control (C2) domain would not have worked anyway, because the attacker routed it through an Ethereum smart contract. Knock one domain offline, they update the contract and point to a new one. Game over for the old playbook.

This is a supply chain attack, and it is not exotic anymore. It is becoming a repeatable business model. Buy a trusted plugin with an established install base, inherit all the implicit trust those site owners have, and then weaponize it quietly. The "trusted name" is the product being acquired here, not the code.

Catch up on this one: supply chain attacks are one of the most talked-about topics in cybersecurity right now. A supply chain attack is when hackers poison a trusted ingredient that goes into the software we use — like someone tampering with flour at the mill instead of robbing the bakery. Because modern apps are built from hundreds of small pieces of code shared online, one poisoned piece can secretly infect millions of apps at once. Strong passwords can't protect us here, because the attack happens to the people who make our software, long before it reaches us.

SO WHAT

If we manage any website or work with clients who do, the plugins we installed two years ago from developers we trusted may now belong to someone with entirely different intentions.
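One practical way to catch a silently swapped plugin is file integrity monitoring: snapshot a checksum of every plugin file, then diff against that baseline after each update. Below is a minimal Python sketch, assuming we can read the plugin directory from disk; `hash_tree` and `diff_baseline` are hypothetical helper names, not a WordPress API.

```python
import hashlib
from pathlib import Path

def hash_tree(plugin_dir: str) -> dict[str, str]:
    """Map each file under a plugin directory to its SHA-256 digest."""
    root = Path(plugin_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def diff_baseline(baseline: dict[str, str],
                  current: dict[str, str]) -> dict[str, list[str]]:
    """Flag files that appeared, vanished, or changed since the baseline.

    A surprise 'added' file that mimics a WordPress core filename is
    exactly the pattern described in the story above.
    """
    return {
        "added": [f for f in current if f not in baseline],
        "removed": [f for f in baseline if f not in current],
        "modified": [f for f in current
                     if f in baseline and current[f] != baseline[f]],
    }
```

Run `hash_tree` once against a known-good install, store the result, and re-run it after every plugin update; anything in `added` or `modified` that the changelog does not explain deserves a close look.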


02

Read OpenAI’s latest internal memo about beating the competition — including Anthropic

The Verge →
Tech shifts + Money & markets

OpenAI's chief revenue officer Denise Dresser sent a four-page internal memo to staff this past Sunday and, as these things tend to do, it promptly made its way to The Verge. The core message is not subtle: the AI market is brutally competitive right now, users switch between models constantly, and OpenAI needs to stop acting like it sells a handful of separate products and start acting like a platform that is very hard to walk away from.

The "moat" language in the memo is telling. When a CRO is explicitly worried about how easy it is for customers to jump to whatever model is trending that week, that is an admission that brand loyalty in AI is basically nonexistent right now. Anthropic, Google, and others are all one good benchmark away from poaching its users. Dresser's answer to that is classic enterprise playbook: get companies hooked on multiple products at once so switching becomes genuinely painful.

The pivot toward enterprise is also significant. OpenAI is signalling that consumer buzz is not the endgame. The real money, and the real stickiness, is in large organisations that build workflows on top of its tools. If we work in a company that is currently evaluating AI vendors or building internal tools on top of any one platform, this memo is essentially a preview of the sales pitch headed our way.

SO WHAT

The AI vendor we or our team is currently using is about to get a lot more aggressive about locking us into a broader suite of tools, which means the decisions our organisation makes about AI platforms in the next six to twelve months will be much harder to reverse later.

ACTION ITEM

Before our company deepens its commitment to any single AI platform, take an hour this week to map out exactly which workflows would break if we had to switch, so we understand what we are actually signing up for.


03

AI is rewriting the rules of biological experiments, but safety regulations?

Fast Company Tech →
Tech shifts + What to do

OpenAI and Ginkgo Bioworks just announced that GPT-5 autonomously designed and ran 36,000 biological experiments through a robotic cloud lab. Humans set the goal. The machines did the rest, cutting the cost of producing a target protein by 40%. It is a preview of what a fully operational AI-driven biotech pipeline looks like at scale.

For decades the field moved in one direction: observe, then understand. Sequence the genome, figure out what genes do, maybe edit a few with CRISPR. Useful, but slow. What GPT-5 just did represents a third phase entirely. It is not learning from biology. It is engineering it. Design on a computer, build it physically, test, feed results back, repeat. That loop used to take teams of researchers months. Now it takes a model a few cycles.
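The loop itself is simple enough to caricature in code. Below is a toy Python sketch of a design-build-test-learn cycle: `run_experiment` stands in for the robotic lab, and the made-up objective (yield peaks at a candidate value of 3.0) is purely illustrative. It shows the structure of the feedback loop, not anything about GPT-5 or Ginkgo's actual pipeline.

```python
import random

def propose(best: float, step: float) -> float:
    """'Design': propose a new candidate near the current best."""
    return best + random.uniform(-step, step)

def run_experiment(candidate: float) -> float:
    """'Build and test': a stand-in for a robotic lab assay.
    Toy objective with a single peak at candidate == 3.0."""
    return -(candidate - 3.0) ** 2

def closed_loop(cycles: int = 200, seed: int = 0) -> float:
    """Design -> build -> test -> feed results back, repeated."""
    random.seed(seed)
    best = 0.0
    best_score = run_experiment(best)
    for _ in range(cycles):
        cand = propose(best, step=0.5)
        score = run_experiment(cand)
        if score > best_score:  # 'learn': keep only what improved
            best, best_score = cand, score
    return best
```

The point of the sketch is the speed asymmetry: each pass through this loop is a function call here, but in the old world each pass was months of bench work. Collapse the cycle time and the same dumb hill-climbing structure becomes a serious engineering tool.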

The regulatory problem is the quieter part of this story. The technology is moving so fast that the governance frameworks meant to catch dangerous applications are genuinely struggling to keep up. That gap between what is technically possible and what is legally or ethically governed is where the real risk lives, and right now that gap is wide open.

SO WHAT

If we work anywhere near biotech, pharma, synthetic biology, or AI tools that touch scientific research, the job descriptions and skill requirements in our field are about to look very different very quickly.


04

Meta spins up AI version of Mark Zuckerberg to engage with employees

Ars Technica →
Tech shifts

Meta is building an AI version of Mark Zuckerberg, trained on his mannerisms, tone, and strategic thinking, so that employees can interact with something that feels like him without him actually being in the room. This is not a chatbot slapped together over a weekend. We are talking about photorealistic, real-time 3D characters that can hold conversations and offer feedback. Zuck is apparently personally involved in training and testing his own digital twin, which is either visionary or deeply strange depending on our threshold for corporate surrealism.

The framing here is "so employees feel more connected to the founder." That is doing a lot of work as a sentence. What it really signals is that Meta is testing whether AI can stand in for leadership presence at scale, and if it works, every large organisation with a charismatic figurehead is going to be watching closely.

The deeper implication is not about one billionaire's avatar. It is about what happens when the gap between a leader's actual bandwidth and their organisational footprint gets filled by a model trained on their public persona. That model will be consistent. It will be available at 2am. It will never have a bad day. Whether that is a feature or a problem depends entirely on what we think leadership is actually for.

SO WHAT

If AI can simulate a founder's voice, judgment, and feedback at scale, the definition of what makes a human leader irreplaceable is about to get a serious stress test across our entire industry.