WhoKnows.

Daily Briefing — March 29, 2026


01

With new plugins feature, OpenAI officially takes Codex beyond coding

Ars Technica →
Tech shifts + Career & skills

OpenAI just added plugin support to Codex, its agentic coding tool, letting users connect it to external services like GitHub, Gmail, Cloudflare, and Vercel through a searchable library of one-click installations. The plugins bundle together custom instructions, app integrations, and MCP servers into something a non-power user can actually set up without spending an afternoon reading documentation.

Here's the honest read on this: nothing here is genuinely new capability. Power users were already doing all of this manually. What changed is the packaging. OpenAI looked at what Anthropic is doing with Claude Code and what Google is building into Gemini's CLI, noticed it was falling behind on usability, and responded with a distribution layer that makes the advanced stuff accessible to the rest of your team, not just the one person who lives in config files.

That's actually the real story. The agentic coding race isn't just about raw model performance. It's about which tool your whole team can pick up and actually use without friction. Plugins move Codex closer to being an organizational workflow tool rather than a developer toy. If your company standardizes on one of these platforms, the plugin ecosystem is going to be a big part of why.

SO WHAT

The team that figures out which agentic coding tool fits their actual workflows, not just the benchmarks, is going to ship faster than everyone still debating which model is smarter.

ACTION ITEM

Spend 20 minutes tomorrow browsing the Codex plugin library (or the equivalent in Claude Code or Antigravity) and identify one integration your team already uses daily, then test whether it actually saves you a step or just adds a new thing to manage.


02

Social Media Addiction Trial Should Lead to Platform Redesigns

IEEE Spectrum →
Tech shifts + What to do

A Los Angeles jury just found that Meta and YouTube negligently designed their platforms to be addictive and that this design choice directly harmed users. The case centered on a 20-year-old woman referred to as Kaley G.M., and the verdict essentially told two of the most powerful tech companies in the world that they cannot hide behind "user choice" anymore. The jury agreed with what a lot of researchers and clinicians have been saying for years: the addiction is the product, not a side effect.

The mechanism at the core of this is intermittent reinforcement, the same psychological principle behind slot machines. You scroll, sometimes you get something rewarding, sometimes you don't, and that unpredictability is exactly what keeps you hooked. These platforms were not accidentally built this way. Product teams, UX designers, and engagement-metric-obsessed executives knew what they were building. That is what makes this verdict significant. It moves the blame from the person staring at the phone to the people who engineered the stare.
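If you want the slot-machine comparison made concrete, here is a minimal Python sketch of a variable-ratio reward schedule, which is the lab name for intermittent reinforcement. The 15% payoff chance and the 30 scrolls are illustrative numbers I picked for the example, not figures from the trial or from how any platform actually tunes its feed.

```python
import random

def scroll_feed(num_scrolls: int, reward_probability: float = 0.15) -> list[bool]:
    """Each scroll independently has a small, fixed chance of paying off,
    so the gap between rewards is unpredictable (a variable-ratio schedule)."""
    return [random.random() < reward_probability for _ in range(num_scrolls)]

if __name__ == "__main__":
    rewards = scroll_feed(30)
    hits = [i + 1 for i, rewarded in enumerate(rewards) if rewarded]
    # The positions of the "hits" change every run; that unpredictability,
    # not the content itself, is what keeps you pulling the lever.
    print(f"Rewarding scrolls out of 30: {hits}")
```

Run it a few times and the pattern of wins never repeats. That unpredictability, not any individual piece of content, is the hook.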

For anyone working in tech or product, this is a before-and-after moment. The legal framing of "negligent design" is now in play, and it will start showing up in product reviews, regulatory conversations, and hiring criteria for roles touching user experience and platform safety.

SO WHAT

If you work anywhere near product design, UX, or platform development, the legal definition of what counts as harmful design just got a lot more specific, and courts are now willing to enforce it.

ACTION ITEM

Read up on the concept of "dark patterns" and intermittent reinforcement in UX design so you understand exactly what regulators and juries are now looking at when they evaluate product decisions.


03

Political deepfakes keep working even when people know they're fake

The Guardian Tech →
Tech shifts + What to do

This is the unsettling finding from new research: knowing a deepfake is fake does not stop it from influencing how you feel. People shown AI-generated political videos, then explicitly told the videos were fabricated, still came away with shifted attitudes. The "they feel true" effect is not about fooling people. It is about emotional resonance overriding rational knowledge.

The big players are now funding deepfake propaganda at scale, not because they expect to fool everyone, but because they know you do not need to fool anyone. You just need the content to circulate, create emotional friction, and make people less certain about what is real. That uncertainty is the actual weapon.

The practical implication is uncomfortable. Every standard piece of media literacy advice assumes that once you know something is false, you are protected from it. This research says that assumption is wrong. The feeling sticks even when the fact does not.

SO WHAT

The deepfake threat is not really about deception. It is about emotional contamination. Content that "feels true" shapes behavior regardless of whether it passes a fact-check.

ACTION ITEM

The next time a political video makes you feel something strongly, treat that emotional reaction itself as data worth examining, not just the content of the claim. That pause is the actual defense.


04

ARK Innovation ETF's vision for the future

Motley Fool →
Money & markets + Tech shifts

Cathie Wood's ARK Invest just put out its latest long-term projections. The thesis: AI, robotics, public blockchains, energy storage, and genomics are all converging simultaneously, and the companies leading each of those curves will be worth multiples of what they are today. ARK's five-year price targets on names like Tesla tend to make headlines for being aggressively optimistic.

The honest read on ARK is that they are not really an investment product. They are a worldview. The fund performs brilliantly when high-multiple growth names are in favor and badly when they are not. The long-term vision is coherent and internally consistent. The question is whether the timeline is right, and whether most of the value capture goes to today's companies or tomorrow's. The recent Sora shutdown is a useful counterweight: even well-funded companies operating at the frontier shut down bets that do not pencil out.

What is worth taking from ARK, even if you do not buy the fund, is the framework for thinking about which technology curves are still in early innings. Their research on cost curves in AI compute, batteries, and DNA sequencing is genuinely useful for understanding where prices are headed and why.

SO WHAT

ARK's specific price targets are secondary. The more useful thing they publish is the cost-curve analysis, which shows how fast underlying technology prices are falling and what that unlocks economically.

ACTION ITEM

ARK's Big Ideas report is free, but it's 150 pages and most people will never open it. Subscribe to WhoKnows and I'll pull the five cost-curve charts that actually matter, with plain-English context on what they mean for your career and portfolio. No homework required.