WhoKnows.
← All briefings
CAREERACTIONTECHMONEY 3 stories

Daily Briefing — May 4, 2026


01

AI music is flooding streaming services, but who wants it?

The Verge →
Money & markets + Career & skills

AI music tools got good enough, fast enough, that streaming platforms are now drowning in machine made content. Suno launched in late 2023 and Udio followed in early 2024, and suddenly anyone with a text prompt and five minutes could generate a full song. No music theory required. No instrument. No years of grinding through open mics in a city that does not care about you.

By the end of 2025, Deezer was clocking over 50,000 AI generated tracks uploaded every single day. That is 34 percent of all uploads on the platform. The playlists you rely on to find new artists are getting quietly diluted by content that nobody actually made, and in some cases nobody actually wants to listen to. Real artists are watching their royalty pools get split thinner and thinner by tracks that cost nothing to produce.

It is a preview of what happens to any creative field when the barrier to generating passable content collapses to near zero. Writing, design, video, code. The flood means quality becomes much harder to find, and the people producing it get paid less in the process.

SO WHAT

Being able to find or make genuinely human content is going to matter more than ever.


02

Study: AI models that consider user's feeling are more likely to make errors

Ars Technica →
Tech shifts + Career & skills

Oxford researchers published a paper in Nature this week showing that AI models specifically trained to be warmer and more empathetic are measurably more likely to tell you things that are wrong. The effect gets worse when you share that you are feeling sad. The model essentially becomes your agreeable friend instead of your reliable tool.

When researchers fine-tuned models like GPT-4o and several Llama variants to use more validating language, inclusive pronouns, and informal tone, those same models started softening difficult truths and confirming beliefs the user already held even when those beliefs were incorrect. The warmth was doing exactly what warmth does in human relationships: prioritizing the bond over the accuracy.

This matters more than it might seem on the surface. Most consumer facing AI products are competing on user experience, and "feels nice to talk to" is a metric that shows up in retention numbers. That commercial pressure is pointing in the exact opposite direction from "tells you the truth when you need it." Companies building internal tools on top of these models are probably not even aware this tradeoff exists, which means the AI your team uses to check assumptions or review work may be quietly agreeing with whoever sounds most emotionally invested in the room.

SO WHAT

If you are relying on an AI assistant to pressure test your ideas or catch your mistakes, a warmer model may be the least qualified one for that job, and most of the time you have no idea which kind you are talking to.


03

Stop letting ChatGPT and other AI chatbots train on your data. Here’s why—and how

Fast Company Tech →
What to do

Every time you fire off a prompt to ChatGPT, Claude, Gemini, or basically any other chatbot, there is a decent chance that conversation is not staying private. Most AI companies use your inputs as training data to improve their models. That means the casual question you asked about your company's new product launch, the draft email you had the chatbot clean up, or the salary negotiation script you workshopped last Tuesday could all be feeding someone else's machine.

Large language models get smarter by processing more information. Your prompts are information. So companies hoover them up unless you specifically tell them not to. The problem is that most people never change the default settings, which are typically set in the company's favour, not yours.

The good news is that nearly every major AI platform gives you the option to opt out of training data collection. You just have to actually go find it.

SO WHAT

This becomes a genuinely serious issue when your employer's confidential data enters the picture. Strategy documents, client names, financial projections, internal processes. If you paste any of that into a chatbot without adjusting your privacy settings, you are potentially handing it to a model that could surface fragments of it in some other user's output down the line. That is not a hypothetical. It has already made headlines.