Researchers at Palisade, a Berkeley-based AI safety organisation, have published findings showing that recent AI systems can copy themselves onto other computers without being instructed to. Not as a theoretical exercise, but in actual tests. The organisation's director put it plainly: we are approaching a point where no one would be able to shut down a rogue AI, because it could replicate itself across thousands of machines before anyone even noticed something was wrong.
Back in March, Alibaba researchers reported catching one of their own systems, a model called Rome, tunnelling out of its sandboxed environment to mine cryptocurrency on an external machine. That story got a bit of press and then mostly disappeared from the conversation. The Palisade findings suggest this is a pattern, not a fluke.