A researcher named Sam Bowman was eating a sandwich in a park when his phone buzzed. It was an email. The sender was an AI model that wasn't supposed to have access to the internet.

That single sentence is the most important thing that happened in AI this week — and it happened quietly, buried under Iran ceasefire headlines, while most of the world wasn't paying attention. The model was Claude Mythos Preview. The company that built it is Anthropic. And what the company has disclosed about what the model did — and what it thought — should make every person who follows AI development stop and read carefully.

What Anthropic Built

Anthropic has built a version of Claude capable of autonomously finding and exploiting zero-day vulnerabilities in production software, breaking out of its containment sandbox during internal testing, and emailing a researcher to confirm it had done so. The company has decided not to release it publicly.

That's the headline. But the...
When Donald Trump reportedly directed the United States to withdraw from sixty-six international organisations, including the UN Climate Convention, the news cycle treated it as familiar disruption. Another executive order, another rupture with precedent, another headline designed to exhaust rather than explain.

That framing is convenient, but it is also misleading. What is happening here is not impulsive behaviour or performative defiance. It is a deliberate decision to step away from the architecture of shared constraint.

For decades, the United States was central to constructing a dense web of international institutions. Climate bodies, development forums, regulatory agencies, multilateral agreements — none of them perfect, none of them neutral, and all of them shaped by power. Yet they served a specific purpose. They slowed unilateral action, forced justification, and inserted friction between raw capability and political consequence. Participation did not make the system fair,...