New Anthropic Research: Agentic Misalignment.
In stress-testing experiments designed to identify risks before they cause real harm, we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down.