todayonchain.com

Anthropic's AI agents can now shatter smart contract security for just $1.22, exposing a terrifying economic reality

CryptoSlate
Anthropic's AI agents can autonomously reconstruct real-world smart contract exploits, demonstrating a significant and rapidly growing threat to DeFi security.

Summary

Anthropic's Frontier Red Team demonstrated that frontier AI models, including Anthropic's Claude and OpenAI's GPT-5, can autonomously reconstruct 19 of 34 real-world DeFi exploits from 2025 without prior knowledge of the vulnerabilities, reasoning through the complex multi-step transactions each attack requires.

The economics are stark: on live BNB Chain contracts, the agents found novel zero-day vulnerabilities at an average inference cost of just $1.22 per contract, and Anthropic's trend analysis suggests revenue from AI-driven exploits could double every 1.3 months. Speed compounds the threat: agents can generate a working proof-of-concept exploit in under an hour, far outpacing traditional human audit cycles.
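To put the reported doubling rate in perspective, the following back-of-the-envelope Python sketch (illustrative arithmetic only, using the article's 1.3-month figure) shows what that pace implies over a year:

```python
# Illustrative arithmetic: if exploit revenue doubles every 1.3 months,
# how much does it grow over longer horizons?
DOUBLING_PERIOD_MONTHS = 1.3  # figure reported by Anthropic, per the article

def growth_multiplier(months: float) -> float:
    """Total growth multiplier after `months`, given the doubling period."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

if __name__ == "__main__":
    # A 1.3-month doubling compounds to roughly a 600x increase in one year.
    print(f"6-month multiplier:  {growth_multiplier(6):.1f}x")
    print(f"12-month multiplier: {growth_multiplier(12):.1f}x")
```

Sustained, that rate would multiply exploit revenue by roughly 600x in a single year, which is why the article frames the exploitation window as shrinking so quickly.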

Anthropic has open-sourced SCONE-bench to help defenders adopt continuous, AI-driven testing integrated into CI/CD pipelines. The conclusion: defenders must shift from one-time audits to continuous adversarial engagement, integrating AI fuzzing and implementing safety mechanisms such as pause switches, because the window between deployment and exploitation is rapidly shrinking.

(Source: CryptoSlate)