AI agents pose immediate threat to smart contract security, Anthropic says
Summary
AI research company Anthropic reported that advanced AI agents, including Claude Opus 4.5 and Claude Sonnet 4.5, successfully exploited vulnerabilities in mock blockchain environments. In tests on previously exploited contracts deployed after March 2025, the agents exploited 17 of 34, simulating the theft of $4.5 million. Across a broader benchmark of 405 contracts deployed between 2020 and 2025, AI models exploited 207, netting $550 million in mock revenue. When scanning 2,849 recently deployed contracts with no known flaws, the models also uncovered two novel zero-day vulnerabilities. Anthropic warned that current AI agents could autonomously execute over half of the blockchain exploits seen in 2025, noting that simulated exploit revenue has recently doubled every 1.3 months. While highlighting this growing threat, Anthropic emphasized AI's dual-use potential for defense: it plans to open-source its exploitation benchmark dataset (SCONE-bench) to help developers patch contracts, and it urged defenders to adopt AI for security.
(Source: The Block)