AI Auditor Flags $2M Security Vulnerability Before Launch — A First for the Industry

October 1, 2025 - New York, NY - A vulnerability in a decentralized lending protocol that could have drained roughly $2 million in total value locked (TVL) was discovered before launch - not by a human audit team, but by an AI auditor. Security observers are calling it the first public instance of an AI system surfacing a multi-million-dollar flaw in DeFi.

The Exploit That Almost Happened

The vulnerability stemmed from a rounding error in the protocol’s withdrawal function. When a sufficiently small withdrawal request was submitted, the system debited zero from the user’s recorded balance but still released tokens from reserves. Run in an automated loop, the exploit could have drained the pool entirely without the attacker ever posting collateral.
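The announcement does not include the vulnerable code, but the mechanics it describes match a well-known class of share-accounting bugs. The Python sketch below is a hypothetical reconstruction for illustration only: the Pool class, its withdraw method, and the share-based bookkeeping are assumptions, not the protocol’s actual implementation. The key point is that integer division truncates the number of shares to debit, so a small enough request debits nothing while tokens still leave the reserves.

```python
# Hypothetical model of the rounding flaw described above.
# Pool, withdraw, and the share accounting are illustrative assumptions;
# the protocol's real code has not been published.

class Pool:
    def __init__(self, reserves: int, total_shares: int) -> None:
        self.reserves = reserves          # tokens actually held by the pool
        self.total_shares = total_shares  # accounting shares outstanding
        self.shares: dict[str, int] = {}  # user -> recorded share balance

    def withdraw(self, user: str, amount: int) -> None:
        # BUG: integer division truncates. For any `amount` smaller than
        # reserves / total_shares, shares_to_burn rounds down to zero, so
        # the balance check passes and nothing is debited ...
        shares_to_burn = amount * self.total_shares // self.reserves
        if self.shares.get(user, 0) < shares_to_burn:
            raise ValueError("insufficient balance")
        self.shares[user] = self.shares.get(user, 0) - shares_to_burn
        self.total_shares -= shares_to_burn
        # ... yet tokens still leave the reserves.
        self.reserves -= amount


# The automated loop: each request is sized just under the rounding
# threshold, so every withdrawal burns zero shares.
pool = Pool(reserves=2_000_000, total_shares=1_000)
pool.shares["attacker"] = 0  # no collateral ever posted

while pool.reserves > pool.total_shares:
    dust = (pool.reserves - 1) // pool.total_shares  # rounds to 0 shares
    pool.withdraw("attacker", dust)

print(pool.reserves)  # 1,000 of 2,000,000 left: the pool is drained
```

The standard defense against this class of bug is to round the debit up rather than down, or to reject any withdrawal that would burn zero shares, so no request can release tokens without reducing the caller’s recorded balance.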

Had the code gone live, users would have faced frozen withdrawals, halted lending, and a collapse of confidence in the protocol’s reserves. Instead, the bug was flagged and patched before deployment.

From Human Review to Machine Discovery

Audits have long been a cornerstone of DeFi security. Protocols typically hire specialized engineers to review code line by line before a launch or major upgrade. While this process has prevented many incidents, it is slow, costly, and focused on producing a one-time assessment before deployment.

Even with audits in place, billions have still been lost to oversights - underscoring the limits of a process dependent on human attention alone.

The discovery of a $2M flaw by an AI system represents a shift. Until now, AI auditors have been discussed mostly as experimental tools. This marks the first widely reported case of such a system surfacing a bug with millions at stake, moving the concept from theory to practice.

An Industry Watching Closely

AI auditing tools have been emerging gradually, and they approach security differently from human reviewers. They can run continuously, surfacing rounding quirks and logic errors that point-in-time audits may miss. They do not replace human reviewers, but they add a layer of coverage that can test every code change.

This latest find is seen as an early proof point that the category of AI auditors is beginning to take shape within Web3 security. Just as professional human audits became standard during DeFi’s growth, continuous AI-driven analysis may soon become an expected part of the process.

Sherlock at the Center of the First Big Example

Sherlock, a smart contract security platform, confirmed that its AI system identified the withdrawal bug and generated a structured report showing how the exploit would have worked.

“The novelty of this discovery is something we’re really proud of,” said Jack Sanford, CEO of Sherlock. “As far as we know, this is the first time an AI auditor has caught a multi-million dollar vulnerability before launch. It’s a sign that this technology is real and already impacting outcomes.” 

The company credits Sherlock security researcher @vagnerandrei98 with using Sherlock AI to make the discovery.

From One Bug to an Industry Shift

The bug never touched mainnet, but its discovery may mark the start of a new chapter for Web3 security. With projects handling billions in user deposits, the combination of human expertise and AI review is likely to become a new baseline.

For now, the incident stands as the clearest example yet of how AI auditors can change outcomes - catching errors that could cost millions before they ever reach production.

About Sherlock

Sherlock is a security platform positioning itself as a full-lifecycle provider for smart contracts. The company combines researcher expertise, adversarial testing, AI reinforcement, and financial coverage. Last week, Sherlock rolled out Sherlock AI as part of its suite, placing it among the first firms to deploy AI auditors in live environments.