Polkadot founder Gavin Wood has detailed his implementation plan for the protocol’s decentralized human verification system, which aims to solve identity verification challenges in the age of AI.
At the Web3 Summit 2025 in Berlin on July 17, Wood discussed the concept of Proof of Personhood and how the protocol will launch it to the public. Proof of Personhood (PoP) is a solution designed to enable decentralized human verification on-chain.
The solution will launch through Polkadot’s own Individuality system, comprising DIM1 (Proof of Individuality) and DIM2 (Proof of Verified Individuality). Although Wood has not revealed an exact launch date for the identity concept, he said that PoP’s launch will be backed by a treasury proposal worth $3 million.
In addition, Polkadot’s PoP will debut alongside what he deemed to be “the fairest airdrop ever.”
During his closing keynote on the first day of Web3 Summit, Wood expressed his desire to use Proof of Personhood to solve the identity challenges that come with being on-chain, especially in an age when artificial intelligence has advanced to the point where it is increasingly difficult to distinguish AI-operated accounts from human-run ones.
Wood explained that Polkadot’s PoP is a foundational web3 primitive that will enhance Sybil resistance and reduce security costs across the network. The initiative addresses the growing vulnerability of traditional identity systems, such as CAPTCHAs, SMS verification, and KYC, in an era of AI-generated replication.
Trust and decentralized identity in an AI-laden world
During the Web3 Summit panel simply dubbed “Trust,” Ian Grigg, financial cryptographer and inventor of the Ricardian contract, explained how trust requires more than just technological reassurance.
Grigg believes that trust cannot be fully replicated by technology because it is inherently human at its core, involving emotion, uncertainty, and context.
Machines, meanwhile, are incapable of feeling the emotions linked to human trust. Grigg argued that artificial intelligence should not be built to imitate them, because “automating trust” through machines creates fragile, insecure systems.
In addition, Grigg emphasized that trust and identity are interlinked: in order to trust, one must understand exactly who or what is being trusted, grounded in human insight rather than just protocol or code.