Here are five reflections on how AI is reshaping the foundation of trust, and how we can turn this challenge into one of civilization's greatest upgrades.
1. Defaulting to Disbelief.
The old saying "seeing is believing" is dead and buried. When I see any image or video these days that looks fishy, I instantly ask: "Is it real?"
This isn't paranoia; it's pattern recognition. Google's Nano Banana can generate images for $0.039 per API call with character consistency that fools the human eye. VEO-3 is generating hyper-realistic 8-second video clips that pass the smell test for the vast majority of viewers. Some estimates suggest that 90%+ of everything we see online will be AI-generated by the end of 2026.
So, what do we do? We now have the opportunity to train our minds—and our tools—to seek deeper verification. While the cost of deepfakes is dropping fast, so too is the potential for scalable authentication.
Our new standard: "I've checked, therefore I believe."
2. Trust is not a want… it's a fundamental need.
Trust allows society to function peacefully, and democracy depends on having a shared reality. When citizens can't agree on basic facts because they literally can't trust their own eyes, the entire system breaks down.
Financial markets rely on trust. Legal systems depend on evidence. Scientific progress requires reproducible results.
Our call to action, then: rather than fearing trust's erosion, AI-powered entrepreneurs must build new trust architectures: transparent markets with blockchain-verified transactions, scientific research instantly reproducible and peer-audited by AI, and democratic systems rooted in verifiable truth.
3. AI image models are industrializing deepfakes.
We're moving from "someone with technical skills can create a fake video" to "anyone with $20 can generate thousands of fake videos in an afternoon."
But while deepfakes are easier than ever to create, and bad actors will weaponize them, the cost of large-scale detection is plummeting as well. What once took experts days can now be flagged by algorithms in seconds.
Just as email spam filters evolved, so will "truth filters." The same engines that generate synthetic content can also be harnessed to defend reality, empowering billions of people with real-time protection.
4. We need clear ethics, moral guidelines, and new laws to help protect truth and democracy.
The technology to verify authenticity already exists: cryptographic provenance, blockchain certification, digital watermarking. The opportunity is not just to prevent chaos but to create a new layer of the internet: an internet of trust.
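The provenance idea named above can be sketched in a few lines. The toy below hashes a media file and attaches a keyed tag that a verifier can recompute; changing even one byte of the content breaks verification. This is a simplified stand-in: real provenance systems (C2PA-style manifests, for example) use asymmetric signatures so that anyone can verify without holding the publisher's key, and the key and function names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key, for illustration only. Real provenance systems
# use asymmetric signatures (publisher signs, anyone verifies).
PUBLISHER_KEY = b"demo-shared-secret"

def sign_content(content: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Hash the content, then tag the hash with a keyed MAC."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_content(content, key), tag)

original = b"frame bytes of an authentic video"
tag = sign_content(original)

print(verify_content(original, tag))          # True: untampered
print(verify_content(original + b"x", tag))   # False: one byte changed
```

The flow (hash, sign, check) is the same regardless of the signature scheme; what changes at internet scale is key distribution, which is exactly the infrastructure layer the platforms below would need to build.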
But this layer needs to be built rapidly by the companies serving digital content to billions: Microsoft, Apple, Google, and Meta, to name a few. The business opportunity is huge, but time is of the essence.
By the time deepfakes seriously damage an election or crash a market, it may be too late to implement solutions. The infrastructure for truth verification needs to be built before the crisis, not during it.
5. It's up to us whether AI becomes a force for good or chaos.
The technology itself is neutral: the same model that generates a deepfake can create personalized educational content or destroy someone's reputation. The difference isn't in the tool; it's in the intention of the user. If you're building with AI, investing in AI, or simply using AI, you have a choice.
Build systems that enhance human agency and understanding, or build systems that exploit human psychology and confusion. The future depends on those individual decisions aggregating in the right direction.
Toward a Future of Verified Abundance
We're living through the final moments of the "pre-verification internet."
Soon, saying "I saw it online" will be replaced with "I verified it online." That transition—if we embrace it—may become one of the greatest upgrades to human cooperation in history.
This isn't a question of whether trust will survive the AI era. The question is whether we will design systems that make it stronger than ever. And the answer, if we choose abundance, is a resounding Yes.