Trust as Leverage
One of the less-discussed consequences of AI proliferation is a growing trust deficit. When AI can generate convincing text, voices, faces, and video, the baseline question of “is this real?” becomes much harder to answer. Deepfakes are already a problem. AI-generated hype is already everywhere. And people are already starting to feel it: the creeping uncertainty about who is actually who, what is actually real, and who is actually looking out for them versus just optimizing for conversion.
In that environment, trust becomes a scarce resource. And scarce resources have leverage. The person in your industry who is genuinely trustworthy, who has a track record, who is known by name, and who takes accountability seriously has an asset that no amount of AI-generated content can manufacture. You cannot deepfake a decade of honest dealing with real clients in a real community.
The practical implication is to double down on your trust-building activities right now rather than coasting on whatever reputation you have. Be more transparent, not less. Be more explicit about your process, your judgment calls, and your reasoning. Let clients see how you think. In a world where AI can produce outputs that look like expertise, demonstrating real expertise becomes more important, not less. The visible decision-making process is part of what you are selling.
My friend David Roy Newby gave me a phrase that I think nails the mechanism here: “proof of soul.” In a world where AI can generate anything, soulful inefficiencies serve as trust signals. When someone nerds out about something irrelevant to their pitch, when they share something deeply personal that no optimization algorithm would recommend, when they do something that makes no business sense but makes perfect human sense: those moments are proof of soul. They are evidence that a real person is behind the work. You cannot deepfake genuine passion about a niche topic. You cannot algorithmically generate the kind of specificity that comes from a person who has actually lived something. In the AI age, these moments of unoptimized humanity become your most powerful trust signal.
This connects to something Russ Ballard and I discussed after my SXSW talk. Vulnerability builds trust quickly. When you are transparent about what you do not know, you give people permission to learn without feeling behind. You create safe space for courage, boldness, and imagination. That transparency is itself a proof of soul, because no optimization algorithm would ever tell you to admit uncertainty. Being willing to say “I am still figuring this out” in front of an audience is the kind of move that only a real person makes, and people recognize it immediately.
This also means being explicit about your values and your commitments in ways that generic AI providers cannot be. You have a name on the line. You are accountable to real people in a real community. That accountability is not a burden. It is your competitive moat. Being a trusted person offering an essential service during a time of genuine uncertainty and change is one of the strongest positions you can be in right now.
Key Takeaway
Trust is becoming more valuable as AI makes it harder to know whom to believe; double down on your accountability and transparency rather than coasting on past reputation.