Links for Thinks #22 Trust & AI
How much trust is too much trust?

Five resources with actionable takeaways to make you a better designer.
Don't trust AI at all? Trust it a little too much? Regardless of where you sit on the spectrum, chances are it isn't going anywhere soon.
As AI becomes baked into everything from our work tools to our daily conversations, trust has become the critical design challenge. Not blind faith, not total skepticism, but calibrated trust that matches what these systems can actually do.
This week, we're exploring how to design AI experiences that earn and maintain user confidence through transparency, appropriate friction, and honest communication about limitations. Because the most dangerous AI isn't the one that fails; it's the one that sounds so confident you forget to question it.
– Jake
TODAY'S EDITION

TRUST AND AI
Trust isn't something you can just slap onto an AI product like a badge. It's earned through thoughtful design decisions that acknowledge both the power and limitations of these systems. As AI becomes more integrated into our daily tools, understanding how to build and maintain user trust isn't just nice-to-have; it's essential.
THE JUICE
Trust is Multidimensional: Trust in AI isn't a single switch; it's built from:
Reliability (does it work consistently?)
Transparency (can I understand what it's doing?)
Predictability (does it behave as expected?).
Calibrated Trust is the Goal: The sweet spot is where confidence matches actual capability. Over-trust leads to dangerous dependency; under-trust means useful tools go unused. Design for appropriate reliance.
Show Your Work: Explain why the AI made a particular decision, what data it considered, and what it might have missed. Transparency doesn't mean dumping technical details; it means providing the right level of insight for the user's context.
Measure Trust Through Behavior: Track how often users verify AI outputs, whether they return to use the feature, and how much they rely on suggestions versus their own judgment (a minimal metrics sketch follows this list).
Design for Failure: How your AI handles mistakes is as important as how it handles success. Create clear pathways for users to correct errors and provide feedback. Every failure is an opportunity to either build or break trust.
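To make the "Measure Trust Through Behavior" point concrete, here's a minimal sketch of what behavioral trust metrics could look like in code. The event shape and the relianceRatio helper are hypothetical, not taken from any particular product; the idea is simply to log whether users verify outputs before accepting them and how often they come back.

```typescript
// Hypothetical event schema for measuring calibrated trust through behavior.
// Names (AiInteractionEvent, relianceRatio) are illustrative only.
interface AiInteractionEvent {
  userId: string;
  suggestionShown: boolean;    // did the AI offer an output?
  suggestionAccepted: boolean; // did the user keep it?
  userVerified: boolean;       // did the user check sources or edit before accepting?
  returnedWithin7Days: boolean;
}

// Reliance ratio: accepted-without-verification over all accepted suggestions.
// Near 1 can signal over-trust; near 0 can signal under-trust or low utility.
function relianceRatio(events: AiInteractionEvent[]): number {
  const accepted = events.filter(e => e.suggestionShown && e.suggestionAccepted);
  if (accepted.length === 0) return 0;
  const unverified = accepted.filter(e => !e.userVerified);
  return unverified.length / accepted.length;
}
```

Tracked over time, a rising unverified-acceptance rate is one signal that confidence is drifting past actual capability.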

HUMAN-LIKE, BUT NOT HUMAN
We're remarkably bad at knowing when to trust AI. These systems sound so confident and natural that we've lost our usual BS detectors. Every day, millions chat with AI like it's their smartest friend, but this very naturalness creates a dangerous setup: trust works best paired with critical thinking, yet the more we rely on these systems, the worse we get at questioning them.
THE JUICE
The Magic 8 Ball Problem: AI responses aren't intelligent reasoning; they're statistically probable word chains. Ask the same question twice, get different answers. Those convincing responses are well-formed guesses, not verified truth (a toy sampling sketch follows this list).
Confidence Without Competence: Companies give AI human-like qualities through "thinking" indicators and conversational tones. This removes the uncertainty signals we normally use to detect when something's off. AI sounds equally confident whether it knows the answer or is completely making something up.
Warning Labels Don't Work: After all that effort to make AI feel human and trustworthy, a tiny "can make mistakes" disclaimer buried at the bottom is just plausible deniability, not genuine transparency.
Scaffolding Over Crutches: Current AI systems provide so much support that users become dependent without developing critical thinking skills. What if we designed AI as temporary scaffolding instead? A "learning mode" could surface uncertainty, prompt verification, and gradually build users' judgment rather than replacing it.
Friction as a Feature: Adding verification steps might hurt engagement, but that's the point. Protective friction helps users develop the judgment needed to work safely with AI.
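A tiny illustration of the Magic 8 Ball point above: next tokens are drawn from a probability distribution, so the very same prompt can come back with different answers. The vocabulary and probabilities below are invented for the example.

```typescript
// Toy next-token sampler: the "Magic 8 Ball" effect in miniature.
const nextTokens = [
  { token: "definitely", p: 0.35 },
  { token: "probably",   p: 0.30 },
  { token: "unlikely",   p: 0.20 },
  { token: "unclear",    p: 0.15 },
];

// Draw one token according to its probability mass.
function sampleToken(): string {
  let r = Math.random();
  for (const { token, p } of nextTokens) {
    r -= p;
    if (r <= 0) return token;
  }
  return nextTokens[nextTokens.length - 1].token;
}

// "Ask the same question twice, get different answers."
console.log(sampleToken()); // e.g. "probably"
console.log(sampleToken()); // e.g. "unlikely"
```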

A FIELD GUIDE TO ROBOT PROSE
Wikipedia editors have catalogued thousands of instances of AI-generated text to create a pattern recognition guide. This isn't about banning certain words; it's a field guide to help detect the deeper problems that surface-level AI tells point to.
THE JUICE
Regression to the Mean: LLMs use statistical algorithms to guess what should come next, which means outputs tend toward the most statistically likely result. Everything sounds safe, generic, and eerily similar, because it's optimized for the widest variety of cases, not accuracy or originality.
Puffed-Up Importance: AI constantly reminds you that subjects "represent or contribute to a broader topic" using a small repertoire of phrases. Things always "stand as a symbol" of something or carry "enhanced significance."
The Neutrality Problem: LLMs struggle with neutral tone, especially for "cultural heritage" topics. They'll describe anything cultural with promotional language that sounds more like travel brochures than encyclopedias.
The Formula Problem: AI loves rigid structures. Articles end with "Despite challenges..." followed by hopeful future prospects. The "rule of three" appears everywhere. Transitions rely on a tiny set of phrases like "in summary" or "overall" or the classic "it's not only X, but also Y" (a toy scanner for these tells follows this list).
American English by Default: A mismatch often appears between user location, topic origin, and English variety used. An Indian writer covering an Indian university wouldn't use American English, but LLM outputs default to American English unless prompted otherwise.
Don't Trust Detection Tools: AI content detectors like GPTZero perform better than random chance, but they have significant error rates and can't replace human judgment.
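As a toy illustration of the formulaic tells above, here's a sketch that flags a handful of the stock transitions. The phrase list is just a sample, and, per the point on detection tools, this is a reading aid for spotting patterns, not a reliable AI detector.

```typescript
// Toy scanner for the formulaic transitions listed above.
// Illustrative only: flagging a phrase does not prove text is AI-generated.
const formulaicPhrases: RegExp[] = [
  /\bin summary\b/i,
  /\boverall\b/i,
  /\bnot only .+ but also\b/i,
  /\bstands? as a symbol\b/i,
  /\bdespite (these )?challenges\b/i,
];

function flagFormulaicPhrases(text: string): string[] {
  return formulaicPhrases
    .filter(pattern => pattern.test(text))
    .map(pattern => pattern.source);
}

console.log(flagFormulaicPhrases(
  "Despite challenges, the festival stands as a symbol of resilience. " +
  "Overall, it is not only vibrant but also significant."
));
// -> flags several of the patterns above
```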

DESIGNING FOR CONFIDENCE
Trust builders give users confidence that AI results are ethical, accurate, and trustworthy. These patterns address the fundamental question: how do we design AI experiences that earn and maintain user trust through intentional design decisions?
THE JUICE
Consent in Recording: Being intentional about consent for data sharing builds trust and supports ethical experiences. Limitless disables recording of other people's voices by default until audible consent is registered, while Granola initially launched without any disclosure. The industry is converging around opt-in disclosures.
Voice Cloning Consent: It's increasingly easy to create fake voices and avatars from recordings. ElevenLabs requested consent from celebrity estates, such as Burt Reynolds's, before cloning their voices, establishing a pattern similar to laws preventing unauthorized image endorsements.
Incognito Mode: Only ChatGPT has integrated private browsing mode for AI, while Google explicitly disallows AI search features in incognito mode. Meta offers a middle ground: users can't prevent Meta AI from learning preferences, but can mute messages temporarily or indefinitely.
Memory Management: The more we interact with AI, the better it knows us. Three memory patterns emerge:
Knowledge maps expose what data the AI has captured
Selective memory lets users edit retained data
Clear the cache allows full reset without forcing account deletion.
Watermarks for Differentiation: As AI-generated content becomes more prolific, differentiating it from human-created content protects everyone. Four watermark types exist (the statistical approach is sketched below this list):
Overlay (visual symbols, easily removed)
Steganographic (imperceptible patterns)
Machine learning (distinct keys readable only by other models)
Statistical (random patterns, hardest to crack).
The Regulatory Patchwork: China requires source-generated watermarks plus metadata. The EU's AI Act imposes labeling standards. The US established watermarking standards via executive order but needs congressional action for enforcement. There's no conventional approach yet; it'll likely require regulation and source code combined.
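For a feel of how the "statistical" watermark type works, here's a deliberately simplified sketch: generation is nudged toward a pseudo-random "green" half of the vocabulary, and detection checks whether the share of green tokens is improbably high. Real schemes are far more sophisticated; the hash and the threshold intuition here are toys.

```typescript
// Toy hash deciding whether `token` is "green" given the previous token.
// Roughly half the vocabulary lands in the green list for any prefix.
function isGreen(prevToken: string, token: string): boolean {
  let h = 0;
  for (const ch of prevToken + "|" + token) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % 2 === 0;
}

// Fraction of tokens that fall in the green list. Ordinary text should sit
// near 0.5; watermarked text sits well above it.
function greenFraction(tokens: string[]): number {
  let green = 0;
  for (let i = 1; i < tokens.length; i++) {
    if (isGreen(tokens[i - 1], tokens[i])) green++;
  }
  return tokens.length > 1 ? green / (tokens.length - 1) : 0;
}

const tokens = "the quick brown fox jumps over the lazy dog".split(" ");
console.log(greenFraction(tokens)); // near 0.5 for unwatermarked text
```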

BEYOND THE CHATBOT
We're so obsessed with being AI-first that we've forgotten the good old things people already know and understand. That magical ChatGPT text box? It has a high interaction cost: people are terrible at articulating intent, we have to wait 20-40 seconds for responses, and then AI keeps forgetting things. Why should everyone learn prompt engineering? Shouldn't AI understand us better instead?
THE JUICE
Reduce Prompting, Increase Context: Before anyone sends a prompt, make it so succinct, accurate, and contextual that generic responses are minimized. Slow people down with structured inputs: buttons, sliders, checkboxes (sketched at the end of this list). Tools like Perplexity let you add clarifications while the AI is working. Consensus offers filters for date ranges, citations, and methodology, which are infinitely more useful than "ask me anything."
AI Should Ask Questions: Give AI a prompt that says "ask me a bunch of questions and then we'll make a thing together", yet most people outside tech don't know that's possible. Design affordances to help people understand AI can be conversational, not just transactional.
Bring Back Familiar Controls: Consensus makes smart use of filters: last five years, minimum citations, journal rank, methodology. It shows the distribution of results (95% of papers say yes, 5% say possibly) rather than just one answer. These are simple UI patterns we already love, applied to AI.
Fix the Refinement Journey: People copy AI output to text editors, cherry-pick paragraphs, and bring it back to AI to restructure; needing a dedicated workspace elsewhere is a bad sign. Perplexity's edit mode lets you convert responses to pages with a table of contents, then use context menus to extend or remove sections.
Format Controls Should Be Standard: Instead of typing "please write in list format," give users view toggles: list, table, compact. This should be default in every AI experience.
Leverage Loading States: While AI is processing, that's the perfect time for context extraction. Gamma prompts you with theming controls during generation; people stop caring about loading time when they have something to do.
Show Your Scope: Build trust by showing where answers come from. Display "for this query, we considered 279 pages from experts with 20 years' experience" rather than just generic sources.
AI Second, Not AI First: Take the existing user journey, then sprinkle AI across it to reduce frustrations or speed up successes. Dovetail does this beautifully: no sparkles, just AI quietly helping. Research shows people aren't looking for AI features; they're looking for features that work.
The Future Is Orchestration: We're moving from tactical to strategic work. AI won't replace people who create emotional connections with products through attention to detail. What we want are wonderful human-first experiences that happen to have AI components, not AI-first experiences with human afterthoughts.
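Picking up the "Reduce Prompting, Increase Context" idea, here's a small sketch of structured inputs compiled into a precise prompt before anything is sent. The SearchFilters shape and buildPrompt helper are hypothetical, loosely inspired by the Consensus-style filters mentioned above.

```typescript
// Hypothetical structured-input form compiled into a precise prompt,
// so users don't start from a blank "ask me anything" box.
interface SearchFilters {
  question: string;
  yearFrom?: number;                          // slider
  minCitations?: number;                      // stepper
  peerReviewedOnly: boolean;                  // checkbox
  outputFormat: "list" | "table" | "compact"; // view toggle
}

function buildPrompt(f: SearchFilters): string {
  const constraints = [
    f.yearFrom ? `published after ${f.yearFrom}` : null,
    f.minCitations ? `with at least ${f.minCitations} citations` : null,
    f.peerReviewedOnly ? "peer-reviewed only" : null,
  ].filter(Boolean);
  return `${f.question}\nConstraints: ${constraints.join("; ") || "none"}.` +
    `\nFormat the answer as a ${f.outputFormat}.`;
}

console.log(buildPrompt({
  question: "Does remote work improve productivity?",
  yearFrom: 2020,
  minCitations: 10,
  peerReviewedOnly: true,
  outputFormat: "table",
}));
```

The same toggles double as the format controls described above: the user flips a switch instead of typing formatting instructions.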
THANKS FOR READING. SEE YOU NEXT WEEK
In the meantime, feel free to:
Forward this email or share this link with your friends if you feel they need some links for thinks: https://www.linksforthinks.com/subscribe
Reach out with any suggestions or questions for the newsletter.
Send me some of your favorite links and takeaways.
P.S. I used AI to help write this edition. Trust me any more or any less? Trust AI any more or any less? Who knows, we're all figuring this out together…
Cheers, Jake




