🔗 🧠 #15: Understanding the State of AI

Getting a grasp on AI through LLMs, designing around hallucinations, emerging UX patterns, the economics of it all, and the history of the big players

Five resources every week with actionable takeaways to make you a better designer.

Between the hype and the headlines of AI, it can feel like we're all just nodding along pretending to understand what's actually happening.

This week, we're pulling back the curtain on AI—from how it actually works (spoiler: it's not tiny robots typing really fast) to how it's reshaping our interfaces, our work, and maybe even our future.

Because whether you're excited about our new AI overlords or still skeptical about the whole thing, understanding what's happening behind the scenes might just help us navigate what's ahead.

— Jake

TODAY'S EDITION

Content Topic: AI | Content Type: Video

A PEEK BEHIND AI’S MAGIC CURTAIN

By now you’ve likely at least heard the term LLM (Large Language Model). It’s the transformative tech that made all our AI chatbot buddies what they are today. But what does it actually mean? What’s going on behind the scenes? Is it just Frank Morgan pulling levers when we ask questions? Here’s a great primer on what’s happening if you still don’t quite get it.

THE JUICE

Word by Word Magic: At its core, an LLM is actually pretty simple—it's a sophisticated system that predicts what word should come next in any given text. Think of it like an incredibly advanced autocomplete that can handle complex contexts and generate human-like responses.
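
That "advanced autocomplete" framing can be sketched in a few lines. This toy uses simple word-pair counts over a made-up corpus—real LLMs learn these probabilities with neural networks over billions of words—but the core task, predict the next word, is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

An LLM does the same kind of prediction, just with a learned model of context instead of raw pair counts.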

Numbers Behind the Words: Every single word gets translated into a list of numbers that capture its meaning. These numbers can shift based on context—just like how the word "bank" means something different in "river bank" and "bank account."
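
Here's a toy illustration of those "lists of numbers." The vectors below are hand-picked stand-ins for the thousands of learned dimensions a real model uses, but they show the idea: similar meanings point in similar directions, and the two senses of "bank" get different numbers:

```python
import math

# Made-up 3-number "embeddings" -- real models learn thousands of dimensions.
vectors = {
    "bank (river bank)":   [0.9, 0.1, 0.2],
    "bank (bank account)": [0.1, 0.9, 0.3],
    "shore":               [0.8, 0.2, 0.1],
    "money":               [0.2, 0.8, 0.4],
}

def cosine(a, b):
    """Similarity of two vectors: near 1.0 = similar meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(vectors["bank (river bank)"], vectors["shore"]))   # high
print(cosine(vectors["bank (river bank)"], vectors["money"]))   # much lower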

The Attention Game: What makes modern LLMs special is their ability to process text all at once using "attention"—letting all words in a passage interact with each other simultaneously to refine their meaning based on context. This parallel processing is what enables them to understand and generate nuanced responses.
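
The attention mechanism itself can be sketched in plain Python on a made-up three-word "sentence." Each word starts with a vector, scores every other word for relevance, and becomes a relevance-weighted blend of all of them—all words updated at once:

```python
import math

words = ["river", "bank", "water"]
vecs = [[1.0, 0.0], [0.6, 0.4], [0.9, 0.2]]  # made-up 2-d word vectors

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(vectors):
    """Return new vectors: each a relevance-weighted mix of all of them."""
    d = len(vectors[0])
    out = []
    for q in vectors:  # every word "queries" every other word
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]      # how relevant is each other word?
        weights = softmax(scores)        # turn scores into proportions
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(d)])
    return out

updated = attention(vecs)
print(updated[1])  # "bank" has now absorbed context from "river" and "water"
```

This is the scaled dot-product attention at the heart of transformers, minus the learned query/key/value projections and the many stacked layers that make it powerful.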

Scale Matters: The training process involves mind-boggling amounts of data—it would take a human over 2,600 years to read what GPT-3 was trained on, reading 24/7. This massive scale is what enables these models to develop such broad capabilities.
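
A back-of-the-envelope check on that claim, using the roughly 300 billion training tokens reported in the GPT-3 paper and assumed (so, loudly hypothetical) numbers for words-per-token and reading speed:

```python
# Rough sanity check on the "thousands of years of reading" claim.
tokens = 300e9                 # ~300B tokens, per the GPT-3 paper
words = tokens * 0.75          # rule of thumb: ~0.75 words per token
words_per_minute = 200         # assumed steady reading pace, 24/7

minutes = words / words_per_minute
years = minutes / (60 * 24 * 365)
print(round(years))  # on the order of a couple thousand years
```

Different assumptions shift the exact figure, but every reasonable choice lands in the same "multiple human lifetimes, reading nonstop" ballpark.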

Two-Stage Learning: LLMs go through two crucial training phases:

  • Pre-training: Learning general language patterns from internet text

  • Reinforcement Learning: Fine-tuning for being helpful and avoiding problematic outputs

The Magic of Randomness: Instead of always picking the most obvious next word, LLMs are allowed to occasionally choose unexpected options—like how humans don't always use the most common words. This is why asking the same question twice often gives you two slightly different answers.
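
This knob is usually called "temperature," and the sampling step can be sketched directly. The candidate words and scores below are made up for illustration:

```python
import math
import random

# Made-up model scores for the next word after "The sky is ..."
candidates = {"blue": 2.0, "gray": 1.0, "green": 0.5, "purple": 0.1}

def sample_next(scores, temperature=1.0):
    """Sample a next word from the scores instead of always taking the top one."""
    words = list(scores)
    # Dividing by temperature before softmax: low T sharpens the distribution
    # (almost always the top word), high T flattens it (more surprises).
    exps = [math.exp(scores[w] / temperature) for w in words]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

random.seed(0)
print([sample_next(candidates, temperature=0.2) for _ in range(5)])  # mostly "blue"
print([sample_next(candidates, temperature=5.0) for _ in range(5)])  # more varied
```

Ask the same question twice and the dice land differently—hence the slightly different answers.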

Behind the Curtain: While we can design the framework for how LLMs work, their specific behaviors come from training on massive datasets. This makes it tough to understand exactly why they make certain predictions—sorta like trying to understand why a human might choose certain words over others.

Human Touch Required: Despite all this sophistication, these models still rely on human feedback and correction to become useful assistants rather than just text prediction engines.

Content Topic: Design Inspiration | Content Type: Website

THE UX OF AI

One of the most exciting things (IMO) about new technologies is seeing all the different ways people might interact with them. Not often do we get to witness complete paradigm shifts in our everyday experiences. This website is a nifty collection of UX patterns emerging from all this AI hubbub. And we're only beginning to scratch the surface of what's possible. I think the combination of AI and spatial computing (i.e. getting away from traditional screens and phones) is going to open up an insane world of possibilities.

THE JUICE

Start with Human Needs: Instead of bending humans to fit technology, shape AI interactions around proven frameworks of human behavior. Look for ways to make advanced capabilities feel familiar, like how chat interfaces made complex language models feel more approachable.

Bridge the Adaptation Gap: Recognize that what's possible technologically may not be what's comfortable behaviorally. Design gradual onramps that help folks build trust and understanding with AI features over time, rather than overwhelming them with ✨ magical capability ✨.

Content Topic: AI | Content Type: Article

THE SNOZZBERRIES TASTE LIKE SNOZZBERRIES

AI systems are surprisingly good at sounding confident while being completely wrong. A recent study found ChatGPT falsely attributed 76% of 200 quotes it was asked to identify, and specialized legal AI tools get things wrong in at least 1 out of 6 queries. For designers building AI into products, this creates a unique challenge: how do we create interfaces that maintain trust while acknowledging the imperfection?

THE JUICE

Elegant Uncertainty: Instead of hiding AI's limitations behind generic disclaimers, try using confidence ratings near outputs. For high-stakes decisions, use explicit indicators like "68% confidence" or simpler high/medium/low confidence badges.
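
A sketch of the badge idea, mapping a model's confidence score to a simple UI label. The thresholds here are invented for illustration—tune them against your own model's calibration, not these numbers:

```python
def confidence_badge(score: float) -> str:
    """Map a 0-1 confidence score to a high/medium/low badge label."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be between 0 and 1")
    if score >= 0.85:           # hypothetical cutoff
        return "High confidence"
    if score >= 0.60:           # hypothetical cutoff
        return "Medium confidence"
    return "Low confidence"

print(confidence_badge(0.68))  # "Medium confidence"
```

Pair the label with a color or icon, and surface the raw percentage on hover for the users who want it.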

Build Trust Through Transparency: Show users the factors that influenced AI's decision when possible. Create expandable sections that reveal competing answers or confidence scores. The goal isn't to overwhelm users but to make verification feel natural and accessible.

Make Verification Seamless: If your AI cites sources, don't just list them—make them easily accessible. Create hover states that preview sources or side panels that show referenced content without breaking the user's flow. Good source design encourages verification without demanding it.

Context Over Warnings: Rather than relying on generic warning labels that folks probably ignore, design contextual indicators that appear when uncertainty is high. Use visual cues like color coding or icons to signal when extra verification might be needed, making it part of the natural interaction rather than an afterthought.

Content Topic: Economics | Content Type: Website

THE FUTURE OF WORK

After analyzing millions of conversations with their AI assistant Claude (which I love, btw), Anthropic shares first-of-its-kind data on how AI is actually being used across different jobs and tasks in the real world. Keep an eye on this one to see how these findings shift over time.

THE JUICE

Where The Action Is:

  • Tech leads the pack—37.2% of AI use is in computer/math jobs (tasks like coding, debugging, and network troubleshooting)

  • Creative fields follow at 10.3% (mostly writing and editing)

  • Physical labor jobs barely register (farming, fishing, and forestry at 0.1%)

The New Work Reality:

  • About 36% of jobs use AI for at least a quarter of their tasks

  • Only 4% of jobs heavily use AI (75%+ of tasks)

  • Mid-to-high salary jobs use AI most (devs, data scientists)

  • Both very low and very high-wage jobs use AI least

  • AI tends to augment (57%) rather than automate (43%) work

Zooming Out: This isn't about AI replacing entire jobs—it's transforming specific tasks within jobs. The real story is how AI is becoming a collaborator rather than a replacement, especially in technical and creative fields where it bolsters human capabilities instead of taking over completely.

Content Topic: History | Content Type: Book

AN AI ARMS RACE

While we're all busy fiddling with ChatGPT and trying to figure out if AI is going to take our jobs, there's actually a fascinating story behind how we got here. The two biggest players, OpenAI and DeepMind, have been racing to create artificial general intelligence (AGI)—but with very different approaches and philosophies. Inevitably, big tech comes into the picture when money is needed, muddying the noble cause of creating a technology to better humanity. It's the classic case of compromising mission to satisfy shareholders.

THE JUICE

Tale of Two Visions: Both OpenAI and DeepMind started with the same vision—developing AGI to benefit humanity. But like any good story, they took different paths to get there, shaped by their founders' distinct philosophies.

Altruism to Acquisition: While both organizations began with altruistic missions, their integration into larger tech companies (Microsoft/OpenAI and Google/DeepMind) created tension between profit and purpose.

The Ethics Equation: As these companies push the boundaries of what's possible, they're grappling with major ethical questions about bias, privacy, and power concentration. When you're building something this influential, it changes the meaning of "move fast and break things."

Power Play: The development of AI has created an unprecedented concentration of power within tech companies. This raises important questions about who gets to make decisions that could affect all of humanity.

Balancing Act: The future of AI development isn't just about making smarter systems—it's about finding the right balance between pushing technological boundaries and ensuring responsible development.

Get a copy of Supremacy*

*This is a product recommendation and affiliate link. When you buy through this link, I may earn a commission.

THANKS FOR READING—SEE YOU NEXT WEEK

In the meantime:

  1. Forward this email or share this link with your friends if you feel they need some links for thinks: https://www.linksforthinks.com/subscribe

  2. Reach out with any suggestions or questions for the newsletter.

  3. Send me some of your favorite links and takeaways.

Cheers, Jake