🔗 🧠 #21 Doomscrolling and Social Media

Exploring why we doomscroll

Five resources every week with actionable takeaways to make you a better designer.

Hi folks, we’re back from our summer break for regularly scheduled programming. Didn’t know there was a Links for Thinks summer break? Neither did I — but you gotta let that brain take a break sometimes.

The inspiration for this edition came from some unfortunate circumstances. I was recently bedridden for around a week, and while I’m grateful it wasn’t longer — I did get sucked into a bit of a dark hole that is seemingly more common these days.

That dark hole was a few days of doomscrolling. It’s something that doesn’t seem to be getting any better after we crawled out of our lockdowns a few years ago.

It made me wonder, “why is this happening, and how did we get here?”

So let’s take a look at why this is happening and how we got here so we can collectively figure out how to get out of here.

— Jake

TODAY'S EDITION

Content Topic: Design Patterns | Content Type: Article

THE CONTENT THAT NEVER ENDS

Let’s set the stage with a design pattern introduced circa 2006 that makes modern-day doomscrolling possible: the infinite scroll. Dreamed up by Aza Raskin, it’s not inherently a dark pattern, but when it’s paired with exploitative intent, we find ourselves falling into rabbit holes that leave our brains drained of dopamine. (Yes, Raskin does realize the gravity of the situation.)

THE JUICE

When To Be Infinite: Believe it or not, there may be some use cases where infinite scroll isn’t awful. If it’s being used to reduce interruptions to a user’s task? Probably a good use case. Used to reduce interaction cost? Probably not so bad. And if it makes something easier to find on a mobile device, why not. If it’s being used to keep you sucked into conspiracy-theory rabbit holes? Probably not so good.

Usability Woes: For every good reason to use infinite scroll, there are likely two reasons not to, reasons like…

  • Difficulty re-finding content — Making it tough to remember the location of something specific

  • Illusion of completeness — When there isn’t an action to keep going, and it’s unclear that there is more to do

  • Inability to access the end of the page — Sometimes people actually need to get to a footer, don’t make that impossible

  • Accessibility problems — If someone navigates a page in a way that isn’t just scrolling (e.g. keyboard-only or screen readers), infinite scroll adds some complications if not handled in an accessible way

  • Increased page load — If you’re dealing with some spotty connectivity or low bandwidth on a phone — infinite scroll can slow things down

  • Poor SEO performance — Crawlers might have a tough time finding your content

Yes, And: If you find yourself needing infinite scroll for a good use case, try supplementing the pattern with a landmark or action that adds intent. A “load more” button can help, as can integrated pagination that lets folks know they’re moving on to a new batch of content.

Content Topic: Algorithm Design | Content Type: Report

CROWDSOURCING THE TRUTH

The second bit of important technology that enables doomscrolling is the recommendation algorithm. It has really found its stride in recent years, so let’s dive into a case study on one of the most prominent recommendation machines out there: YouTube.

When YouTube won't provide transparency into how its recommendation algorithm works, sometimes you have to get 37,000 people to help you investigate instead. That's exactly what Mozilla did with their RegretsReporter project — the largest-ever crowdsourced study of YouTube's algorithm.

Between July 2020 and May 2021, volunteers from 91 countries flagged 3,362 regrettable videos they encountered on the platform. The findings paint a pretty clear picture: YouTube's algorithm isn't passive — it’s actively steering people toward problematic content.

The Regrets Are Real and Varied: Volunteers reported everything from COVID misinformation to sexualized children's cartoons to election conspiracy theories. The most common categories? Misinformation, violent content, hate speech, and spam. About 12% of reported videos either shouldn't be on YouTube or shouldn't be recommended, based on YouTube's own Community Guidelines.

Recommendations Are the Problem: 71% of all reported regrets came from YouTube's recommendation system, not from searches. And recommended videos were 40% more likely to be regretted than videos people actively searched for. Some of these recommended videos even violated YouTube's own rules and were later removed — but only after racking up millions of views.

Non-English Speakers Get Hit Harder: The regret rate is 60% higher in countries where English isn't the primary language. Brazil, Germany, and France had particularly high rates. Among regrettable videos in non-English languages, 36% were pandemic-related, compared to just 14% in English.

Regrets Perform Well: The reported videos weren't obscure content; they were getting 70% more views per day than other videos volunteers watched. YouTube's algorithm was actively amplifying this stuff because, well, it works.

Unrelated Recommendations: In 43% of cases where researchers could see what volunteers watched before a regret, the recommendation was completely unrelated. Someone watches wilderness survival videos and gets recommended extreme right-wing political content. Checks out.

What Needs to Change: Mozilla's recommendations were pretty straightforward — platforms need to enable independent audits of their recommendation systems, publish transparency reports that actually mean something, and give users real control over what gets recommended to them.

The Bigger Picture: YouTube's algorithm drives an estimated 700 million hours of watch time every single day. That's a massive amount of influence with almost zero oversight.

Content Topic: Economics | Content Type: Video

FROM FRIENDS TO FEEDS TO… SHAREHOLDER VALUE

The third and final piece to our doomscrolling storm is technology built to maximize shareholder value.

Instagram’s onboarding once promised to "see photos and videos from your friends." Feels like we’ve drifted a bit away from that. Social media platforms, generally speaking, were built for genuine connection. But we all know at this point they’ve morphed into attention-extraction machines with advertisers being the real customer, not regular folks.

That’s a problem, because we humans have feeble little brains that can be easily manipulated if the right levers are pulled. And boy, have modern social media platforms (and media companies at large) been pulling those levers.

The Ol’ Bait-and-Switch: Web 2.0 gave us something revolutionary — the "follow." You decided who you wanted to hear from, creating your own corner of the internet. But once these platforms went public, the game changed. Connection had to become profit, and the real customer became advertisers hungry for your attention.

The Arms Race Providing Shareholder Value: When platforms compete for attention, they’ll exploit every psychological vulnerability they can find. Infinite scroll, to name one.

TikTok's Algorithm Revolution: While everyone focused on short-form video, TikTok's real innovation was ditching the follow entirely. Your feed became determined by what the algorithm thinks you'll watch, not who you chose to follow. Every scroll is a data point teaching the machine exactly what keeps you hooked.

The Negativity Bias Problem: Algorithms don't care if content makes you calm or angry. They only care if you keep scrolling. Unfortunately, humans naturally stare at negative and outrageous content longer than positive and calm content. Mix infinite scroll + recommendation algorithms + human psychology = we got ourselves that perfect doomscroll storm I was talking about.

The Designing of Less Social Media: During Meta's FTC trial, Zuck admitted only 7% of Instagram time is spent viewing friends' content. Social media has become less social and more about consuming lifestyle aspiration content. It's moved from "what's going on with my friends and how can I connect with them" to "what should I be doing with my life."

Values vs. Attention: There's a gap between what we value and what we pay attention to. We'll scroll through endless content that doesn't bring meaning, joy, or growth to our lives. These systems are so powerful that even the people who build them (looking at you, tech workers) have to set hard limits on their own usage (and I usually end up ignoring those limits anyway).

Finding Balance: We're only two decades into figuring out this massive shift in human communication. The internet is brand new in the grand scheme of things. We have time to course-correct and design technology that adds value to our lives rather than extracting it. The future isn't written yet — and that's actually pretty exciting. So long as we don’t design ourselves into an anxiety-ridden future.

This is less about abandoning it all to buy a farm and live off the grid with our goats, and more about being intentional. It's about designing experiences that serve humans, not shareholders. The question isn't whether to use technology, but how to use it in ways that actually better our lives.

TO REGULATE, OR NOT TO REGULATE

It sounds like our doomscrolling journey led us to… well, more gloom and doom. We know that big tech has built machines designed to hold your attention. And we know that the algorithms don't care what keeps you scrolling — puppy videos or conspiracy theories. They only care that you keep consuming. And unfortunately, as we know, nothing keeps people engaged quite like rage. But there are still things that can be done to get us to a better spot.

THE JUICE

Social Media is Media: Tech companies claim they're neutral platforms, but every algorithmic recommendation is an editorial decision. When YouTube suggests extreme content or Facebook amplifies angry reactions, those are editorial choices — they just don't want the responsibility that comes with it.

It’s a Business World: These algorithms aren't just reflecting polarization, they're manufacturing it. Companies could change their algorithms tomorrow to prioritize accuracy over engagement. They choose not to because extremism is more profitable than moderation.

Regulate The Machine: Three things could help loosen the grip:

  1. Algorithmic transparency — disclose how recommendations work

  2. Algorithmic accountability — consequences for harmful amplification

  3. Algorithmic choice — let users opt for chronological feeds instead of manipulative curation
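That third option, algorithmic choice, is the easiest one to picture in code. Here's a toy sketch (the `Post` shape and both functions are made up for illustration, not any platform's real API):

```typescript
// Toy illustration of "algorithmic choice": the same posts,
// ordered two different ways. Field names are invented for this sketch.

type Post = { id: string; postedAt: number; engagementScore: number };

// Chronological: newest first, no opinion about what you "should" see.
function chronologicalFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => b.postedAt - a.postedAt);
}

// Engagement-ranked: whatever the platform predicts will hold attention.
function engagementFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => b.engagementScore - a.engagementScore);
}
```

Same posts, two orderings. "Algorithmic choice" simply means letting the user decide which ordering drives their feed, rather than the platform deciding for them.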

Content Topic: Ethics | Content Type: Essay

WELL, NOW WHAT?

If you're designing digital experiences, you're either part of the problem or part of the solution. There's no neutral ground when your design decisions directly impact how people spend their time and attention. The gamification tactics that boost engagement metrics often exploit the same psychological vulnerabilities that make doomscrolling so addictive.

THE JUICE

The Reality Check: Every product team discusses ethics and cares about it deeply. But executive leadership might not always partake in the same line of thinking. Designers often imagine themselves as powerful beings shaping products to benefit people, but most are acting on what leadership believes should be done to generate numbers that satisfy investors. It's not always possible to convince folks of your POV, but that doesn't mean you're off the hook.

Question Everything: When stakeholders ask for features designed to increase engagement, dig deeper. What problem are we actually solving? Who benefits from this design decision — the user or the business? Accept your role as a challenger of the status quo, not just an executor of business requirements.

Put Needs Over Profits: Your role should be to question decisions made for you and to put the needs of the many ahead of the profits of the few. Sometimes the most ethical design is the one that helps people leave faster, even when that conflicts with engagement metrics.

Start With Users: Go talk to actual users. Get to know them, understand their real-life problems, and how digital products fit into their lives. Then start telling people in your company about what you learned, even when they don't really want to hear it. It won't dismantle the system entirely, but it's a start.

THANKS FOR READING—SEE YOU NEXT WEEK

In the meantime, feel free to:

  1. Forward this email or share this link with your friends if you feel they need some links for thinks: https://www.linksforthinks.com/subscribe

  2. Reach out with any suggestions or questions for the newsletter.

  3. Send me some of your favorite links and takeaways.

Cheers, Jake