The Curation Crisis of the Early 21st Century
AI, engagement, populism, and the collapse of judgement
December 2025
It has become quietly difficult to find good things.
Not good in the sense of fashionable or viral, but good in the older sense: careful, thoughtful, well-made. Good writing. Good arguments. Good art. Good explanations. The kind of work that rewards attention rather than merely capturing it.
Many of us feel this daily, even if we struggle to articulate it. We search for an article and find a dozen near-identical summaries. We look for insight and instead wade through confident noise. We sense that something has shifted, but the shift is subtle enough to evade easy blame.
This essay is not an argument against artificial intelligence, nor against democracy, nor against the internet. It is an attempt to describe a structural problem that has been building across the first quarter of the twenty-first century — one that generative AI has accelerated — and to suggest that its consequences reach beyond culture and technology into politics, economics, and public life.
The problem is not that we are producing too much information.
It is that we no longer know how to curate it.
When friction disappeared
For most of human history, creation was expensive.
Writing required literacy, time, and materials. Publishing required printers, editors, and distribution. Broadcasting required licences and capital. Even the early internet retained a kind of friction: learning to code a website, maintaining a blog, building an audience slowly.
Friction was not merely an inconvenience.
It acted as a filter.
Across the 2000s and 2010s, that friction steadily declined. Digital distribution collapsed costs. Social platforms removed barriers to publication. By the early 2020s, generative models reduced the marginal cost of producing plausible text, images, and music to near zero.
This is an extraordinary technical achievement. But it carries an overlooked consequence: when creation becomes cheap, the bottleneck moves elsewhere.
It moves to attention.
It moves to trust.
It moves to judgement.
It would be disingenuous not to note a small irony here. This essay itself was written with the assistance of generative tools — tools that make drafting, revising, and synthesising easier than they have ever been. I am conscious that any critique of abundance risks being filed alongside the very work it questions. That tension is not accidental. It is the point. The problem is not that these tools exist or are used, but that we have not rebuilt the systems of judgement that allow good work — human, assisted, or otherwise — to be distinguished from the merely plentiful.
When engagement replaced judgement
As platforms scaled through the late 2000s and early 2010s, they faced a genuine problem: how do you decide what to show whom, at global scale and in real time?
The solution was understandable. Engagement is measurable. Clicks, shares, watch-time, comments, reactions — these are legible signals. They can be optimised, A/B tested, and refined continuously. Editorial judgement, by contrast, is expensive, subjective, and slow.
So engagement became a proxy for value.
The shift was gradual rather than sudden. In the early 2010s, Facebook moved from largely chronological feeds to algorithmic ranking, prioritising content that generated interaction rather than content selected by editors or curators [1]. This dramatically increased time-on-platform, but it also made distribution logic opaque.
As these systems grew, platforms began to weaken visible negative feedback — often for defensible, local reasons. Reddit, originally built around transparent upvotes and downvotes, progressively blurred exact vote totals to reduce brigading and harassment, diminishing the visibility of collective judgement at scale [2].
In 2019, Instagram began hiding public like counts, framing the change as a mental-health intervention [3]. While well-intentioned, the effect was to move evaluation from shared signals to private engagement metrics.
In 2021, YouTube removed public dislike counts, citing creator wellbeing and targeted harassment [4]. Internal signals remained, but a crowd-sourced quality indicator disappeared from the public interface.
Individually, each decision made sense. Collectively, they produced a one-sided feedback system: positive engagement increased reach; lack of engagement simply resulted in obscurity. The system did not ask whether something was accurate, careful, or original. It asked whether it travelled.
In such systems, confidence beats correctness, repetition beats originality, and emotional charge beats nuance. These are not moral failings of users; they are predictable outcomes of the optimisation target — and generative systems are exceptionally good at exploiting them.
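The dynamic can be made concrete with a deliberately crude simulation. In the toy model below, every item has an intrinsic quality and an emotional charge, drawn independently; engagement responds only to charge, and reach compounds with engagement while its absence merely leaves an item obscure. All parameters are illustrative assumptions, not a description of any real platform's ranking system.

```python
import random

random.seed(0)

# Toy model: each item has an intrinsic quality and an emotional charge,
# drawn independently. Engagement tracks charge, not quality.
items = [{"quality": random.random(), "charge": random.random(), "reach": 1.0}
         for _ in range(1000)]

for _ in range(50):  # repeated ranking rounds
    for item in items:
        # Probability of a click or share depends only on emotional charge.
        engagement = item["charge"] * random.random()
        # One-sided feedback: engagement multiplies reach, while its
        # absence merely leaves reach unchanged (obscurity, not penalty).
        item["reach"] *= 1.0 + engagement

top = sorted(items, key=lambda i: i["reach"], reverse=True)[:50]
avg_charge = sum(i["charge"] for i in top) / len(top)
avg_quality = sum(i["quality"] for i in top) / len(top)
print(f"top-50 mean charge:  {avg_charge:.2f}")   # typically far above the 0.5 baseline
print(f"top-50 mean quality: {avg_quality:.2f}")  # typically near the 0.5 baseline
```

After fifty rounds, the most-seen items are almost entirely high-charge; their quality is indistinguishable from chance, because the optimisation target never consulted it.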
Retrenchment, cost, and liability
The retreat from judgement did not begin with social media.
In the early commercial internet of the 2000s, infrastructure providers established a defining norm: they were conduits, not publishers. Hosting companies argued that they could not be responsible for content at scale, for both legal and economic reasons [5]. This distinction enabled rapid growth — and embedded a powerful incentive to distance oneself from judgement.
As social platforms expanded through the 2010s, they initially attempted to balance scale with oversight. Moderation teams grew. Community standards expanded. Editorial intervention was framed as necessary for safety and trust.
By the mid-2010s, the costs became visible. Human moderation did not scale linearly with content volume, and the psychological toll on moderators was widely documented [6]. At the same time, platforms faced asymmetric political risk: any intervention provoked accusations of bias, while non-intervention often passed unnoticed.
By the late 2010s, internal research at several platforms showed that engagement-driven ranking amplified outrage and polarisation — yet altering these systems risked user growth and revenue [7]. Judgement was costly; engagement was safe.
The early 2020s marked a turning point. Cost-cutting became a strategic priority across the technology sector. Following the 2022 acquisition of Twitter, large portions of its trust-and-safety organisation were dismantled, and content policies were repeatedly revised [8]. Elsewhere, moderation narrowed in scope rather than expanded.
Infrastructure providers articulated the logic openly. Companies such as Cloudflare argued that acting as arbiters of online speech was neither legitimate nor sustainable for neutral service providers [9].
This was not indifference. It was rational behaviour under competitive, legal, and political pressure.
The structural consequence was clear: as judgement retreated, engagement became the default sorting mechanism — not because it was better, but because it was cheaper, safer, and measurable.
From cultural slop to political slop
The political consequences of this shift emerged gradually across the 2010s.
As platforms matured, political communication adapted to their dynamics. Messages that travelled furthest were not those that survived careful scrutiny, but those that provoked reaction. Complexity, caveats, and institutional language performed poorly; simplicity, confidence, and emotional charge performed well.
In the United States, Donald Trump’s 2016 campaign demonstrated how saturation coverage and platform-native rhetoric could dominate attention while bypassing traditional expert mediation [10]. Dismissals of “so-called experts” aligned neatly with an environment in which expertise was poorly signalled.
In the United Kingdom, the 2016 Brexit referendum followed a similar pattern. The claim that leaving the EU would free £350 million per week for the NHS was repeatedly challenged by economists and civil servants, yet persisted precisely because correction did not impede its spread [11].
These dynamics diffused. In Brazil, Jair Bolsonaro rose to power in 2018 while openly rejecting environmental and public-health expertise — a stance that intensified during COVID-19 [12]. In Hungary, Viktor Orbán framed independent media, universities, and experts as hostile elites throughout the late 2010s [13].
The common thread is not error, but indifference to correction. Institutional rebuttal no longer functioned as a braking mechanism because the channels that once enforced epistemic discipline had eroded.
Democracy, filtered through engagement-driven systems, increasingly resembled the feed.
Why this moment is different
Information overload is not new. The printing press, newspapers, and the early web were all accused of overwhelming readers.
The present moment differs in three ways.
First, generative systems produce plausibility at scale. Fluency, confidence, and structure — once costly signals of expertise — are now abundant.
Second, provenance has collapsed. Style no longer implies authorship. Authority can be convincingly imitated. Even academic systems have begun ingesting hallucinated citations [14].
Third, feedback loops have become recursive. Models are increasingly trained on synthetic output. Errors propagate. Mediocrity reinforces itself. The median quality declines even as the best work still exists — harder to find, less well signalled.
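This third point, the recursive loop, can be illustrated with a minimal sketch. In the toy version below, generation zero is a rich distribution over a hundred "token types"; each later generation is trained only on a finite sample of the previous generation's output. A type absent from any sample can never reappear, so diversity can only shrink. The vocabulary size and sample size are illustrative assumptions, not measurements of any real model.

```python
import random

random.seed(2)

# Toy recursion: each generation is trained only on a finite sample of
# the previous generation's output. Types missing from a sample are gone
# for good, so the support of the distribution shrinks monotonically.
VOCAB_SIZE, SAMPLE_SIZE, GENERATIONS = 100, 80, 50
population = list(range(VOCAB_SIZE))  # generation 0: every type present

support_sizes = [len(set(population))]
for _ in range(GENERATIONS):
    sample = [random.choice(population) for _ in range(SAMPLE_SIZE)]
    population = sample  # the next generation sees only synthetic output
    support_sizes.append(len(set(population)))

print(f"distinct types, generation 0:  {support_sizes[0]}")
print(f"distinct types, generation 50: {support_sizes[-1]}")
```

The mechanism is one-way by construction: sampling can lose a type but never recover one, which is the toy analogue of models ingesting their own narrowing output.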
Earlier information revolutions increased volume.
This one collapses provenance.
Concentration, not democratisation
Lowering the barriers to creation was long expected to democratise success. It did not.
From the early 2000s onward, social and economic inequality continued to rise, driven by globalisation and digital scale. Engagement-driven discovery systems did not counteract this trend; they reinforced it.
In music, streaming dramatically expanded access while concentrating income. By the early 2020s, industry analyses showed that the top 1% of artists captured roughly 85–90% of streaming revenue, up from an estimated 60–65% in the CD era [15]. The middle hollowed out.
At the macro level, the pattern recurs. In the United States, the share of total wealth held by the top 1% rose from roughly 30% in 2000 to around 36–37% by 2024 [16].
Equality of access did not produce equality of outcome. In engagement-optimised environments, abundance increases competition faster than opportunity. Attention compounds like capital.
Creation was democratised.
Success was further concentrated.
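The claim that attention compounds like capital can be illustrated with a standard Gibrat-style multiplicative-growth sketch: creators start equal, and each period their attention is multiplied by an independent random growth factor. This is a toy model, not the industry data cited above, and every parameter below is an illustrative assumption.

```python
import math
import random

random.seed(1)

# Gibrat-style sketch: 1000 creators start with equal attention; each
# period, attention compounds by an independent random growth factor,
# the way returns compound on capital. Parameters are illustrative.
N, PERIODS, SIGMA = 1000, 50, 0.5
attention = [1.0] * N

for _ in range(PERIODS):
    attention = [a * math.exp(random.gauss(0.0, SIGMA)) for a in attention]

attention.sort(reverse=True)
top_1pct_share = sum(attention[: N // 100]) / sum(attention)
print(f"share of attention held by the top 1%: {top_1pct_share:.0%}")
```

No creator is more talented than any other in this model; pure compounding under equal starting conditions is enough to hand the top 1% a large majority of the total.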
This is why curation matters. Without mechanisms that slow feedback, privilege durability over virality, and allow judgement to accumulate, abundance produces pyramids rather than ecosystems.
Choosing friction — without romanticising it
Global platforms optimised for convenience, price, and scale are often extremely good at what they do. The problem is not convenience itself. It is forgetting that convenience always encodes a trade-off.
What the last two decades have shown — repeatedly and empirically — is that high-quality curation requires some form of friction: pacing, contribution costs, visible judgement, or delayed reward. This friction need not be elitist. In the best cases, it is simply the cost of taking ideas seriously.
Sites like Stack Overflow have demonstrated, at enormous scale, that expertise can be surfaced without central authority. Contributors do not self-elect as experts; credibility is earned through peer review, visible correction, and accumulated track record. Answers rise because they withstand scrutiny, not because they provoke engagement.
Similarly, LessWrong has shown that long-form, high-signal discourse can persist online when norms are explicit and judgement is layered. Disagreement is expected. Revision is normal. The result is not consensus, but epistemic progress.
Wikipedia remains one of the most successful knowledge projects in history precisely because it combines openness with rigorous process: citation requirements, edit histories, talk pages, and role-based moderation. Anyone may contribute; not everything is accepted.
These systems are not perfect, and many are now under pressure from generative AI. But they prove something important: curation does not require technocracy. It requires clear norms, earned credibility, and friction that serves a purpose.
Defending judgement as infrastructure
Rebuilding curation ultimately means defending the institutions that absorb complexity on society’s behalf.
Trust in bodies such as the CDC and NIH declined during COVID-19 — not because uncertainty vanished, but because uncertainty was mistaken for incompetence. Universities, civil services, and statistical offices have similarly been reframed as ideological actors rather than procedural ones.
High-quality judgement is expensive. It requires skilled labour, time, and tolerance for disagreement. Smaller institutions feel cheaper in the short term, but externalise costs that return later — in poor policy, avoidable crises, and reactive governance.
What this asks of individuals is modest but real: subscribing rather than scrolling, tolerating slowness, returning to sources that correct themselves, and supporting institutions even when they deliver uncomfortable answers.
This is not nostalgia.
It is maintenance.
And maintenance, unlike virality, only reveals its value when it is missing.
References
[1] Facebook, "News Feed FYI", 2013.
[2] Reddit admins, "Vote Fuzzing Explained", 2015.
[3] Instagram, "Hiding Like Counts", 2019.
[4] YouTube, "Update to Dislike Counts", 2021.
[5] Electronic Frontier Foundation, "Intermediary Liability", 2004.
[6] Newton, "The Trauma Floor", The Verge, 2019.
[7] Reporting on Facebook internal research, 2021.
[8] Reporting on Twitter trust-and-safety layoffs, 2022.
[9] Cloudflare Blog, "Why We Terminated Daily Stormer", 2017.
[10] US election media analyses, 2016.
[11] UK Statistics Authority rulings, 2016–2017.
[12] WHO and Brazilian public-health reporting, 2020.
[13] EU media-freedom reports, 2015–2020.
[14] Academic-publishing analyses of AI-generated citations, 2023.
[15] IFPI streaming-revenue analyses, 2000s–2020s.
[16] Federal Reserve wealth-distribution data, 2000–2024.