When Human Bias Meets Machine-Scale Amplification

Our biases and habits — especially confirmation bias and the pull of convenience — become risk multipliers when amplified by AI systems designed for speed, fluency and engagement.

March 5, 2026
Cornelia Walther
Mis- and Disinformation in a Hybrid Era

We no longer inhabit an information environment shaped primarily by human exchange. This new era is defined by a hybrid system in which natural intelligence (NI) and artificial intelligence (AI) continuously interact — sometimes productively, often problematically — to shape what we see, believe and share. In this environment, misinformation and disinformation are not anomalies. They are emergent properties of a system where human cognitive limits meet machine-scale amplification.

Misinformation and disinformation have existed for as long as humans have told stories. What has changed is not intent but intensity: speed, reach, personalization and persistence. The deeper causes lie less in the technology itself than in the interaction between how humans think and how AI systems optimize.

Hybrid Intelligence and the Structure of Human Sense-Making

NI unfolds across four interconnected dimensions at the individual level — aspirations, emotions, thoughts and sensations — and across collective levels where individuals are embedded in communities, countries and the planet. AI, by contrast, operates through pattern recognition, probabilistic inference and optimization toward selected objectives such as relevance, efficiency or engagement.

In a hybrid system in which humans and machines interact and co-evolve, holistic thinking becomes essential. Human-generated data trains AI models; AI-generated outputs shape human perception, emotion and judgment — and this feedback loop gradually reshapes norms and behaviours. That brings risks. AI is not “smarter” than humans, but because it systematically exploits predictable features of human cognition, it creates fertile ground for large-scale misinformation and disinformation.

Human cognition evolved for survival under conditions of scarcity and immediacy — not for navigating global information flows. We are meaning-makers, not neutral processors of evidence. That makes us efficient but also vulnerable when exposed to systems that reward speed, certainty and emotional resonance.

Cognitive Shortcuts as Structural Vulnerabilities

One of the most robust findings in cognitive science is that humans default to mental shortcuts when faced with complexity. We conserve cognitive energy by relying on heuristics rather than sustained deliberation. In everyday life, this is adaptive. In AI-mediated environments, from social media to shopping, it becomes a liability.

Confirmation bias, the tendency to favour information that aligns with existing beliefs, shapes what we notice, remember and share. On its own, confirmation bias is manageable. However, when it is paired with recommendation systems that learn what keeps users engaged, it becomes a powerful amplifier.

A practical example: consider a user searching for information about a contested public issue, such as vaccination or migration. If early clicks signal interest and emotional engagement, algorithms respond by offering increasingly similar content — often more extreme or emotionally charged, because it sustains attention. Over time, the user’s information environment narrows, not because they actively reject alternative views, but because the system quietly optimizes away any friction.
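To see how quietly this narrowing happens, consider a toy simulation. The Python sketch below is a deliberately simplified illustration of the logic, not any real platform's algorithm; the topic labels, click probabilities and update rule are all invented for the example. A recommender that merely upweights whatever gets clicked, combined with a mild human preference for charged content, ends up serving mostly charged content.

    import random

    # Toy model of the engagement feedback loop described above
    # (hypothetical parameters; not any platform's actual algorithm).
    TOPICS = ["balanced", "mildly partisan", "partisan", "charged", "extreme"]
    weights = [1.0] * len(TOPICS)                 # recommender starts neutral
    click_prob = [0.30, 0.40, 0.50, 0.60, 0.70]   # assumed: charged content engages more

    random.seed(42)
    for _ in range(5000):
        # the recommender samples an item in proportion to its learned weights
        item = random.choices(range(len(TOPICS)), weights=weights)[0]
        # the user clicks with a topic-dependent probability
        if random.random() < click_prob[item]:
            weights[item] *= 1.01                 # clicked items get shown more often

    total = sum(weights)
    for name, w in zip(TOPICS, weights):
        print(f"{name:>15}: {w / total:.1%} of future recommendations")

No one in this loop chooses extremity; the skew emerges from the multiplicative update rule alone, which is precisely the point.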

Empirical evidence shows that false information spreads faster and farther online than accurate information, largely because it evokes stronger emotional reactions. AI does not create this preference, but it does facilitate and operationalize it at scale.

Emotion, Identity and Narrative Pull

Misinformation and disinformation thrive because they feel true. They resonate with aspirations, validate emotions and reinforce identity. In moments of uncertainty, humans gravitate toward narratives that simplify complexity, assign blame or restore a sense of control.

AI systems trained on engagement signals learn this logic quickly. They surface content that aligns with users’ perceived values and fears, creating informational environments that feel coherent while also becoming increasingly detached from shared reality.

A concrete illustration can be found in crisis situations. During public health emergencies, emotionally framed claims, whether alarmist or dismissive, often circulate more widely than careful scientific updates. Information overload that mixes accurate and false claims makes it harder for individuals to identify reliable guidance, weakening trust and decision making. An infodemic — defined by the World Health Organization as too much information, including false or misleading information, in digital and physical environments — causes confusion and risk-taking behaviours that can harm health. Examples abound: most recently, misleading claims linking Tylenol taken during pregnancy to autism in children, alongside persistent denial of climate change and of the role fossil fuels play in driving it.

At the individual level, emotions drive attention. At the community level, shared narratives harden group boundaries. At the national level, misinformation shapes policy debates and electoral outcomes. At the planetary level, distorted narratives delay collective responses to long-term risks such as climate change.

Algorithmic curation can intensify these dynamics even without explicit intent to polarize. The result is narrative gravity: once people fall into tightly reinforcing belief ecosystems, correction feels less like clarification and more like a threat.

Convenience as a Catalyst

Another underappreciated driver of misinformation and disinformation is habit. Humans are drawn to the path of least resistance. AI systems that summarize, autocomplete, recommend or generate content reduce cognitive effort — and with it, critical engagement.

Language models can produce fluent, confident explanations that sound authoritative even when they are incomplete or wrong. Research shows that people tend to overtrust confident AI outputs, especially when they lack domain expertise.

A practical example: professionals using AI tools to draft briefings or summaries may skip verification because the output appears coherent, and time pressure rewards speed. Over time, this can normalize the circulation of subtly inaccurate information — not out of malice but convenience. Sensations of ease replace signals of epistemic caution.

This effect compounds in organizational settings. When AI-generated summaries become standard inputs for meetings or decisions, errors propagate silently. Responsibility becomes diffused: no single person feels accountable for checking what “the system” produced.

Aspirations Quietly Reshaped

Misinformation and disinformation distort facts; worse, they influence aspirations and gradually erode even deeply held values and perceptions. Repeated exposure to curated narratives subtly shifts what people perceive as normal, desirable or inevitable. Our perspective shifts without any deliberate, conscious decision to change.

For example, constant exposure to polarized or cynical content can normalize distrust and disengagement. AI systems, trained on what captures attention, may overrepresent conflict and outrage because these emotions drive interaction. Over time, this reshapes collective expectations about politics, institutions and even human nature.

At the planetary level, the consequences are multidimensional. When long-term challenges are consistently framed as unsolvable, exaggerated or conspiratorial, motivation for collective action erodes — even when scientific consensus is strong — and individual agency dissolves as the feeling of helplessness is compounded by the ease of accepting the prevailing narrative.

From Individual Bias to Collective Fragility

What begins as a cognitive shortcut at the individual level can scale into systemic risk. When millions of people interact with AI systems that reinforce bias, reward emotional reactivity and obscure uncertainty, collective sense-making degrades.

Such erosion has measurable consequences. Polluted information environments undermine trust in institutions, weaken social cohesion and impair crisis response. Analyses of contemporary information ecosystems show that disinformation reduces societies’ capacity to coordinate under stress and to maintain shared factual baselines.

The opportunity before us is to intentionally design hybrid intelligence systems that reinforce and elevate natural cognitive capacities. When thoughtfully configured, human-AI interaction can act as a stabilizing force, helping people recognize bias, slow judgment and expand perspective. This requires a double shift in design priorities: from optimizing for attention and engagement to supporting agency and cognitive autonomy, and from frictionless interaction to purposeful pause.
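Continuing the earlier toy example, here is a minimal sketch of what such a double shift could mean inside a ranking pipeline. Both mechanisms are illustrative design ideas rather than any existing product feature, and every parameter, threshold and message below is invented for the example: a diversity term prevents any viewpoint from being optimized out of the feed, and a deliberate pause interrupts frictionless sharing of charged content.

    import random

    # Hypothetical sketch of the "double shift" applied to the earlier toy
    # recommender: blend engagement with diversity, and add a purposeful pause.
    TOPICS = ["balanced", "mildly partisan", "partisan", "charged", "extreme"]

    def rerank(weights, diversity=0.5):
        # blend learned engagement weights with a uniform distribution so
        # that no topic can be optimized entirely out of the feed (assumed)
        total = sum(weights)
        return [(1 - diversity) * w / total + diversity / len(weights)
                for w in weights]

    def serve(item, charged_threshold=3):
        # insert a deliberate pause before highly charged items (assumed UX)
        if item >= charged_threshold:
            print(f"Pause: '{TOPICS[item]}' content. Check the source before sharing?")
        return TOPICS[item]

    engagement_weights = [1.0, 1.2, 1.8, 3.5, 6.0]  # a feed skewed by past clicks
    adjusted = rerank(engagement_weights)
    chosen = random.choices(range(len(TOPICS)), weights=adjusted)[0]
    print("Served:", serve(chosen))

The design choice matters: the diversity blend trades a little engagement for breadth, and the pause reintroduces exactly the friction that pure engagement optimization removes.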

These shifts do not deny limitations. Humans will always bring bias; AI systems will always reflect their data and objectives. The task is not to eliminate these constraints, but rather to orchestrate their interaction so that each compensates for the other’s weaknesses.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Cornelia C. Walther is a senior fellow at CIGI, the Sunway Centre for Planetary Health, the Wharton Neuroscience Initiative/Wharton AI & Analytics Initiative and the Harvard Learning and Innovation Lab, as well as an adjunct associate professor at the School of Dental Medicine at the University of Pennsylvania.