Apart from being generally chronically online on political X/Bluesky/TikTok/Instagram, I also study science disinformation for a living, which means I spend a lot of time reading things that aren’t true. More specifically, I spend time reading the corrections: the careful, methodical, often thankless articles that community media organizations publish after a manipulated video has already been seen by millions. I am currently working on a paper that examines exactly this structure: how the Indian news organization AltNews debunks video disinformation, and what the rhetorical shape of that work reveals about the challenge of fighting falsehoods at scale.
India is, according to the World Economic Forum, the country most susceptible to large-scale disinformation in the world, with 750 million internet users, 22 constitutionally recognized languages, a WhatsApp-dominated information ecosystem that is largely invisible to automated detection systems, and a political environment in which viral video disinformation is regularly amplified by mainstream media, fringe actors, and national political parties. In this context, how fact-checkers write (not just what they correct, but how they construct their corrections and the knowledge ecosystems they engage with) warrants deeper examination.
The cognitive trap that makes video disinformation so effective
“Seeing is believing” is a popular folk saying. It also describes a genuine, well-documented cognitive bias: visual evidence carries a persuasive weight that text cannot replicate. My paper draws on Fazio et al.’s (2015) finding that knowledge does not reliably protect against illusory truth; even informed, attentive readers can be swayed by false information that looks credible. Video disinformation exploits this bias with particular efficiency, because the clip itself functions as apparent evidence. You don’t need a caption to believe what you watched.
But there is a second, less obvious problem. As institutional trust erodes and awareness of manipulation grows, audiences become susceptible not only to believing false videos, but to disbelieving real ones. The disinformation ecosystem, at its most corrosive, does not merely insert falsehoods; it destabilizes the category of visual truth itself.
The anatomy of a debunk
My analysis of 150 AltNews video debunking articles reveals a consistent rhetorical structure that departs from the conventions of mainstream journalism. Where standard news follows an “inverted pyramid,” placing the most important information first and subsequent information in decreasing order of priority, debunking articles are circular. The headline announces a verdict; the final line confirms it, but this time as a logical conclusion earned through evidence. The piece begins and ends with the same claim, but the reader arrives at the ending differently than they arrived at the beginning.
“Debunking arguments do not show their target beliefs to be false but rather undermine the justification a subject may have for holding them.” — Hanno Sauer (2018)
AltNews is not simply telling readers that a video is fake. It is systematically dismantling the reasons a reader might have believed it: the political authority of the person who shared it, the apparent plausibility of its imagery, or the emotional register in which it circulated. In a way particularly relevant to COVID-19 scientific misinformation, each article acknowledges the overwhelming scale of the disinformation (for instance, by alluding to its “virality”) as well as the fear and confusion of the social circumstances surrounding it. The correction is not a verdict delivered from above; it is an argument the reader is invited to construct alongside the fact-checker, building collective capacity to discern.
The lead paragraph of each article enacts this invitation with notable rhetorical precision. It uses passive voice and hedged language: “A video in which a woman is seen lying in the bushes is doing the rounds on social media.” The word “seen” does careful work: it acknowledges what the reader has probably watched, validating their experience while withholding editorial endorsement of the content. In the terms of rhetoric scholar Kenneth Burke, this is an act of identification: establishing consubstantiality with the audience before introducing dissonance. This identification, a recognition of the reader’s circumstance, allows the reader to be persuaded in the direction of the truth, which is stated first in the headline and repeated at the end.
The pedagogical burden of video verification
What distinguishes video fact-checking from other forms of debunking is the technical weight it carries. To verify a manipulated clip, AltNews staff extract keyframes, run reverse image searches, conduct metadata forensics, and deploy AI-detection tools. Each of these methods must then be explained plainly, with screenshots and links to original sources, to a general readership. The articles are, in this sense, simultaneously corrections and tutorials. By modeling the process of finding out, AltNews both corrects the misinformation and provides technical clarity about how information is produced and distributed, attempting to build readers’ own skill in verifying images and videos.
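One of these techniques, reverse image search over extracted keyframes, typically rests on perceptual hashing: near-duplicate frames produce nearby bitstrings even after a clip has been recompressed or resized. A minimal sketch of the idea (my own illustration, not AltNews’s actual tooling) using a simple average hash on a frame that has already been downsampled to a small grayscale grid:

```python
def average_hash(pixels):
    """Compute a simple average hash from a 2D grid of grayscale values.

    Real pipelines first downsample a video frame to a tiny grayscale
    grid (e.g. 8x8); here `pixels` is assumed to already be that grid.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each pixel brighter than the mean becomes a 1-bit; the resulting
    # bitstring is the frame's perceptual fingerprint.
    return "".join("1" if p > mean else "0" for p in flat)


def hamming_distance(a, b):
    """Count differing bits; a small distance suggests near-duplicate frames."""
    return sum(x != y for x, y in zip(a, b))


# Two slightly different encodings of the same scene hash identically...
frame_a = [[10, 200], [200, 10]]
frame_b = [[12, 198], [201, 9]]
# ...while an unrelated frame lands far away in Hamming space.
frame_c = [[200, 10], [10, 200]]

print(hamming_distance(average_hash(frame_a), average_hash(frame_b)))  # 0
print(hamming_distance(average_hash(frame_a), average_hash(frame_c)))  # 4
```

Production tools use more robust DCT-based hashes, but the principle is the same: compare fingerprints rather than raw pixels, so that reverse image search survives the lossy re-uploads through which viral clips travel.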
This “show your work” norm is a deliberate strategy for building what Dourish and Bellotti (1992) call awareness: the sense that one understands not just an outcome but the process that produced it. For debunking, transparency about method is the mechanism by which readers are gradually equipped to verify things themselves.
Why human-centered collaboration is the only viable response
AltNews’s staff of journalists, scientists, engineers, OSINT specialists, social activists, and people working at the intersections of those roles functions as a distributed, multidisciplinary verification network. India’s disinformation ecosystem is, as Starbird, Arif, and Wilson (2019) demonstrate, fundamentally collaborative: coordinated networks of accounts, platforms, and political organizations working in concert to amplify false narratives. Automated platform moderation, such as the automated content-moderation systems of Facebook and X, has proven structurally inadequate to this complexity, particularly given India’s linguistic diversity and the closed architecture of WhatsApp groups.
What is needed, and what AltNews partially models, is a collaboratively informed approach: human-centered, context-specific, and built for heterogeneity rather than scale. The fact that this work runs on donations, in a country where disinformation reaches hundreds of millions, tells us something important about where the gaps in our collective response still lie.