AI Didn’t Start the Fire: How Stack Exchange Moderators and Users Demonstrate Exit, Voice, and Loyalty

Timeline-style diagram showing how historical tensions between the Stack Exchange (SE) community and the platform operator (SE, Inc.), along with strike-related events, align with the community’s grievances, their actions, and our theoretical interpretations of loyalty, voice, and exit.

Generative AI technologies rely on content from knowledge communities as their training data. However, these communities receive little in return and instead experience increasing moderation burdens imposed by an influx of AI-generated content. Moreover, as platform operators sell their content to AI developers whose products may substitute for their work, these communities see a decrease in web traffic and new content and struggle with maintaining the vibrancy of their knowledge repositories. According to The Pragmatic Engineer, a prominent technology newsletter covering software engineering, traffic on Stack Overflow has declined so dramatically, driven largely by generative AI, that the platform now generates roughly as much new content as it did when it launched in 2008.

Even before AI technologies posed new threats, relationships between online communities and their host platforms were often uneasy. Past research on platforms such as Reddit, Stack Exchange, Tumblr, and DeviantArt reveals a recurring pattern: when platform policies conflict with community values, communities tend to push back. Community members have organized blackouts, suspended moderation, or migrated to alternative platforms altogether. However, less understood is how these conflicts unfold over time, especially in the context of generative AI. So how do knowledge contributors resist AI-related policies that conflict with their values? And what happens in the aftermath of such collective action, especially for a community’s governance, including how rules are set, whose voices are recognized, and how participation is enabled?

To answer these questions, we examined a major conflict between SE, Inc. and the community that erupted in 2023 around issues arising from the release of LLMs. Drawing on a qualitative analysis of over 2,000 messages posted on Meta Stack Exchange (the Stack Exchange site designated for policy discussions), as well as interviews with 14 community members, we traced how this conflict emerged, escalated, and evolved. What we found was not a sudden backlash driven solely by AI, but the accumulation of long-standing grievances.

According to our interviews, SE community members described years of frustration over declining transparency, accountability, and participatory governance. Although the platform historically supported community self-regulation through mechanisms such as moderator elections and shared moderation responsibilities for users with high reputation, community members increasingly perceived that key decisions were being made by SE, Inc. without meaningful community input. Tensions escalated when SE, Inc. introduced policies related to AI-generated content without consulting moderators or contributors, which many interpreted as a continuation of long-standing exclusion and disregard. In response, moderators and contributors coordinated collective action by suspending moderation activity, signing public petitions, and posting ongoing updates to discussions on Meta Stack Exchange. Some also chose to exit the platform, migrating to alternative spaces such as Codidact, an open-source, community-governed platform. The collective action was organized through a tiered communication structure, beginning within a small, closed group of moderators and then spreading across the network’s users.

We interpret our findings through the lens of Albert O. Hirschman’s Exit, Voice, and Loyalty framework. According to Hirschman, when loyalty toward an organization declines, members have two options for expressing their dissatisfaction: exit or voice. In the Stack Exchange case, loyalty had already degraded due to the accumulation of unresolved grievances rather than a single triggering event. As community members came to believe that their voices were no longer heard, dissatisfaction manifested in two distinct responses: coordinated collective voice through organized resistance, and exit through permanent disengagement from the platform. This pattern highlights how governance crises can emerge even in platforms that formally support community self-regulation, and how declining loyalty can transform routine disagreement into large-scale collective action or exit.

In retrospect, the Stack Exchange strike highlights a broader lesson: community grievances around AI are not just technical disputes, but reflect deeper governance conflicts over the relationship between platforms and the communities that sustain them. Thus, managing these crises requires more than better moderation tools or more transparent AI policies. Platforms and big tech companies need to support participatory governance more systematically. For example, platforms could create mechanisms for effective voice by committing to agreements that give community input a real role in shaping decision-making. Another possible solution is credible exit, where contributors have alternatives if governance on the original platform fails. When communities can leave without their data being locked in, platforms are more likely to listen. Credible exit not only empowers communities, but also reduces long-term governance risks for platform operators. Conflict is expensive for platforms, and maintaining loyalty requires long-term investment in moderation, communication, and policy enforcement. Conversely, when users have functional alternatives, the possibility of exit can function as a self-binding mechanism that moderates platform behavior and mitigates costly disputes. And when platforms bind themselves to community accountability, conflicts are less likely to escalate into strikes in the first place.

In conclusion, the SE moderation strike was not a sudden backlash driven solely by AI, but the accumulation of long-standing grievances. As generative AI continues to reshape the internet, the future of knowledge production will depend not only on what AI can generate, but also on whether volunteer contributors who built our shared knowledge commons are given the right to decide what comes next. We need to institutionalize participatory governance with binding mechanisms and create more credible exit options for communities to sustain this future.

Why do people participate in similar online communities?

Note: We have fallen behind on publishing blog posts about academic papers over the past few years. To ensure that my blog contains a more comprehensive record of our published papers and to surface them for folks who missed them, I will be periodically publishing blog posts about some “older” published projects.

It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).

We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:

  1. The ability to connect to specific information and narrowly scoped discussions.
  2. The ability to socialize with people who are similar to themselves.
  3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.


This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.

This post was first published on Benjamin Mako Hill’s blog copyrighteous.

CDSC at the NCA 2025

CDSC members will be presenting at this year’s National Communication Association (NCA) Convention in Denver! You are warmly invited to join CDSC members during our talks and other scheduled events. Please come say hi!

Check out group members attending and what research they’ll be sharing:

Dyuti Jha: Dyuti will be presenting her paper titled “Mapping the Digital Life of Caste-Based Hate Speech” on Thursday, November 20th from 11:00 AM to 12:15 PM, discussing how caste-based hate speech is moderated at the subreddit level in the absence of Reddit’s acknowledgement of caste as a system of discrimination. The project seeks to understand how moderators in South Asian subreddits such as r/India and r/UnitedStatesofIndia identify caste-based hate speech; how, if at all, they moderate it; whether the caste composition of these subreddits’ moderator teams influences decisions about whether and how to moderate such speech; and how they do so without any platform-level guidelines. This project highlights a severely under-researched area of caste in computing, particularly how issues of identity bolster or hinder a community’s inclusivity.

Maddie Douglas: Maddie will be part of the Digital Rhetorics and Contemporary Media panel (Rhetorical and Communication Theory Division) on Friday, November 21st from 8:00 AM – 9:15 AM presenting a full paper titled “Strategic Ambiguity in the Modern Digital Age: Polysemy, Controversy, and AI Hype in the ‘Pause Giant AI Experiments’ Open Letter.” Maddie’s paper conducts a close reading of an “AI ethics” open letter that was spread shortly after GPT-4’s release, viewing it as a strategy to benefit AI investors and amplify hype. This reading makes a case for redefining “strategic ambiguity” from Leah Ceccarelli’s 1998 definition to include polysemy that achieves criticism (as well as praise) from audiences.

Hsuen-Chi (Hazel) Chiu: Hazel will be presenting her paper “AI Companions and the Illusion of Privacy: When Social Connection Meets Data Exposure” on Thursday, November 20th from 2:30 – 3:45 PM. She’ll discuss how users manage privacy when forming emotional relationships with AI companions. Drawing on interviews with 15 users of Replika and Character.AI, her study shows that people often treat these chatbots like trusted confidants while simultaneously worrying about how companies might use their data. Using Communication Privacy Management theory and the horizontal/vertical privacy framework, she highlights how users negotiate this tension between intimate disclosure and institutional surveillance. Her findings point to the need for more transparent, user-centered privacy design in emotionally supportive AI systems.

Srish Chatterjee: Srish will be at a day-long pre-conference called ‘Conspiratorial Economies,’ where they’ll present a full paper called “The Invisible Hand: Rhetorical Patterns in Conspiracy Theories on Technology’s Ubiquity.” Srish’s paper examines the rhetorical patterns of technological conspiracy theories and how they function as sophisticated folk epistemology that, while often factually inaccurate, articulate legitimate public anxieties about agency, surveillance, and corporate power in complex digital societies.

Welcome new student members of the CDSC!

Most years, the CDSC is lucky enough to recruit some amazing new Ph.D. students to the lab. This fall is no exception and we are thrilled to welcome an extraordinary group across several of our group’s campuses. The students join us from a wide variety of places, backgrounds, and prior affiliations (which should be encouraging for any prospective students looking to join the group in the future!). Some short bios and photos follow below (in alphabetical order by last name) with the text largely taken from the people page on our wiki. You can look forward to reading more about their research in the coming years!

Eric Fassbender is a first year PhD student in the Media Technology and Society program at Northwestern University. He is interested in studying technology adoption as an expression of resistance and protest. He is currently researching the ways that people form decisions to leave online groups around issues of surveillance and political alignment. You can learn more about him on his website here or on mastodon here. Outside of work, Eric loves reading sci-fi, all things cyberpunk, and hiking to improve his landscape photography.

Jonghyun Jee (pronounced Jong-H-yuh-n, not Hyoon) is a first-year PhD student in the Media, Technology, and Society program at Northwestern University. He studies how online communities create and enforce their rules. His research has looked at a range of platforms, from established ones like Wikipedia, YouTube, and Discord to decentralized networks such as Bluesky and Mastodon. Lately, Jonghyun has been exploring how to use LLMs to simulate these social environments at scale. He’s driven by the belief that critiques of technology (even dystopian ones) are less calls for its undoing than invitations to reimagine it. When Jonghyun procrastinates, he practices zen meditation and writes short film synopses.

Manish Kumar is a first-year PhD student at the School of Information at the University of Texas at Austin advised by Dr. Nathan TeBlunthuis and Dr. Edgar Gómez Cruz. Manish’s work explores political expression on social media and how it connects to people’s offline relationships. He’s fascinated by human experiences at scale, and tries to bridge qualitative inquiry with computational techniques to capture that complexity. Broadly, Manish studies how social media/technology becomes woven into people’s everyday political sensemaking. Manish grew up in Patna, India. He earned a degree in Information Science & Engineering and spent a few years as a software developer, but has always been drawn to the sociological side of technology. That curiosity took Manish to UC Berkeley for a Master’s of Information Management and Systems, where he discovered research and got completely hooked. In his free time, Manish likes to do nerdy stuff like reading historical fiction, going on walking tours, learning about local history (that includes the petty neighbourhood rivalries) and going to museums.

Jianghui Li is a first-year PhD student at the University of Texas at Austin’s School of Information, advised by Dr. Nathan TeBlunthuis. He is interested in researching belief dynamics, collective behavior, and sustainability in sociotechnical systems through the lens of complex adaptive systems. Before studying at UT Austin, Jianghui earned bachelor’s and master’s degrees at Syracuse University’s School of Information Studies, and he misses the cool Syracuse weather. Outside of research, Jianghui enjoys fishing, learning about fish, and sometimes thinking about the similarities between ecological systems and human networks.

Dylan Smith is a first-year MA/PhD student at University of Washington—Seattle in Communication. They grew up in Portland, Oregon and got a bachelor’s degree in Computer Science at Carleton College in Minnesota. Dylan’s research interests are in online interpersonal communication and online governance. For the past few years, Dylan has been working on a research project studying Wikipedia’s arbitration process. In their free time, Dylan likes reading fiction, spending time with friends, hiking, and long-distance running. Last Spring, Dylan ran their first marathon!!!

Ran Tang is an MA/PhD student in the Department of Communication at the University of Washington. Her research focuses on the moderation of online communities. She primarily uses qualitative methods to study the daily work of volunteer moderators, and is also exploring the use of quantitative approaches in future projects. In her free time, Ran enjoys playing table tennis and swimming.

Yiwei Wu is a first-year PhD student at UT Austin. Previously, she attended the University of Washington for her bachelor’s degree. Her research interests include online collective action, peer production, and community data governance. In her free time, Yiwei enjoys baking, playing musical instruments (bass and Chinese flute), and playing farming games (e.g., Stardew Valley).

Science of Community Dialogue: The Impacts of Organizational Interventions in Open Source Software Engineering

This dialogue will take place on November 7th at 12pm CT and will explore how free/libre and open source software (FLOSS) projects adapt their work processes to recruit new contributors and build the project communities that they want, and how FLOSS projects redesign collaboration processes within different environments and moments in project lifecycles. Professor Igor Steinmacher (Northern Arizona University) will be joining Matt Gaughan (Northwestern University) to present recent research on topics including:

  • Documentation practices in FLOSS projects
  • Disconnect between guidelines and reality
  • Sustainability challenges in FOSS communities
  • Community-based governance redevelopment
  • Structural shifts for long-term health

The full session description is on our website. Register online.

What is a Dialogue?

The Science of Community Dialogue Series is a series of conversations between researchers, experts, community organizers, and other people who are interested in how communities work, collaborate, and succeed. You can watch this short introduction video with Aaron Shaw.

What is the CDSC?

The Community Data Science Collective (CDSC) is an interdisciplinary research group made up of faculty and students at the University of Washington Department of Communication, the Northwestern University Department of Communication Studies, the Carleton College Computer Science Department, the School of Information at UT Austin, and the Purdue University School of Communication.

Learn more

If you’d like to learn more or get future updates about the Science of Community Dialogues, please join the low volume announcement list.

Community Dialogue – AI Boundaries: Refusal and Privacy

Join the Community Data Science Collective (CDSC) for our 12th Science of Community Dialogue! This Community Dialogue will take place on October 17th, 2025 at 12:00 pm CT. This dialogue explores how companion chatbots invite deep emotional disclosure while raising concerns about data privacy—and how some communities are pushing back through AI refusal. Professor Jasmine McNealy (University of Florida) will join Hsuen-Chi (Hazel) Chiu (Purdue University) to present recent research on topics including:

  • Emotional disclosure in chatbot interactions
  • Data privacy and AI refusal
  • Designing emotionally intelligent, boundary-aware AI
  • Cultural implications of opting out of AI companionship

The full session description is on our website. Register online.

What is a Dialogue?

The Science of Community Dialogue Series is a series of conversations between researchers, experts, community organizers, and other people who are interested in how communities work, collaborate, and succeed. You can watch this short introduction video with Aaron Shaw.

What is the CDSC?

The Community Data Science Collective (CDSC) is an interdisciplinary research group made up of faculty and students at the University of Washington Department of Communication, the Northwestern University Department of Communication Studies, the Carleton College Computer Science Department, the School of Information at UT Austin, and the Purdue University School of Communication.

Learn more

If you’d like to learn more or get future updates about the Science of Community Dialogues, please join the low volume announcement list.

FOSSY 2025 Wrap-Up: Kaylea Champion “Plausible Slop: Generative AI and Open Source Cybersecurity”

For our final talk of the Science of Community track at FOSSY 2025, Kaylea Champion explored how generative AI tools are disrupting open source cybersecurity—not through advanced attacks, but by flooding communities with “plausible slop,” or misleading, low-effort reports. She shared research on the burden this places on experts, who must balance welcoming newcomers with filtering out noise. Drawing on historical parallels and case studies, she proposed strategies to address these challenges and invited community input to shape future solutions.

This is the final of our 11 part series sharing highlights from the Science of Community track at FOSSY 2025. Visit the FOSSY site for bio details and a full abstract.

FOSSY 2025 Wrap Up: Steve Feng and Anita Sarma “Glue Work Makes the Community Work: Sustaining OSS Through Invisible Labor”

Our tenth talk for the Science of Community Track at FOSSY 2025 featured Zixuan (Steve) Feng and Anita Sarma, who discussed how glue work, like maintaining code, updating docs, and supporting users, is essential to OSS success but often overlooked and undervalued. They talked about their team’s studies of 300+ OSS practitioners to define, trace, and elevate these contributions, offering practical strategies to recognize their impact.

This is the 10th of our 11 part series sharing highlights from the Science of Community track at FOSSY 2025. Visit the FOSSY site for bio details and a full abstract.

FOSSY 2025 Wrap Up: Igor Steinmacher “Lessons from a Decade of Open Source Sustainability Research”

Igor Steinmacher walked through lessons learned from a decade of OSS research for our 9th talk of the Science of Community Track. He explored long-term sustainability challenges in FOSS communities, including onboarding, maintainer burnout, and governance. He presented interventions like mentorship strategies, structured contribution paths, and the use of LLMs to support contributors and scale community engagement. Through case studies and longitudinal data, he offered a holistic vision for building more inclusive, resilient, and human-centered open source ecosystems.

This is the 9th of our 11 part series sharing highlights from the Science of Community track at FOSSY 2025. Visit the FOSSY site for bio details and a full abstract.

FOSSY 2025 Wrap Up: Dawn Foster on “Power Dynamics, Rug Pulls, and Other Impacts on FOSS Sustainability”

The 8th presenter for the Science of Community track was Dawn Foster, who talked about the power imbalances in FOSS projects, and their potential for disruption. She explored real-world case studies and offered practical steps to help contributors make smarter, more sustainable choices.

This is the 8th of our 11 part series sharing highlights from the Science of Community track at FOSSY 2025. Visit the FOSSY site for bio details and a full abstract.