Adopting “third-party” end-user bots for managing online communities on platforms

A screenshot of the configuration panel for Moderator functions of a popular end-user bot called Dyno, adopted by millions of communities on Discord.

Bots made by end users are crucial to the success of online communities, helping community leaders moderate content as well as manage membership and engagement. But most folks don’t have the resources to develop custom bots, so they turn to existing bots shared by their peers. For example, on Discord, some especially popular bots are adopted by millions of communities. However, because these bots are ultimately third-party tools — made by neither the platform nor the community leader in question — they still come with several challenges. In particular, community leaders need to develop the right understandings about a bot’s nature, value, and use in order to adopt it into their community’s existing processes and culture.

In organizational research, these “understandings” are sometimes described as technological frames, a concept developed by Orlikowski & Gash (1994) as they studied why technologies became used in unexpected ways in organizational settings. When your technological frames are well-aligned with a tool’s design, you can imagine that it is easier to assess whether that tool will be useful and can be smoothly incorporated into your organization as intended. In the context of online communities, well-aligned frames can not only reduce the labor and time of bot adoption, but also help community leaders anticipate issues that might cause harm to the community. Our new paper looks to communities on Discord and asks: How do community leaders shift their technological frames of third-party bots and leverage them to address community needs?

Emergent social ecosystems around bot adoption

For our study, we interviewed 16 community leaders on Discord, walking through their experiences adopting third-party bots for their communities. These interviews underscore how community leaders have developed social ecosystems around bots: organic user-to-user networks of resources, aid, and knowledge about bots across communities.

Despite the decentralized arrangement of communities on Discord, users devised and took advantage of formal and informal opportunities to revise their understandings about bots, both supporting and constraining how bots became used. This was particularly important because third-party bots pose heightened uncertainties about their reliability and security, especially for bots used to protect the community from external threats (such as scammers). For example, interviewees laid out concerns about whether a bot developer could be trusted to keep their bot online, to respond to problems users had, and to manage sensitive information. The emergent social ecosystems helped users get recommendations from others, assess the reputation of bot developers, and consider whether the bot was a good fit for them along much more nuanced dimensions (in the case of one interviewee, the values of the bot developer mattered as well). They also created opportunities for people to directly get help in setting up bots and troubleshooting them, such as via engaged discussions with other users who had more experience.

Our findings underscore three core reasons why we should care about these social ecosystems:

  1. Closing gaps in bot-related skills and knowledge. Across interviews, we saw patterns of people leveraging the resources and aid in social ecosystems to move towards using more powerful but complex bots. Ultimately, people with diverse technical backgrounds (including those who stated they had no technical background) were able to adopt and use bots — even bots involving code-like configurations in markup languages that might normally pose barriers. We suggest that the diffusion of end-user tools on social platforms be matched with efforts to provide bottom-up social scaffoldings that support exploration, learning, and user discussion of those tools.
  2. Changing perceptions of the labor involved in bot adoption. The deeply social nature of bot adoption appeared to shape how people saw the labor they invested in it, turning that labor into something fun and satisfying. Bot adoption was both collaborative, involving many individuals as a user discovered, evaluated, set up, and fine-tuned bots; and communal, with community members themselves taking part in some of these steps. We suggest that bot adoption can provide one avenue to deepen community engagement by creating new ways of participating and generating meta-discussions about the community, as well as the platform.
  3. Shaping the assumptions around third-party tools. Social ecosystems enabled people to cherry-pick functions across bots, creating wiggle room to curate a set of preferred functions. At the same time, people were constrained by social signals about what bots are and can do, why certain bots are worth adopting, and how bots are used. For example, people often talked about genres of bots even though no such formal categories existed. We suggest that spaces where leaders from different communities interact with one another to discuss strategies and experiences can be impactful settings for further research, intervention, and design ideas.

Ultimately, the social nature of adopting third-party bots in our interviews offers insight into how we can better support the adoption of valuable user-facing tools across online communities. As online harms become more and more technically sophisticated (e.g., the recent rise of AI-generated disinformation), user-made bots that quickly respond to emerging issues will play an important role in managing communities — and will be even more valuable if they can be shared across communities. Further attention to the dynamics that enable tools to be used across communities with diverse norms and goals will be important as the risks that communities face, and the tools available to them, evolve.

Engage with us!

If you have thoughts, ideas, or questions, we are always happy to talk – especially if you think there are community-facing resources we can develop from this work. There are a few ways to engage with us:

  • Drop a comment below this post!
  • Check out the full paper, available ✨ open access ✨ in the ACM Digital Library.
  • Come by the talks we’ll be giving:
    • at ICA2024 on Saturday, June 22, 2024 in the “Digital Networks, Platforms, and Organizing” session at 3:00-4:15PM in Coolangatta 4 (Star L3);
    • at CSCW2024 in November; schedule is still forthcoming!
  • Connect with us on social media or via email.

Come see us at CHI 2024!

We’re going to be at CHI! The Community Data Science Collective will be presenting work from group members and affiliates. CHI is taking place in Honolulu, Hawaiʻi from May 11th – 16th.

By Robert Linsdell from St. Andrews, Canada – Flight from Honolulu to Hilo. Over Sand Island and Honolulu (503729), CC BY 2.0

Jeremy Foote (Purdue University) coauthored “How Founder Motivations, Goals, and Actions Influence Early Trajectories of Online Communities” with Sanjay R Kairam. This work will be presented at “Online Communities: Engagement A” on Tuesday, May 14th at 9:45 a.m. You can also read about Jeremy and Sanjay’s work on our blog.

Carolyn Zou (Northwestern University) will be presenting with coauthor Helena Vasconcelos on their work “Validation Without Ground Truth? Methods for Trust in Generative Simulations” at the CHI workshops HEAL (Human-Centered Evaluation and Auditing of Language Models) and TREW (Trust and Reliance in Evolving Human-AI Workflows). They will present posters at both sessions; the paper was also selected as a highlighted paper for HEAL, where they will give a presentation on Sunday, May 12th.

Ruijia Cheng (University of Washington) will be presenting their research on “AXNav: Replaying Accessibility Tests from Natural Language” with coauthors Maryam Taeb, Eldon Schoop, Yue Jiang, Amanda Swearngin, and Jeffrey Nichols. This presentation will take place at “Universal Accessibility” on Tuesday, May 14th at 4:30 p.m.

CDSC affiliate Nicholas Vincent is receiving the Outstanding Dissertation Award for their research on “Economic Concentration and Dispossessive Data Use: Can HCI Solve Challenges from and to AI?”. Nicholas will also be presenting their papers “Pika: Empowering Non-Programmers to Author Executable Governance Policies in Online Communities” with Leijie Wang, Julija Rukanskaitė, and Amy X. Zhang at “Supporting Communities” on Thursday, May 16th at 11:00 a.m. and “A Canary in the AI Coal Mine: American Jews May Be Disproportionately Harmed by Intellectual Property Dispossession in Large Language Model Training” with Heila Precel, Brent Hecht, and Allison McDonald at “Politics of Data” on Wednesday, May 15th at 2:45 p.m.

Mandi Cai (Northwestern University) received an honorable mention award alongside coauthor Matthew Kay for their paper “Watching the Election Sausage Get Made: How Data Journalists Visualize the Vote Counting Process in U.S. Elections”. Mandi will be presenting this research at “Governance and Public Policies” on Wednesday, May 15th at 12:00 p.m.

CDSC welcomes Madison Deyo!

Madison Deyo has recently joined the CDSC as a Program Coordinator and we couldn’t be more thrilled to welcome her to the team!

Madison Deyo headshot.

Madison is based at Northwestern. With the CDSC, Madison’s role includes a mix of event planning and coordination; outreach and communications; and supporting the operations of the group. She also works with the Northwestern Center for Human-Computer Interaction + Design. Madison brings experience working with community-based non-profits in several different capacities.

Madison currently lives in Chicago, and grew up in Wisconsin, where she attended the University of Wisconsin-Madison. There, she received her B.S. in Art (with a focus on illustration) and Communications: Radio-TV-Film. In addition to her position at Northwestern, Madison also works as a freelance artist designing mead labels, tattoos, and occasionally album/EP covers. You can check out her portfolio.

Replication data release for examining how rules and rule-making across Wikipedias evolve over time

Screenshot of the same rule, Neutral Point of View, on five different language editions. Notably, the pages are different because they exist as connected but ultimately separate pages.

While Wikipedia is famous for its encyclopedic content, it may be surprising to learn that a whole other set of pages on Wikipedia helps guide and govern the creation of the peer-produced encyclopedia. These pages extensively describe processes, rules, principles, and technical features of creating, coordinating, and organizing on Wikipedia. Because of the success of Wikipedia, these pages have provided valuable insights into how platforms might decentralize and facilitate participation in online governance. However, each language edition of Wikipedia is governed by its own unique set of such pages, even though all editions are part of the same overarching project: in other words, an under-explored opportunity to understand how governance operates across diverse groups.

In a paper published at ICWSM 2022, we present descriptive analyses examining rules and rule-making across language editions of Wikipedia, motivated by questions like:

What happens when communities are both relatively autonomous but within a shared system? Given that they’re aligned in key ways, how do their rules and rule-making develop over time? What can patterns in governance work tell us about how communities are converging or diverging over time?

We’ve been very fortunate to share this work with the Wikimedia community since publishing the paper, such as through the Wikipedia Signpost and the Wikimedia Research Showcase. At the end of last year, we published the replication data and files on Dataverse after addressing a data processing issue we caught earlier in the year (fortunately, it didn’t affect the results – but yet another reminder to quadruple-check one’s data pipeline!). In the spirit of sharing the work more broadly since the Dataverse release, we summarize some of the key aspects of the work here.

Study design

In the project, we examined the five largest language editions of Wikipedia as distinct editing communities: English, German, Spanish, French, and Japanese. After manually constructing lists of rules per wiki (resulting in 780 pages), we took advantage of two features on Wikipedia: the revision histories, which log every edit to every page; and the interlanguage links, which connect conceptually equivalent pages across language editions. We then conducted a series of analyses examining comparisons across and relationships between language editions.
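
For readers curious about the mechanics, here is a minimal sketch of how one might pull the interlanguage links for a single rule page via the public MediaWiki API. The endpoint and query parameters are the real API; the helper function and example title are illustrative only and are not our actual data pipeline.

```python
# Fetch interlanguage links (ILLs) for one page from the MediaWiki API.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"  # English Wikipedia endpoint

def get_interlanguage_links(title):
    """Return {language_code: linked_title} for a given page."""
    params = {
        "action": "query",
        "prop": "langlinks",
        "titles": title,
        "lllimit": "max",  # fetch as many links as possible per request
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    links = {}
    for page in pages.values():
        for ll in page.get("langlinks", []):
            links[ll["lang"]] = ll["*"]  # language code -> linked page title
    return links

# e.g., find the German, Spanish, French, and Japanese counterparts of NPOV
print(get_interlanguage_links("Wikipedia:Neutral point of view"))
```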

Shared patterns across communities

Across communities, we observed trends suggesting that rule-making often became less open over time:

Figure 2 from the ICWSM paper
  • Most rules are created early in the language edition community’s life. Over a nearly 20-year period, roughly 50-80% of the rules (depending on the language edition) were created within the first five years!
  • The median edit count to rule pages peaked in early years (between years 3 and 5) before tapering down. Meanwhile, the percent of revisions dedicated to editing the actual rule text versus discussing it shifted towards discussion of rules across communities. Both patterns suggest that rules across communities have calcified over time (a toy sketch of the second measure follows this list).
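
As a toy sketch of how the second measure can be computed, consider the snippet below. The column names and the DataFrame layout are hypothetical assumptions for illustration; they are not the replication code.

```python
# Yearly share of revisions that discuss a rule (talk page edits) rather
# than edit the rule text itself. Column names are hypothetical.
import pandas as pd

# revisions: one row per edit, with columns 'timestamp' (datetime) and
# 'namespace' ('rule' for the rule page itself, 'talk' for its talk page)
def discussion_share_by_year(revisions: pd.DataFrame) -> pd.Series:
    revisions = revisions.assign(year=revisions["timestamp"].dt.year)
    counts = (revisions.groupby(["year", "namespace"])
                       .size()
                       .unstack(fill_value=0))
    # Fraction of each year's revisions that were discussion rather than
    # changes to the rule text.
    return counts["talk"] / (counts["talk"] + counts["rule"])
```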

Put simply, these communities show very similar trends in rule-making towards formalization.

Divergence vs convergence in rules

Wikipedia’s interlanguage link (ILL) feature, as mentioned above, lets us explore how the rules created and edited in these communities relate to one another. While the trends above highlight similarities in rule-making, here the picture of whether the rule sets themselves are similar is a bit more complicated.

On one hand, the top panel here shows that over time, all five communities see an increase in the proportion of rules in their rule sets that are unique to them individually. On the other hand, the bottom panel shows that editing efforts concentrate on rules that are more shared across communities.

Altogether, we see that communities sharing goals, technology, and a lot more develop substantial and sustained institutional variations; but it’s possible that broad, widely-shared rules created early may help keep them relatively aligned.

Key takeaways

Investigating governance across groups within a shared system like Wikipedia is valuable for at least two reasons.

First, an enormous amount of effort has gone into studying governance on English Wikipedia, the largest and oldest language edition, to distill lessons about how we can meaningfully decentralize governance in online spaces. But, as prior work [e.g., 1] shows, language editions are often non-aligned in both the content they produce and how they organize that content. Some early-stage work we did noted this held true for rule pages on the five language editions of Wikipedia explored here. In recent years, the Wikimedia Foundation itself has made several calls to understand dynamics and patterns beyond English Wikipedia. This work responds in part to those calls.

Second, the questions explored in our work highlight a key tension in online governance today. While online communities are relatively autonomous entities, they often exist within social and technical systems that put them in relation with one another – whether directly or not. Effectively addressing concerns about online governance means understanding how distinct spaces online govern in ways that are similar or dissimilar, overlap or conflict, diverge or converge. Wikipedia, with its especially decentralized and participatory vision of how to govern itself online, can offer many lessons to this end – such as how patterns of formalization impact success and engagement. Our ongoing work continues in this vein – stay tuned!

New year, new job with us? CDSC is hiring!

Do you care about community, design, computing, and research? We are looking for a person to grow the public impact of the Community Data Science Collective (CDSC) and the Northwestern University Center for Human Computer Interaction + Design (HCI+D). We are hiring a full-time Program Coordinator to work in both groups. This person will focus on outreach, communications, research community development, strategic event planning, and administration for both the CDSC and HCI+D.

Although a portion of the work may be done remotely, attendance for in-person meetings and workshops is required and the position is located in Evanston on the Northwestern University campus. The average salary for similar positions at Northwestern is around $55,000 per year and includes excellent benefits (compensation details for this position can only be determined by Northwestern HR in the hiring process). We’re looking for a minimum 2 year commitment.

Duties

These fall into four categories, with specific examples in each listed below:

  1. Outreach & communications
    • Manage social media posting (LinkedIn, Mastodon, X, WordPress, etc.)
    • Post events to listservs and websites
    • Advertise events such as the Collective’s “Science of Community” series and the Center’s “Thought Leader Dialogues”
    • Build contact-lists around specific events and topics
    • Share messages with internal and external audiences
  2. Research community development
    • Recruit participants to community events
    • Organize group retreats (3-4 per year in total)
    • Engage with community members of both the Collective and Center
  3. Strategic event planning
    • Develop and execute a strategic event plan for in-person/virtual events
    • Collaborate with Collective and Center members to plan and recruit speakers for events
  4. Administration:
    • Schedule and plan research meetings
    • Track and report on collective and center achievements
    • Draft annual research and donor reports
    • Document processes and initiatives

Core competencies

  • Ability to use and learn web content management tools, such as WordPress and wikis
  • General organization
  • Communication (be clear, be concise)
  • Meeting facilitation
  • Managing upwards
  • Small/medium scale (20-50 people) event planning
  • Creative thinking and problem solving

Qualifications

Candidates must hold at least a bachelor’s degree. Familiarity with event planning, community management, project management, and/or scientific research is a plus, as is prior experience in the social or computer sciences, research organizations, online communities, and/or public interest technology and advocacy projects of any kind.

About Northwestern’s Center for HCI+Design and the Community Data Science Collective

The Community Data Science Collective is an interdisciplinary research group made up of faculty, students, and affiliates mainly at the University of Washington Department of Communication, the Northwestern University Department of Communication Studies, the Carleton College Computer Science Department, and the Purdue University School of Communication. To learn more about the Community Data Science Collective, you should check out our wiki, blog, and recent publications.

Northwestern’s Center for Human Computer Interaction + Design is an interdisciplinary research center that brings together researchers and practitioners from across the University to study, design, and develop the future of human and computer interaction at home, work, and play in the pursuit of new interaction paradigms to support a collaborative, sustainable, and equitable society.

Contact

Please contact Aaron Shaw with questions. Both the CDSC and the Center for HCI+D are committed to creating diverse, inclusive, equitable, and accessible environments and we look forward to working with someone who shares these values.

Ready to apply?

Please apply via the Northwestern University job posting (and note that the job ID is 49284). We will begin reviewing applications immediately (continuing on a rolling basis until the position is filled).

(revised to fix a broken link)

Join us! Call for Ph.D. Applications and Public Q&A Event

It’s Ph.D. application season and the Community Data Science Collective is recruiting! As always, we are looking for talented people to join our research group. Applying to one of the Ph.D. programs that the CDSC faculty members are affiliated with is a great way to get involved in research on communities, collaboration, and peer production.

Because we know that you may have questions for us that are not answered on this webpage, we will be hosting an open house and Q&A about the CDSC and Ph.D. opportunities on Friday, October 20 at 18:00 UTC (2:00pm US Eastern, 1:00pm US Central, 11:00am US Pacific). You can register online.

This post provides a very brief run-down on the CDSC, the different universities and Ph.D. programs our faculty members are affiliated with, and some general ideas about what we’re looking for when we review Ph.D. applications.

Group photo of the collective at a recent retreat.

What is the Community Data Science Collective?

The Community Data Science Collective (or CDSC) is a joint research group of (mostly quantitative) empirical social scientists and designers pursuing research about the organization of online communities, peer production, and learning and collaboration in social computing systems. We are based at Northwestern University, the University of Washington, Carleton College, Purdue University, and a few other places. You can read more about us and our work on our research group blog and on the collective’s website/wiki.

What are these different Ph.D. programs? Why would I choose one over the other?

This year the group includes three faculty principal investigators (PIs) who are actively recruiting PhD students: Aaron Shaw (Northwestern University), Benjamin Mako Hill (University of Washington in Seattle), and Jeremy Foote (Purdue University). Each of these PIs advise Ph.D. students in Ph.D. programs at their respective universities. Our programs are each described below.

Although we often work together on research and serve as co-advisors to students in each other’s projects, each faculty person has specific areas of expertise and interests. The reasons you might choose to apply to one Ph.D. program or to work with a specific faculty member could include factors like your previous training, career goals, and the alignment of your specific research interests with our respective skills.

At the same time, a great thing about the CDSC is that we all collaborate and regularly co-advise students across our respective campuses, so the choice to apply to or attend one program does not prevent you from accessing the expertise of our whole group. But please keep in mind that our different Ph.D. programs have different application deadlines, requirements, and procedures!

Faculty who are actively recruiting this year

If you are interested in applying to any of the programs, we strongly encourage you to reach out to the specific faculty in that program before submitting an application.

Ph.D. Advisors

A photo of Jeremy Foote. He is wearing a grey shirt.
Jeremy Foote

Jeremy Foote is an Assistant Professor at the Brian Lamb School of Communication at Purdue University. He is affiliated with the Organizational Communication and Media, Technology, and Society programs. Jeremy’s research focuses on how individuals decide when and in what ways to contribute to online communities, how communities change the people who participate in them, and how both of those processes can help us to understand which things become popular and influential. Most of his research is done using data science methods and agent-based simulations.

A photo of Benjamin Mako Hill. He is wearing a pink shirt.
Benjamin Mako Hill

Benjamin Mako Hill is an Associate Professor of Communication at the University of Washington. He is also adjunct faculty at UW’s Department of Human-Centered Design and Engineering (HCDE), Computer Science and Engineering (CSE), and the Information School. Although many of Mako’s students are in the Department of Communication, he has also advised students in all three other departments—although he typically has more limited ability to admit students into those programs on his own and usually does so with a co-advisor in those departments. Mako’s research focuses on population-level studies of peer production projects, computational social science, efforts to democratize data science, and informal learning. Mako has also put together a webpage for prospective graduate students with some useful links and information.

A photo of Aaron Shaw. He is wearing a black shirt.
Aaron Shaw. (Photo credit: Nikki Ritcher Photography, cc-by-sa)

Aaron Shaw is an Associate Professor in the Department of Communication Studies at Northwestern. In terms of Ph.D. programs, Aaron’s primary affiliations are with the Media, Technology and Society (MTS) and the Technology and Social Behavior (TSB) Ph.D. programs (please note: the TSB program is a joint degree between Communication and Computer Science). Aaron also has a courtesy appointment in the Sociology Department at Northwestern, but he has not directly supervised any Ph.D. advisees in that department (yet). Aaron’s current projects focus on comparative analysis of the organization of peer production communities and social computing projects, participation inequalities in online communities, and collaborative organizing in pursuit of public goods.

What do you look for in Ph.D. applicants?

There’s no easy or singular answer to this. In general, we look for curious, intelligent people driven to develop original research projects that advance scientific and practical understanding of topics that intersect with any of our collective research interests.

To get an idea of the interests and experiences present in the group, read our respective bios and CVs (follow the links above to our personal websites). Specific skills that we and our students tend to use on a regular basis include consuming and producing social science and/or social computing (human-computer interaction) research; applied statistics and statistical computing; various empirical research methods; social theory and cultural studies; and more.

Formal qualifications that speak to similar skills and show up in your resume, transcripts, or work history are great, but we are much more interested in your capacity to learn, think, write, analyze, and/or code effectively than in your credentials, test scores, grades, or previous affiliations. It’s graduate school and we do not expect you to show up knowing how to do all the things already.

Intellectual creativity, persistence, and a willingness to acquire new skills and problem-solve matter a lot. We think doctoral education is less about executing tasks that someone else hands you and more about learning how to identify a new, important problem; develop an appropriate approach to solving it; and explain all of the above and why it matters so that other people can learn from you in the future. Evidence that you can or at least want to do these things is critical. Indications that you can also play well with others and would make a generous, friendly colleague are really important too.

All of this is to say, we do not have any one trait or skill set we look for in prospective students. We strive to be inclusive along every possible dimension. Each person who has joined our group has contributed unique skills and experiences as well as their own personal interests. We want our future students and colleagues to do the same.

Now what?

Still not sure whether or how your interests might fit with the group? Still have questions? Still reading and just don’t want to stop? Follow the links above for more information. Feel free to send at least one of us an email. We are happy to try to answer your questions and always eager to chat. You can also join our open house on October 20 at 2:00pm ET (UTC-4).

The State of Wikimedia Research, 2022–2023

Wikimania, the annual global conference of the Wikimedia movement, took place in Singapore last month. For the first time since 2019, the conference was held in person again. It was attended by over 670 people in person and more than 1,500 remotely.

At the conference, Benjamin Mako Hill, Tilman Bayer, and Miriam Redi presented “The State of Wikimedia Research: 2022–2023”, an overview of scholarship and academic research on Wikipedia and other Wikimedia projects from the last year. This resumed an annual Wikimania tradition started by Mako back in 2008 as a graduate student, aiming to provide “a quick tour … of the last year’s academic landscape around Wikimedia and its projects geared at non-academic editors and readers.” With hundreds of research publications every year featuring Wikipedia in their title (and more recently, Wikidata too), it is of course impossible to cover all important research results within one hour. Hence our presentation aimed to identify a set of important themes that attracted researchers’ attention during the past year, and to illustrate each theme with a brief “research postcard” summary of one particular publication. Unfortunately, Miriam was not able to be in Singapore to present.

This year’s presentation focused on seven such research themes:

Theme 1. Generative AI and large language models
The boom in generative AI and LLMs triggered by the release of ChatGPT has affected Wikimedia research deeply. As an example, we highlighted a preprint that used Wikipedia to enhance the factual accuracy of a conversational LLM-based chatbot.

Theme 2. Wikidata as a community
While Wikidata is the subject of over 100 published studies each year, the vast majority of these have been primarily concerned with the project’s content as a database which scientists use to advance research about e.g. the semantic web, knowledge graphs and ontology management. This year also saw several papers studying Wikidata as a community, including a study of how Wikidata contributors use talk pages to coordinate (preprint).

Theme 3. Cross-project collaboration
Beyond Wikipedia and Wikidata, Wikimedia sister projects have attracted comparatively little researcher attention over the years. We highlighted one of the very first research publications in the social sciences that studied Wikimedia Commons, the free media repository, examining how it interconnects with English Wikipedia.

Theme 4. Rules and governance
Research on rules and governance continues to attract researchers’ attention. Here, we featured a new paper by a political scientist that documented important changes in how English Wikipedia’s NPoV (Neutral Point of View) policy has been applied over time, and used this to advance an explanation for political change in general.

Theme 5. Wikipedia as a tool to measure bias
While Wikimedia research has often focused on Wikipedia’s own biases, researchers have also turned to Wikipedia to construct baselines against which to measure and mitigate biases elsewhere. We highlighted an example of Meta’s AI researchers doing this for their Llama 2 large language model.

Theme 6. Measuring Wikipedia’s own content bias
Despite the huge interest in content gaps along dimensions such as race and gender, systematic approaches to measuring them have not been as frequent as one might hope. We featured a paper that advanced our understanding in this regard, presented a useful method, and is also one of the first to study differences in intersectional identities.

Theme 7. Critical and humanistic approaches
Although most of the published research work related to Wikipedia is based in the sciences or engineering disciplines, a growing body of humanities scholarship can offer important insights as well. We highlighted a recent humanities paper about measuring race and ethnicity gaps on Wikipedia, which focused in particular on gaps in such measurements themselves, placing them into a broader social context.

We invite you to watch the video recording on YouTube or our self-hosted media server, or peruse the annotated slides from the talk.

Again, this work represents just a tiny fraction of what has been published about Wikipedia in the last year. In particular, we avoided research that was presented elsewhere in Wikimania’s research track.

To keep up to date with the Wikimedia research field throughout the year, consider subscribing to the monthly Wikimedia Research Newsletter and its associated Twitter and Mastodon feeds which are maintained by Miriam and Tilman.


This post was written by Benjamin Mako Hill and Tilman Bayer.

Community Dialogue on Digital Inequalities

Join the Community Data Science Collective (CDSC) for our 5th Science of Community Dialogue! This Community Dialogue will take place on May 19 at 10:00 am PDT (18:00 UTC). This Dialogue focuses on digital inequalities and online community participation. Professor Hernan Galperin (University of Southern California) will join Floor Fiers (Northwestern University) to present recent research on topics including:

  • Inequalities in online access and participation
  • Differentiated participation in online communities
  • Causes and consequences of online inequalities
  • Digital skills as a barrier to online participation
  • Combating digital discrimination

A full session description is on our website. Register online.

What is a Dialogue?

The Science of Community Dialogue Series consists of conversations between researchers, experts, community organizers, and other people who are interested in how communities work, collaborate, and succeed. You can watch this short introduction video with Aaron Shaw.

What is the CDSC?

The Community Data Science Collective (CDSC) is an interdisciplinary research group made up of faculty and students at the University of Washington Department of Communication, the Northwestern University Department of Communication Studies, the Carleton College Computer Science Department, and the Purdue University School of Communication.

Learn more

If you’d like to learn more or get future updates about the Science of Community Dialogues, please join the low volume announcement list.

Excavating online futures past

Cover of Kevin Driscoll's book, The Modem World.

The International Journal of Communication (IJOC) has just published my review of Kevin Driscoll’s The Modem World: A Prehistory of Social Media (Yale UP, 2022).

In The Modem World, Driscoll provides an engaging social history of Bulletin Board Systems (BBSes), an early, dial-up precursor to social media that predated the World Wide Web. You might have heard of the most famous BBSes—likely Stewart Brand’s Whole Earth ‘Lectronic Link, or the WELL—but, as Driscoll elaborates, there were many others. Indeed, thousands of decentralized, autonomous virtual communities thrived around the world in the decades before the Internet became accessible to the general public. Through Driscoll’s eyes, these communities offer a glimpse of a bygone sociotechnical era that prefigured and shaped our own in numerous ways. The “modem world” also suggests some paths beyond our current moment of disenchantment with the venture-funded, surveillance capitalist, billionaire-backed platforms that dominate social media today.

The book, like everything of Driscoll’s that I’ve ever read, is both enjoyable and informative and I recommend it for a number of reasons. I also (more selfishly) recommend the book review, which was fun to write and is just a few pages long. I got helpful feedback along the way from Yibin Fan, Kaylea Champion, and Hannah Cutts.

Because IJOC is an open access journal that publishes under a CC-BY-NC-ND license, you can read the review without paywalls, proxies, piracy, etc. Please feel free to send along any comments or feedback! For example, at least one person (who I won’t name here) thinks I should have emphasized the importance of porn in Driscoll’s account more heavily! While porn was definitely an important part of the BBS universe, I didn’t think it was such a central component of The Modem World. Ymmv?

Shaw, A. (2023). Kevin Driscoll, The Modem World: A Prehistory of Social Media. International Journal Of Communication, 17, 4. Retrieved from https://ijoc.org/index.php/ijoc/article/view/21215/4162

Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Many online platforms are adopting machine learning as a tool to maintain order and high quality information in the face of massive influxes of user-generated content. Of course, machine learning algorithms can be inaccurate, biased, or unfair. How do signals from machine learning predictions shape the fairness of online content moderation? How can we measure an algorithmic flagging system’s effects?

In our paper published at CSCW 2021, I (Nate TeBlunthuis), together with Benjamin Mako Hill and Aaron Halfaker, analyzed the RCFilters system: an add-on to Wikipedia that highlights and filters edits that a machine learning algorithm called ORES identifies as likely to be damaging to Wikipedia. This system has been deployed on large Wikipedia language editions and is similar to other algorithmic flagging systems that are becoming increasingly widespread. Our work measures the causal effect of being flagged in the RCFilters user interface.

Screenshot of Wikipedia edit metadata on Special:RecentChanges with RCFilters enabled. Edits flagged by ORES are highlighted, with a colored circle to the left of the other metadata. Different circle and highlight colors (white, yellow, orange, and red in the figure) correspond to different levels of confidence that the edit is damaging. RCFilters does not specifically flag edits by new accounts or unregistered editors, but does support filtering changes by editor type.

Our work takes advantage of the fact that RCFilters, like many algorithmic flagging systems, creates discontinuities in the relationship between the probability that a moderator should take action and whether a moderator actually does. This happens because the output of machine learning systems like ORES is typically a continuous score (in RCFilters, an estimated probability that a Wikipedia edit is damaging), while the flags (in RCFilters, the yellow, orange, or red highlights) are either on or off and are triggered when the score crosses some arbitrary threshold. As a result, edits slightly above the threshold are both more visible to moderators and appear more likely to be damaging than edits slightly below. Even though edits on either side of the threshold have virtually the same likelihood of truly being damaging, the flagged edits are substantially more likely to be reverted. This fact lets us use a method called regression discontinuity to make causal estimates of the effect of being flagged in RCFilters.
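
To make the logic concrete, here is a minimal sketch of a sharp regression discontinuity estimate in Python. It fits a simple local linear probability model around the cutoff; the column names, threshold, and bandwidth are illustrative assumptions, and the paper’s actual models are more involved than this sketch.

```python
# Sharp regression discontinuity sketch: fit a local linear model of
# reversion on the ORES score, letting the intercept (the "jump") and
# slope change at the flagging threshold.
import numpy as np
import statsmodels.formula.api as smf

def rdd_estimate(edits, threshold, bandwidth):
    """Estimate the jump in P(reverted) at a flagging threshold.

    `edits` is a DataFrame with columns 'score' (continuous ORES damaging
    score) and 'reverted' (1 if the edit was reverted, else 0); these
    names are assumptions for this sketch.
    """
    window = edits[np.abs(edits["score"] - threshold) <= bandwidth].copy()
    window["centered"] = window["score"] - threshold
    window["flagged"] = (window["centered"] >= 0).astype(int)
    # Linear probability model with separate slopes on each side of the
    # cutoff; the coefficient on 'flagged' is the discontinuity estimate.
    fit = smf.ols("reverted ~ flagged + centered + flagged:centered",
                  data=window).fit()
    return fit.params["flagged"], fit.bse["flagged"]

# Hypothetical usage, with illustrative threshold and bandwidth values:
# effect, se = rdd_estimate(edits, threshold=0.6, bandwidth=0.05)
```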

Charts showing the probability that an edit will be reverted as a function of ORES scores in the neighborhood of the discontinuous threshold that triggers the RCFilters flag. The jump in reversion chances is larger for registered editors than for unregistered editors at both thresholds.

To understand how this system may affect the fairness of Wikipedia moderation, we estimate the effects of flagging on edits by different groups of editors. Comparing the magnitude of these estimates lets us measure how flagging is associated with several different definitions of fairness. Surprisingly, we found evidence that these flags improved fairness for categories of editors that have been widely perceived as troublesome—particularly unregistered (anonymous) editors. This occurred because flagging has a much stronger effect on edits by registered editors than on edits by unregistered editors.

We believe that our results are driven by the fact that algorithmic flags are especially helpful for finding damage that can’t be easily detected otherwise. Wikipedia moderators can see an editor’s registration status in recent changes, watchlists, and edit histories. Because unregistered editors are often troublesome, Wikipedia moderators’ attention is often focused on their contributions, with or without algorithmic flags. Algorithmic flags make damage by registered editors (in addition to unregistered editors) much more detectable to moderators and so help moderators focus on damage overall, not just damage by suspicious editors. As a result, the algorithmic flagging system decreases the bias that moderators have against unregistered editors.

This finding is particularly surprising because the ORES algorithm we analyzed was itself demonstrably biased against unregistered editors (i.e., the algorithm tended to greatly overestimate the probability that edits by these editors were damaging). Despite the fact that the algorithms were biased, their introduction could still lead to less biased outcomes overall.

Our work shows that although it is important to design predictive algorithms to not have such biases, it is equally important to study fairness at the level of the broader sociotechnical system. Since we first published a preprint of our paper, a follow-up piece by Leijie Wang and Haiyi Zhu replicated much of our work and showed that differences between Wikipedia communities may be another important factor driving the effects of the system. Overall, this work suggests that social signals and social context can interact with algorithmic signals and together these can influence behavior in important and unexpected ways.


The full citation for the paper is: TeBlunthuis, Nathan, Benjamin Mako Hill, and Aaron Halfaker. 2021. “Effects of Algorithmic Flagging on Fairness: Quasi-Experimental Evidence from Wikipedia.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW): 56:1-56:27. https://doi.org/10.1145/3449130.

We have also released replication materials for the paper, including all the data and code used to conduct the analysis and compile the paper itself.