On The Challenges of Governing the Online Commons

“Elinor Ostrom and the eight principles of governing the commons.” Picture: Inkylab. License: CC-BY-SA 4.0

Over the past several months (post-general exam!), I have been thinking and reading about organizational and institutional perspectives on the governance of platforms and the online communities that populate them. While much of the research on the emerging area of “platform governance”1 draws from legal traditions or socio-technical approaches, there is also a smaller subset of scholars who draw from political science and democratic theory, thinking about how governance structures at the level of groups, organizations, and institutions can be designed to prove resilient to various collective threats.

I think these approaches hold a lot of promise. When it comes to addressing one collective threat I am interested in – the strategic manipulation of information environments – most interventions I have seen have either focused on empowering individuals to be more discerning of the information they encounter online or proposed structural changes to features of platforms, such as algorithmic ranking, that dampen the virality of false or misleading information. These are, respectively, micro- and macro-level interventions. The integration of participatory and distributed self-governance approaches into existing and emerging platforms is distinct: it is a meso-level intervention, and meso-level approaches remain both theoretically and empirically under-explored in discussions of platform governance.

I recently read three works that do explore this meso layer, however: Paul Gowder’s The Networked Leviathan, Nathan Schneider’s Governable Spaces, and Jennifer Forestal’s Beyond Gatekeeping. All three draw on the work of scholars who study governance dynamics in offline spaces – in particular, the ideas of political economist Elinor Ostrom and philosopher John Dewey feature prominently – to argue that centralized platforms that practice top-down content moderation are fundamentally hostile to democratic inquiry and practice. Gowder, for example, describes this condition as a “democratic deficit”: governance structures that are fundamentally unaccountable to their users. Naturally, this democratic deficit leads to negative outcomes – online spaces are easily manipulated and degraded by motivated actors. To guard against this, Gowder, Schneider, and Forestal offer various proposals for integrating participatory structures – ones composed of workers, civil society members, and everyday users – into platform governance and decision-making.

I am on board with these approaches’ diagnosis of the problem, but I think the proposed solutions require more iteration. One thing I worry about is that proposals for integrating participatory and distributed governance into online platforms do not sufficiently take into account the qualitative differences between online spaces and the offline settings researchers have previously studied. When I was reading Ostrom’s Governing the Commons, for example, from which many of these interventions take at least some inspiration, I was struck by three similarities that, as she noted, virtually all of the common-pool resource (CPR) settings she analyzed shared:

  • They had stable populations over long periods of time. Here’s how Ostrom describes it: “Individuals have shared a past and expect to share a future. It is important for individuals to maintain their reputations as reliable members of the community. These individuals live side by side and farm the same plots year after year. They expect their children and their grandchildren to inherit their land. In other words, their discount rates are low. If costly investments in provision are made at one point in time, the proprietors – or their families – are likely to reap the benefits.”
  • Norms of reciprocity and interdependence evolved in these settings among a largely similar group of individuals with shared interests. Ostrom explains: “Many of these norms make it feasible for individuals to live in close interdependence on many fronts without excessive conflict. Further, a reputation for keeping promises, honest dealings, and reliability in one arena is a valuable asset. Prudent, long-term self-interest reinforces the acceptance of the norms of proper behavior. None of these situations involves participants who vary greatly in regard to ownership of assets, skills, knowledge, ethnicity, race, or other variables that could strongly divide a group of individuals (R. Johnson and Libecap 1982).”
  • These cases were the success stories! Ostrom clarifies that the cases she analyzed “were specifically selected because they have endured while others have failed.” In other words, they already had sustainable resource systems and robust institutions in place.

Most (virtually all?) online platforms, and the communities that inhabit them, do not share these properties. In online spaces, individuals tend to be geographically scattered across the globe, and there is no incentive to sustainably maintain the community for future generations to inherit, as there is with a plot of land. Moreover, members of online communities tend to have varying levels of commitment, and the anonymity and distance offered by technology make norms of social reciprocity and interdependence harder (although not impossible) to cultivate.

The CPRs Ostrom studied were already facing uncertain and complex background conditions — but they also possessed distinct qualities conducive to success. I generally think online spaces, and the digital institutions that govern them, do not possess these qualities, and are thus even more vulnerable to threats like appropriation, pollution, or capture than the CPRs Ostrom studied. Because of this, I think a direct porting of most of Ostrom’s design principles to online governing institutions is probably insufficient. But I see one way forward in an evolved set of these principles that explicitly addresses the power differentials and adversarial incentives baked into the design of social software. What these principles could look like should be the subject of future empirical research, and maybe a future post on this blog. I am excited that researchers are exploring these meso-level interventions, which is where I think a lot of the solution lies.


  1. Gorwa (2019) offers a definition of platform governance: “a concept intended to capture the layers of governance relationships structuring interactions between key parties in today’s platform society, including platform companies, users, advertisers, governments, and other political actors.” ↩︎

A new paper on the risk of nationalist governance capture in self-governed Wikipedia projects

Wikipedia is one of the most visited websites in the world and the largest online repository of human knowledge. It is also both a target of and a defense against misinformation, disinformation, and other forms of online information manipulation. Importantly, its 300 language editions are self-governed—i.e., they set most of their rules and policies. Our new paper asks: What types of governance arrangements make some self-governed online groups more vulnerable to disinformation campaigns? We answer this question by comparing two Wikipedia language editions—Croatian and Serbian Wikipedia. Despite relying on common software and being situated in a common sociolinguistic environment, these communities differed in how successfully they responded to disinformation-related threats.

For nearly a decade, the Croatian language version of Wikipedia was run by a cabal of far-right nationalists who edited articles to promote fringe political ideas and historical revisionism related to the Ustaše regime, a fascist movement that ruled the Nazi puppet state called the Independent State of Croatia during World War II. This cabal seized complete control of the governance of the encyclopedia, banned and blocked those who disagreed with them, and operated a network of fake accounts to give the appearance of grassroots support for their policies.

Thankfully, Croatian Wikipedia appears to be an outlier. Though both the Croatian and Serbian language editions have been documented to contain nationalist bias and historical revisionism, Croatian Wikipedia alone seems to have succumbed to governance capture: a takeover of the project’s mechanisms and institutions of governance by a small group of users.

The situation in Croatian Wikipedia was well documented and is now largely fixed, but we still know very little about why Croatian Wikipedia was taken over while other language editions seem to have rebuffed similar capture attempts. In a new paper accepted for publication in the Proceedings of the ACM on Human-Computer Interaction (CSCW), we present an interview-based study that tries to explain why Croatian Wikipedia was captured while several other editions facing similar contexts and threats fared better.

Short video presentation of the work given at Wikimania in August 2023.

We interviewed 15 participants from both the Croatian and Serbian Wikipedia projects, as well as the broader Wikimedia movement. Based on insights from these interviews, we arrived at three propositions that, together, help explain why Croatian Wikipedia succumbed to capture while Serbian Wikipedia did not: 

  1. Perceived Value as a Target. Is the project worth expending the effort to capture?
  2. Bureaucratic Openness. How easy is it for contributors outside the core founding team to ascend to local governance positions?
  3. Institutional Formalization. To what degree does the project prefer personalistic, informal forms of organization over formal ones?

The conceptual model from our paper, visualizing possible institutional configurations among Wikipedia projects that affect the risk of governance capture.

We found that both Croatian Wikipedia and Serbian Wikipedia were attractive targets for far-right nationalist capture due to their sizable readership and resonance with a national identity. However, we also found that the two projects diverged early in their trajectories in how open they remained to new contributors ascending to local governance positions and in the degree to which they privileged informal relationships over formal rules and processes as organizing principles. Ultimately, Croatian Wikipedia’s relative lack of bureaucratic openness and of rules constraining administrator behavior created a window of opportunity for a motivated contingent of editors to seize control of the project’s governance mechanisms.

Though our empirical setting was Wikipedia, our theoretical model may offer insight into the challenges faced by self-governed online communities more broadly. As interest in decentralized alternatives to Facebook and X (formerly Twitter) grows, communities on these sites will likely face similar threats from motivated actors. Understanding the vulnerabilities inherent in these self-governing systems is crucial to building resilient defenses against threats like disinformation. 

For more details on our findings, take a look at the preprint of our paper.


Preprint on arxiv.org: https://arxiv.org/abs/2311.03616. The paper has been accepted for publication in Proceedings of the ACM on Human-Computer Interaction (CSCW) and will be presented at CSCW in 2024. This blog post and the paper it describes are collaborative work by Zarine Kharazian, Benjamin Mako Hill, and Kate Starbird.