New year, new job with us? CDSC is hiring!

Do you care about community, design, computing, and research? We are looking for a person to grow the public impact of the Community Data Science Collective (CDSC) and the Northwestern University Center for Human Computer Interaction + Design (HCI+D). We are hiring a full-time Program Coordinator to work in both groups. This person will focus on outreach, communications, research community development, strategic event planning, and administration for both the CDSC and HCI+D.

Although a portion of the work may be done remotely, attendance at in-person meetings and workshops is required and the position is located in Evanston on the Northwestern University campus. The average salary for similar positions at Northwestern is around $55,000 per year and includes excellent benefits (compensation details for this position can only be determined by Northwestern HR in the hiring process). We’re looking for a minimum two-year commitment.

Duties

These fall into four categories, with specific examples in each listed below:

  1. Outreach & communications
    • Manage social media posting (LinkedIn, Mastodon, X, WordPress, etc.)
    • Post events to listservs and websites
    • Advertise events such as the Collective’s “Science of Community” series and the Center’s “Thought Leader Dialogues”
    • Build contact lists around specific events and topics
    • Share messages with internal and external audiences
  2. Research community development
    • Recruit participants to community events
    • Organize group retreats (3-4 per year total)
    • Engage with community members of both the Collective and Center
  3. Strategic event planning
    • Develop and execute a strategic event plan for in-person/virtual events
    • Collaborate with Collective and Center members to plan and recruit speakers for events
  4. Administration
    • Schedule and plan research meetings
    • Track and report on Collective and Center achievements
    • Draft annual research and donor reports
    • Document processes and initiatives

Core competencies:

  • Ability to use and learn web content management tools, such as WordPress and wikis.
  • General organization
  • Communication (be clear, be concise)
  • Meeting facilitation
  • Managing upwards
  • Small/medium scale (20-50 people) event planning
  • Creative thinking and problem solving

Qualifications

Candidates must hold at least a bachelor’s degree. Familiarity with event planning, community management, project management, and/or scientific research is a plus, as is prior experience in the social or computer sciences, research organizations, online communities, and/or public interest technology and advocacy projects of any kind.

About Northwestern’s Center for HCI+Design and the Community Data Science Collective

The Community Data Science Collective is an interdisciplinary research group made up of faculty, students, and affiliates mainly at the University of Washington Department of Communication, the Northwestern University Department of Communication Studies, the Carleton College Computer Science Department, and the Purdue University School of Communication. To learn more about the Community Data Science Collective, you should check out our wiki, blog, and recent publications.

Northwestern’s Center for Human Computer Interaction + Design is an interdisciplinary research center that brings together researchers and practitioners from across the University to study, design, and develop the future of human and computer interaction at home, work, and play in the pursuit of new interaction paradigms to support a collaborative, sustainable, and equitable society.

Contact

Please contact Aaron Shaw with questions. Both the CDSC and the Center for HCI+D are committed to creating diverse, inclusive, equitable, and accessible environments and we look forward to working with someone who shares these values.

Ready to apply?

Please apply via the Northwestern University job posting (and note that the job ID is 49284). We will begin reviewing applications immediately (continuing on a rolling basis until the position is filled).

(revised to fix a broken link)

Excavating online futures past

Cover of Kevin Driscoll's book, The Modem World.

The International Journal of Communication (IJOC) has just published my review of Kevin Driscoll’s The Modem World: A Prehistory of Social Media (Yale UP, 2022).

In The Modem World, Driscoll provides an engaging social history of Bulletin Board Systems (BBSes), an early, dial-up precursor to social media that predated the World Wide Web. You might have heard of the most famous BBSes—likely Stewart Brand’s Whole Earth ‘Lectronic Link, or the WELL—but, as Driscoll elaborates, there were many others. Indeed, thousands of decentralized, autonomous virtual communities thrived around the world in the decades before the Internet became accessible to the general public. Through Driscoll’s eyes, these communities offer a glimpse of a bygone sociotechnical era that prefigured and shaped our own in numerous ways. The “modem world” also suggests some paths beyond our current moment of disenchantment with the venture-funded, surveillance capitalist, billionaire-backed platforms that dominate social media today.

The book, like everything of Driscoll’s that I’ve ever read, is both enjoyable and informative and I recommend it for a number of reasons. I also (more selfishly) recommend the book review, which was fun to write and is just a few pages long. I got helpful feedback along the way from Yibin Fan, Kaylea Champion, and Hannah Cutts.

Because IJOC is an open access journal that publishes under a CC-BY-NC-ND license, you can read the review without paywalls, proxies, piracy, etc. Please feel free to send along any comments or feedback! For example, at least one person (who I won’t name here) thinks I should have emphasized the importance of porn in Driscoll’s account more heavily! While porn was definitely an important part of the BBS universe, I didn’t think it was such a central component of The Modem World. YMMV?

Shaw, A. (2023). Kevin Driscoll, The Modem World: A Prehistory of Social Media. International Journal of Communication, 17, 4. Retrieved from https://ijoc.org/index.php/ijoc/article/view/21215/4162

How to cite Wikipedia (better)

Two participants of the “Rally to Restore Sanity and/or Fear” in Washington D.C. (USA), holding signs saying “Wikipedia is a valid source” and “citation needed.” October 30, 2010. Kat Walsh (User:Mindspillage), CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Wikipedia provides the best and most accessible single source of information on the largest number of topics in the largest number of languages. If you’re anything like me, you use it all the time. If you (also like me) use Wikipedia to inform your research, teaching, or other sorts of projects that result in shared, public, or even published work, you may also want to cite Wikipedia. I wrote a short tutorial to help people do that more accurately and effectively.

The days when teachers and professors banned students from citing Wikipedia are perhaps not entirely behind us. But do you know what to do if you find yourself in a situation where it is socially/professionally acceptable to cite Wikipedia (such as one of my classes!) and you want to do so in a responsible, durable way?

More specifically, what can you do about the fact that any Wikipedia page you cite can and probably will change? How do you provide a useful citation to a dynamic web resource that is continuously in flux?

This question has come up frequently enough in my classes over the years that I drafted a short tutorial on doing better Wikipedia citations for my students back in 2020. It’s been through a few revisions since then and I don’t find it completely embarrassing, so I am blogging about it now in the hopes that others might find it useful and share it more widely. Also, since it’s on my research group’s wiki, you (and anyone you know) can even make further revisions or chat about it with me on my user:talk page.
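
Without spoiling the tutorial, one standard piece of the answer comes from how MediaWiki itself works: every article has a permanent link to the specific revision you are reading, identified by an oldid, and you can also look that revision up programmatically. The sketch below is my own illustration, separate from the tutorial; the Python requests library and the article title are arbitrary choices.

```python
# A quick illustration (not part of the tutorial itself): look up an
# article's current revision ID via the MediaWiki API and build a
# permanent link that keeps pointing at exactly the version you cite.
# Assumes the `requests` library; the article title is arbitrary.
import requests

API = "https://en.wikipedia.org/w/api.php"

def permanent_link(title: str) -> str:
    """Return a URL that pins the citation to the article's current revision."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids|timestamp",
        "rvlimit": 1,
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API, params=params, timeout=10).json()
    revision = data["query"]["pages"][0]["revisions"][0]
    # index.php?...&oldid=N always resolves to this specific revision,
    # even after the live article changes.
    return (
        "https://en.wikipedia.org/w/index.php?title="
        + title.replace(" ", "_")
        + "&oldid="
        + str(revision["revid"])
    )

if __name__ == "__main__":
    print(permanent_link("Inclusive design"))
```

No code is required if you just want one citation: the same oldid shows up in the article’s page history and in the “Permanent link” item of its tools menu.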

You might be thinking, "so wait, does this mean I can cite Wikipedia for anything???" To which I would respond, "Just hold on there, cowboy."

Wikipedia is, like any other information source, only as good as the evidence behind it. In that regard, nothing about my recommendations here makes any of the information on Wikipedia any more reliable than it was before. You have to use other skills and resources to assess the quality of the information you’re citing on Wikipedia (e.g., the content/quality of the references used to support the claims made in any given article).

Like I said above, the problem this really tries to solve is how best to cite something on Wikipedia, given that you have some good reason to cite it in the first place.

CDSC students, courses, & (award winning!) instructors featured by Wiki Education

Wiki Education (a.k.a., WikiEdu) is an independent non-profit organization that promotes the integration of Wikipedia into education and classrooms. In pursuit of this mission, WikiEdu has created incredible resources for students and instructors, including tools that facilitate classroom assignments where students create and improve Wikipedia articles.

Wiki Education is great! (And the feeling seems to be mutual based on their recent blog posts)

At both Northwestern and the University of Washington, CDSC faculty and students have offered courses with Wikipedia assignments for over a decade. In the past two weeks, WikiEdu has featured the most recent instances of these courses on their blog.

The first WikiEdu post celebrated the work of a team of Northwestern students that included Carl Colglazier (TSB and CDSC Ph.D. student) and Hannah Yang (undergraduate Communication Studies major and former CDSC research assistant). The team, all members of the Online Communities & Crowds course I taught with CDSC Ph.D. student Sohyeon Hwang in Winter 2022, overhauled an article on Inclusive design in English Wikipedia. Since the article’s initial publication back in March, other Wikipedia editors have improved it further and it has attracted over 10,000 pageviews. Amazing work, team!

The second post celebrates UW Communication doctoral student Kaylea Champion, recipient of an Outstanding Teaching Award from the Communication Department on the strength of her work in another Winter 2022 undergraduate course on Online Communities (also taught by Benjamin Mako Hill) that features a Wikipedia assignment. Several of Kaylea’s students thought so highly of her work in the course that they collaborated in nominating her for the award. Kaylea enjoyed the experience enough that she’s about to offer the course again as the lead instructor at UW this upcoming Winter term. I should also note that Kaylea has been nominated for a university-wide award, but we won’t know the outcome of that process for a while yet. Congratulations, Kaylea!

The public recognition of CDSC students and teaching is gratifying and provides a great reminder of why assignments that ask students to edit Wikipedia are so valuable in the first place. Most fundamentally, editing Wikipedia engages students in the production of public, open access knowledge resources that serve a much greater and broader purpose than your typical term paper, pop quiz, or exam. When students develop encyclopedic materials on topics of their interest, motivated undergraduates like Hannah Yang can directly connect coursework with practical, real-world concerns in ways that build on the expertise of graduate students like Carl Colglazier. This kind of school work creates unusually high-impact products. Kaylea Champion puts the idea eloquently in that WikiEdu post: “Instead of locking away my synthesis efforts in a paper no one but my instructors would read, the Wikipedia assignment pushed me to address the public.”

Just think, how many people ever read a word of most college (or high school or graduate school) term papers? By contrast, the Wikipedia articles created by our students have routinely been viewed over 100,000 times in aggregate by the end of the term in which we offer the course. Extrapolate this out over a decade and our students’ work has likely been read millions of times by now. As with other content on Wikipedia, this work will shape public discourse, including judicial decisions, scientific research, search engine results, and more. There’s absolutely nothing academic about that!

Fool’s gold? The perils of using online survey samples to study online behavior

The OG Mechanical Turk (public domain via Wikimedia Commons). It probably was not useful for unbiased survey research sampling either.

When it comes to research about participation in social media, sampling and bias are topics that often get ignored or politely buried in the "limitations" sections of papers. This is even true in survey research using samples recruited through idiosyncratic sites like Amazon’s Mechanical Turk. Together with Eszter Hargittai, I (Aaron) have a new paper (pdf) out in the International Journal of Communication (IJOC) that illustrates why ignoring sampling and bias in online survey research about online participation can be a particularly bad idea.

Surveys remain a workhorse method of social science, policy, and market research. But high-quality survey research that produces generalizable insights into big (e.g., national) populations is expensive, time-consuming, and difficult. Online surveys conducted through sites like Amazon Mechanical Turk (AMT), Qualtrics, and others offer a popular alternative for researchers looking to reduce the costs and increase the speed of their work. Some people even go so far as to claim that AMT has "ushered in a golden age in survey research" (and focus their critical energies on other important issues with AMT, like research ethics!).

Despite the hype, the quality of the online samples recruited through AMT and other sites often remains poorly or incompletely documented. Sampling bias online is especially important for research that studies online behaviors, such as social media use. Even with complex survey weighting schemes and sophisticated techniques like multilevel regression with post-stratification (MRP), surveys gathered online may incorporate subtle sources of bias because the people who complete the surveys online are also more likely to engage in other kinds of activities online.

Surprisingly little research has investigated these concerns directly. Eszter and I do so by using a survey instrument administered concurrently on AMT and a national sample of U.S. adults recruited through NORC at the University of Chicago (note that we published another paper in Socius using parts of the same dataset last year). The results suggest that AMT survey respondents are significantly more likely to use numerous social media, from Twitter to Pinterest and Reddit, and have significantly more experience contributing their own online content, from posting videos to participating in various online forums and signing online petitions.

Such findings may not be shocking, but prevalent research practices often overlook the implications: you cannot rely on a sample recruited from an online platform like AMT to map directly to a general population when it comes to online behaviors. Whether AMT has created a survey research "golden age" or not, analysis conducted on a biased sample produces results that are less valuable than they seem.

CDSC is hiring research assistants

The Northwestern University branch of the Community Data Science Collective (CDSC) is hiring research assistants. CDSC is an interdisciplinary research group made up of faculty and students at multiple institutions, including Northwestern University, Purdue University, and the University of Washington. We’re social and computer scientists studying online communities such as Wikipedia, Reddit, Scratch, and more.

A screenshot from a recent remote meeting of the CDSC…

Recent work by the group includes studies of participation inequalities in online communities and the gig economy, comparisons of different online community rules and norms, and evaluations of design changes deployed across thousands of sites. More examples and information can be found on our list of publications and our research blog (you’re probably reading our blog right now).

This posting is specifically to work on some projects through the Northwestern University part of the CDSC. Northwestern Research Assistants will contribute to data collection, analysis, documentation, and administration on one (or more) of the group’s ongoing projects. Some research projects you might help with include:

  • A study of rules across the five largest language editions of Wikipedia.
  • A systematic literature review on the gig economy.
  • Interviews with contributors to small, niche subreddit communities.
  • A large-scale analysis of the relationships between communities.

Successful applicants will have an interest in online communities, social science or social computing research, and the ability to balance collaborative and independent work. No specialized skills are required and we will adapt work assignments and training to the skills and interests of the person(s) hired. Relevant skills might include: coursework, research, and/or familiarity with digital media, online communities, human computer interaction, social science research methods such as interviewing, applied statistics, and/or data science. Relevant software experience might include: R, Python, Git, Zotero, or LaTeX. Again, no prior experience or specialized skills are required. 

Expected minimum time commitment is 10 hours per week through the remainder of the Winter quarter (late March) with the possibility of working additional hours and/or continuing into the Spring quarter (April-June). All work will be performed remotely.

Interested applicants should submit a resume (or CV) along with a short cover letter explaining their interest in the position and any relevant experience or skills. Applicants should indicate whether they would prefer to pursue this through Federal work-study, for course credit (most likely available only to current students at one of the institutions where CDSC affiliates work), or as a paid position (not Federal work-study). For paid positions, compensation will be $15 per hour. Some funding may be restricted to current undergraduate students (at any institution), which may impact hiring decisions.

Questions and/or applications should be sent to Professor Aaron Shaw. Work-study eligible Northwestern University students should indicate this in their cover letter. Applications will be reviewed by Professor Shaw and current CDSC-NU team members on a rolling basis and finalists will be contacted for an interview.

The CDSC strives to be an inclusive and accessible research community. We particularly welcome applications from members of groups historically underrepresented in computing and/or data sciences. Some of these positions are funded through a U.S. National Science Foundation Research Experience for Undergraduates (REU) supplement to award numbers IIS-1910202 and IIS-1617468.

COVID-19 Digital Observatory awarded Open Innovation Grant from Protocol Labs Research

Last week, Protocol Labs Research announced their COVID-19 Open Innovation Grant recipients and we are thrilled to share that the Community Data Science Collective’s COVID-19 Digital Observatory is among the awarded projects!

Protocol Labs works to improve internet technologies through open source protocols, systems, and tools. The organization initially grew out of efforts to apply blockchain tools to support distributed file sharing infrastructure. Their research group, Protocol Labs Research, created the COVID-19 Open Innovation Grants program “to surface and support open-source projects working on tools to help humanity through present and future pandemics.”

Among the ten projects supported under the program, the others aim to develop open source medical devices (such as an origami respirator!), contact tracing infrastructure, device development and testing, and engineering collaboration. We feel grateful and humbled to be in the company of these diverse efforts to apply open collaboration to the response to COVID-19!

In the case of the COVID-19 Digital Observatory, we plan to use the funds provided by the award to build out the resources we have already started to aggregate and release. In particular, we will build additional infrastructure to process and archive data from Reddit and other social media sources as well as search engine results pages (SERPs) for COVID-related queries.

In addition to folks in the collective, the proposal was successful through the efforts of Jason Baumgartner from Pushshift, who is co-leading the observatory work, as well as Marysia Galent, Research Administrator at Northwestern University, whose expert guidance helped make the grant application possible.

Teaching introduction to statistics and statistical computing

Why learning some statistics and some data visualization matters. (Gif from Matejka and Fitzmaurice, “Same stats, different graphs”, CHI 2017: https://dl.acm.org/citation.cfm?id=3025912)

I taught a graduate-level introduction to applied statistics and statistical computing this past Spring. The course design iterated on a class Mako developed in 2017. Very nearly all of the course materials are available open access through the Community Data Science Collective wiki and I wanted to make sure to share them more widely with this post. I’ve also been reflecting a bit on how the course went and thought I’d share those thoughts here in case anyone wants to adopt the course in the future.

First off, the course uses the OpenIntro Statistics (3rd edition) textbook as the core of the course readings and assignments. If you’re not familiar with OpenIntro and you want to learn or teach applied statistics from a general, social scientific perspective, you should check it out! All of the data, code, and LaTeX used to produce the textbook is licensed freely for reuse and the site also hosts video lectures, lecture notes, homework assignments, a discussion forum and more.

Alongside the OpenIntro materials, I worked together with Jeremy Foote (who was the TA for the course before he left to join the faculty at Purdue) to develop a bunch of tutorials in RMarkdown to help students complete the problem set assignments. We also posted worked solutions to the problem sets (also in RMarkdown). These replicated and expanded on screencasts Mako had recorded for his course.

The classroom sessions focused on discussion and problem solving. Basically, students came to each session knowing that I expected them to have completed the problem sets. I then did my best to answer any questions people had and assigned individuals (in some cases using a randomization script in R to pick names!) to summarize their solutions and approaches to specific problems that seemed important to cover.

It was my first time teaching a course like this and I had a few reflections after completing the quarter and reading through the feedback from students.

A major challenge for a course like this is pitching the material to an appropriate level given that students (in the MTS and TSB programs here at Northwestern at least) arrive with such varied knowledge of the subject matter. I think I did okay on this front in some ways and not in others. It was especially challenging given the semi-flipped classroom approach.

In some weeks, there was just too much material to cover in adequate depth. In some others, I was insufficiently organized and concise to cover everything. Whatever the case, I would cut back a bit next time. (I’ve noticed that this is a common issue for me the first time I teach any class, but I still struggle to correct it.)

Whatever challenges and failures I may have introduced in the design or instruction of the course, the students produced a bunch of highly original and engaging final projects. I’m optimistic that some of these projects will wind up as published work soon. Nothing like brilliant, motivated students to help the professor feel better about his own shortcomings!

Nearly all of the course materials are available on the CDSC wiki. The exceptions are a few of the readings and supplementary materials that I didn’t have the rights or desire to post on the public web. If you’re looking for any of that, feel free to send me an email and I can see if it’s appropriate to share.

Also, OpenIntro just came out with the fourth edition of their statistics textbook! I haven’t had a chance to check it out yet, but I’m eager to see what kinds of changes they introduced.

Adventures in applied data science

Many organizations have unprecedented access to data, experiments, and statistical inference. The diffusion of these resources has created pressure to develop the skills and practices necessary to use them. However, the distribution of these skills and practices has an organizational component, leading some teams and organizations to harness social scientific insights far more effectively than others.

Handy web-based tools like ABBA can make A/B testing more accessible

We hear plenty about examples of “bad” statistics in the news. For example, Brian Wansink and the Cornell Food Lab have gotten a whole lot of attention for problems in their statistical analysis and interpretation. More than sheer ignorance or malfeasance (although there may be some evidence of that too), I think the reproducibility crisis illustrates how pervasive pressure to produce statistical evidence, combined with uneven professional standards, can lead to dodgy research.

Our capacity to gather data and apply inferential statistics may have gotten ahead of our collective ability to manage these resources skillfully. In academia, this might lead to publications with spurious findings. In other kinds of environments, it might lead to decisions based on evidence of questionable quality. In both cases organizational resource constraints and communication challenges shape whether, where, and how well data science and statistics get done.

A slightly long story illustrates how this can play out in a non-academic environment, specifically a fairly small technology company. I share the story as a cautionary tale that can hopefully provoke some useful reflection about how we (people who care about evidence-based decision making, data science, statistics, and applied social science) can improve our work. I have de-identified the organization and the individuals involved because this is really not about them per se. The challenges they face are common. I think the story can tell us something interesting about those challenges.

Within the organization, several teams conduct experiments, user tests, and other sorts of data-intensive, social scientific research. One of these teams had reached out because they had some questions about methods of analysis. Within the organization, this particular team had gotten positive feedback for their adoption of a data-driven pipeline of A/B testing, but there were concerns about whether the testing was being done well. I went to visit them planning to do a little bit of informal statistical consulting and to learn more about that part of the organization.

A few team members walked me through a typical field experiment with multiple (about 10) treatment conditions. Everything runs on a small stack of custom scripts that pull summary data from the platform’s databases. The team uses spreadsheets to record the number of individuals assigned to each condition along with the number of “successful” trials (e.g., cases where an end-user has the desired response to a given design change).

The team then enters the raw summary information into an open source web-based tool called ABBA that runs some calculations and reports a “success rate” (a smoothed percentage) for each trial, a raw and percentage-based confidence interval for the success rate, and a p-value (based on a binomial cumulative distribution function or a normal approximation for large samples). ABBA also presents a handy little visualization plotting the interval estimated for each experimental condition along a bar colored either gray (not different from control), red (lower success rate than control), or green (higher success rate than control) depending on the results of the corresponding hypothesis test. I’ve included a screenshot of what this looks like at the top of the post and you can try it yourself.
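
To make the mechanics concrete, here is a rough sketch of the kind of per-condition calculation behind a report like this: a pooled two-proportion z-test of each condition against the control plus a normal-approximation confidence interval for the difference in success rates. This is my own illustration rather than ABBA’s actual code (which also applies the smoothing described above), and the counts are invented.

```python
# My own rough sketch (not ABBA's actual code) of the kind of per-condition
# calculation such a report involves: a pooled two-proportion z-test against
# the control and a normal-approximation confidence interval for the
# difference in success rates. The counts are invented.
import math
from scipy.stats import norm

def compare_to_control(success_c, n_c, success_t, n_t, alpha=0.05):
    p_c, p_t = success_c / n_c, success_t / n_t
    diff = p_t - p_c

    # Pooled standard error under the null hypothesis of no difference.
    p_pool = (success_c + success_t) / (n_c + n_t)
    se_null = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = diff / se_null
    p_value = 2 * norm.sf(abs(z))

    # Unpooled standard error for the confidence interval on the difference.
    se_diff = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    half_width = norm.ppf(1 - alpha / 2) * se_diff
    return diff, (diff - half_width, diff + half_width), p_value

# One of ~10 treatment conditions compared against the control.
diff, ci, p = compare_to_control(success_c=480, n_c=10_000, success_t=530, n_t=10_000)
print(f"difference in success rates: {diff:.4f}, "
      f"95% CI ({ci[0]:.4f}, {ci[1]:.4f}), p = {p:.3f}")
```

Nothing about this arithmetic is exotic, which is part of the point.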

Those of you with a statistical background following me into the weeds here might be nodding and thinking “okay, sounds maybe not ideal, but reasonable enough.” While the system puts too much faith in p-values, it follows a pretty standard approach. It’s also a great example of the kind of statistics-as-a-service approach to A/B testing that many organizations have adopted in response to various pressures to be more data driven.

That’s when things started to get weird. As we spoke more, it turned out that the ways members of the team conduct the tests, enter the data, and interpret the results raise major red flags.

For example, they regularly update the number of experimental conditions on-the-fly, dropping old conditions and adding new conditions when others already had thousands of observations (ABBA makes this super easy!).

When experimental conditions are dropped or added, the team routinely re-computes statistical tests and p-values with/without the new/old observations included. Mostly, conditions that do not seem to produce different outcomes from the control are silently removed from the analysis.

For some of the analysis itself, the team uses parametric tests that assume normal distributions on heavily skewed data.

Then, when it comes time to interpret the results, the analysts use the relative magnitude of p-values as an estimate of the magnitude of conditional effect sizes.

At this point, those of you with relevant training in applied statistics, experimental research methods, data science, etc. might be scratching your heads or experiencing full-on panic.

Separately, each of these steps is an inferential howler capable of invalidating results. Together, they render whatever results were coming out of this process untrustworthy in the extreme.
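
To see why, it helps to isolate just the first of those practices. The simulation below is my own illustrative sketch, not anything from the team’s pipeline: two arms with identical true success rates get compared again and again as observations accumulate, and a “win” is declared the moment p drops below 0.05. The share of runs that produce a false positive lands well above the nominal 5%.

```python
# Purely illustrative simulation (my own, not the team's pipeline; the
# parameters are made up) of repeatedly re-testing the same comparison as
# observations accumulate and declaring a result the moment p < 0.05.
# Both arms share the SAME true success rate, so every "significant"
# result here is a false positive.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def false_positive_with_peeking(n_checks=20, batch=500, rate=0.05):
    a_succ = b_succ = a_n = b_n = 0
    for _ in range(n_checks):
        # Another batch of observations arrives in each arm.
        a_succ += rng.binomial(batch, rate)
        b_succ += rng.binomial(batch, rate)
        a_n += batch
        b_n += batch
        # Re-run the pooled two-proportion z-test on everything so far.
        p_pool = (a_succ + b_succ) / (a_n + b_n)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
        z = (b_succ / b_n - a_succ / a_n) / se
        if 2 * norm.sf(abs(z)) < 0.05:
            return True  # declared "significant" despite no real difference
    return False

runs = 2_000
hits = sum(false_positive_with_peeking() for _ in range(runs))
print(f"false positive rate with repeated peeking: {hits / runs:.1%} (nominal: 5.0%)")
```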

For the rest of the meeting, I did my best to identify a series of steps the team could take to avoid the problems above. But I still walked away disconcerted. This was a technically sophisticated organization with plenty of resources. The team was using a pretty well-designed tool for analyzing experimental data. They had gotten critical feedback on the work they were doing. How did a situation like this happen?

The individuals on the team were doing their best. Nobody is born with deep knowledge of applied statistics. Confronted with a challenging mandate from their supervisors, these people were all doing their absolute best to apply some tools they didn’t fully understand to solve a practical problem. They had generally been told that their work was good, knew they had some issues to fix, and reached out to someone with more knowledge (in this case me) for help.

What about the tools? Can we at least blame the tools? As I mentioned earlier, a bunch of companies are in the business of providing “statistics-as-a-service” or A/B testing platforms, but I’m not convinced that these are the root of the problem either. Sure, ABBA makes some mistakes a little too easy, but the tool was also built and shared by skilled data scientists who painstakingly documented everything before distributing it on GitHub. Their documentation is why I was able to sort out exactly what was happening in the first place and help the team members understand some of the issues involved. Indeed, nothing seems obviously or fundamentally wrong with the implementation of the underlying software or the statistical tests. Instead, the misuse of the system happened despite the software designers’ best efforts.

Here we get into one problem area: the incentives to produce specific kinds of outcomes. The team using the tool needed to run experiments and interpret them as decisive “wins” or “losses.” The reality was much less clear and, in this way, the p-values obscured some of that ambiguity. Imposing a dichotomous logic on experimental evidence is often impossible and will, even under the best conditions, lead to systematic abuses of statistical reasoning.

What about the organizational leadership then? Shouldn’t they be responsible for making sure that the company does high quality data science? On the one hand yes, and on the other hand, this is hard too and understandable problems arise. Executives and managers often lack the requisite statistical expertise to evaluate operations like this in a rigorous way. They have heard, through professional networks, industry publications, media, etc., that more data and more A/B tests are Good Things for their organization. At a certain point, they cannot do the auditing of experimental procedures and inference themselves.

Shouldn’t the managers just make sure someone else can audit the statistics then? This is probably where the most important breakdowns occurred. Turns out that other staff possess all the skills to diagnose and repair the issues I identified (and more). One of these people had even been assigned to work with the team in question for a while! However, that assignment had ended during a restructuring and statistical expertise had never returned to the team. In the meantime, managers continued to demand results without fully appreciating that the existing approach had deep problems.

So given this particular mix of data and organizational sciences gone awry, what lessons can we learn?

The future of data-intensive social science remains, as William Gibson might say, unevenly distributed. As the infrastructure for data collection and analysis has become more widely accessible, the choke-point in many organizations has become the dissemination of deeper knowledge of the techniques necessary to produce valid, reliable inference. These inequalities emerge both within and between organizations. Some companies and some teams have more expertise than others. Some have more effective systems for feedback and improvement than others.

In this sense, organizational (not just technical or statistical) obstacles stand in the way of more effective, accountable, and transparent uses of evidence to make decisions. Web-scale organizations can run 100,000 randomized trials and analyze the results very quickly. The results can look real and have p-values attached and the executives can believe that they have got the whole data science thing nailed down. However, the analysis might not mean much unless it is implemented skillfully.

The inundation of behavioral trace data does not guarantee that we will be similarly inundated by reliable findings, valid inference, or skilled implementation. High quality research design and interpretation may not scale so easily as the data or the analysis tools.

All of this has distributive implications. Organizations with access to the best social scientific knowledge as well as the organizational capacity to deploy and harness that knowledge will be the ones most likely to reap benefits from it. Others, such as many public administrations in the U.S. (especially those that deliver social services), smaller firms, non-profits, and community organizations will likely get inferior inference (to the extent they get any at all).

It takes time and effort to build organizational resources and cultures capable of supporting widespread, high quality, data-driven inference. Some recent work in HCI and related fields speaks to these issues. For example, some folks at CU Boulder have a 2017 CHI paper about how mission-driven organizations can struggle to do data-driven work. In a more interventionist vein, Catherine D’Ignazio and Rahul Bhargava have launched the Data Culture Project in an effort to help smaller non-profits and community organizations use data more effectively.

Whatever the organizational context, high quality social scientific and statistical work requires more than just a clear understanding of p-values and massive A/B testing infrastructure. Statistical expertise also needs to be embedded and managed effectively within organizations and teams in order to produce reliable inference.


This is a cross-post from the CASBS Medium channel. Thanks to members of the CDSC, Margaret Levi, and some anonymous friends for feedback on earlier versions of the text.

Peer production between real utopia and naive Coaseanism?

Over at Crooked Timber, Henry Farrell and others recently held a book seminar to discuss Cory Doctorow’s Walkaway. The symposium led to an extended discussion between Henry, Cory, Henry again, and Yochai Benkler about Benkler’s early work on commons-based peer production, spaces of resistance in the contemporary information economy, and the state of peer production a little over fifteen years since Benkler introduced the term. This (far too long) post summarizes some of their key points as a way of starting to collect my own thoughts on these questions.

I haven’t read Walkaway yet (downloaded my DRM-free digital copy, but the fiction slot in my brain is currently occupied by Philip Pullman’s totally engrossing La Belle Sauvage), but I can’t wait to get to it. Cory says the book started as an exercise in projecting how the sociotechnical transformations Benkler laid out in Coase’s Penguin might facilitate the spread of utopian energies at the periphery of radically unequal societies not so different from our own:

It’s been 15 years since Benkler made the connection between “commons-based peer-production” and Coase…

Down and Out in the Magic Kingdom projected Slashdot karma and Napster superdistribution across a whole society as a way of illuminating the strengths and weaknesses of both. Walkaway tries to do the same with commons-based peer-production: what would a skyscraper look like if it was a Wikipedia-style project? How about a space program?

As a Coasean tale, Walkaway is one [about] the battleground between the technological, Promethean left—which has promised to lift peasants up to the material comfort of lords—and the de-growth green left, which promises to bring lords down to the level of the peasants in the name of saving the planet.

and later:

This is (in my view) a Utopian vision. It supposes that the Bohemian projects that even the most buttoned-down societies allow at their margins can breed real discontent and nurture and sustain it into something that genuinely challenges its host… They provided real-world lessons on which tactics worked and where the weaknesses were. They were battles, not the war. The only thing more extraordinary than a social justice prevailing at all is for it to prevail on its first outing, or second, or third.

In his contribution to the seminar, Henry points to Cory’s assumption that “exit” (in Hirschman’s sense) remains viable in a society pervaded by vast power inequalities, surveillance capabilities, and an (increasingly weaponized) disregard for privacy:

Again, Doctorow’s book isn’t an exercise in predictive science – he’s not saying that things will be so. But he is saying, I think, that things could and should be so, or sort-of so. Walkaway is quite unashamedly a didactic book in the way that earlier books such as Homeland were didactic – he has a very clear message to get across. In conversations with Steve Berlin Johnson years ago, I came up with the term BoingBoing Socialism to refer to a specific set of ideas associated with Doctorow and the people around him – that free exchange of ideas unimpeded by intellectual property law and the like, together with transformative technologies of manufacture, could open up a path towards a radically egalitarian future. Unless I’m seriously mistaken (in which case I’m sure that Doctorow will tell me), Walkaway wants to do two things – to argue for why such a future might be attractive, and to suggest that something like this future could be feasible.

For Henry, the implications boil down to questions of power and the role powerful entities play in shaping the lives of even the most peripheral, socially excluded groups within a society. He also (later on) expresses skepticism at the political prospects of the revolutionary vision of “BoingBoing Socialism” that adopts a rhetoric of contingency and self-marginalization as its platform for change.

Ronald Coase. 2003, U of Chicago Law School.

In a followup post, Henry elaborates a claim that Benkler engaged in a sort of naive Coasean disregard for power relations when he laid out the definitional statements on peer production. Henry says Benkler emphasized transaction cost and efficiency-centric explanations for the potential of peer production to substitute for firm or market-based modes of knowledge production and exchange:

Power relationships often explain who gets what, and which forms of organization are taken up, and which fall by the wayside. In general, forms of production that are (a) more efficient, but (b) inconvenient or unprofitable for powerful actors, are probably not going to be taken up, since those powerful actors will block them. Yet if one starts from an efficiency perspective, it is very hard to build power relations in, since one believes that change in practices and institutions is not driven by power relations but by efficiency.

and later:

What this means, if you take it seriously, is that Coaseian coordination is a special case of bargaining. Broadly speaking, Coaseian processes will lead to efficient outcomes only under very specific circumstances – when the actors have symmetrical breakdown values, as in the first game, so that neither of them is able to prevail over the other. More simply put, the Coase transaction cost account of how efficient institutions emerge will only work when all actors are more or less equally powerful. Under these conditions, it is perfectly alright to assume as Coase (and Benkler by extension) do, that efficiency considerations rather than power relations will drive change. In contrast, where there are significant differences of power, actors will converge on the institutions that reflect the preferences of powerful actors, even if those institutions are not the most efficient possible.

and finally:

In short – we need to distinguish between the rhetorical claims that technological change will bring openness along with it, and the (far more sustainable) claim that technology will probably only have openness enhancing benefits in a world where we are already dealing with the underlying power relations.

Benkler responds that Farrell is right to question his (Benkler’s) approach to power, but wrong about the diagnosis: the failure of those arguments in Coase’s Penguin and The Wealth of Networks stems not from naive Coaseanism but from a different dimension of power entirely:

My primary mistake in my work fifteen years ago, and even ten, was not ignoring the role of power in shaping market patterns, but in understating the extent to which the new “market actors who will build the tools that make this population better able…” will themselves become the new incumbent market actors who will shape the environment to increase and lock-in their power. That is certainly a mistake in reading the landscape of power grabs, and I have tried to correct over the intervening years, most recently by offering a map of what has developed in the past decade…

In other words, today’s Benkler argues that yesterday’s Benkler underestimated the adaptive capacities of various incumbent powers as well as the way that a continuously shifting technical, regulatory, and political environment would alter the landscape along the way.

All of this speaks to an ongoing conversation Mako and I have been having about the past, present, and future of peer production. A pessimistic account might run like this: peer production thrived from ~1995-2008 in part because incumbent firms and private actors had not figured out how to capitalize on the possibilities for community-based provision of resources unlocked by the diffusion of digitally networked communications infrastructure. Now that increasing numbers of firms have done so, there is no going back. Large firms as well as their venture-funded spawn will continue to eat peer production communities’ lunch, undermining their viability as well as their autonomy. Peer production as we know it will eventually disappear, becoming a curious relic of a more naive era when the electronic frontier remained an unsettled, experimental space.

Another possibility, arguably more optimistic, can be seen in Benkler and Doctorow’s contributions to this exchange. Rather than consigning peer production to the dustbin of history, they both suggest that room for maneuver (or “degrees of freedom” in Benkler’s terms) will remain at the margins of the networked information economy and that communities of “walkaways” may persist in experimenting with “real utopian” autonomous alternatives to the more extractive, winner-take-all models of “supercapitalist” knowledge production and exchange. Doctorow’s fiction seems to explore the (hopeful) potential of these walkaway communities to generate radical, systematic transformation. Benkler, in his more recent writings, holds out some hope, but of a highly contingent, tenuous, and circumscribed sort.

The original posts are worth a read.