Shannon’s Ghost

I’m spending the 2018-2019 academic year as a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford.

Claude Shannon on a bicycle.

Every CASBS study is labeled with a list of  “ghosts” who previously occupied the study. This year, I’m spending the year in Study 50 where I’m haunted by an incredible cast that includes many people whose scholarship has influenced and inspired me.

The top part of the list of ghosts in Study #50 at CASBS.

Foremost among this group is Study 50’s third occupant: Claude Shannon.

In his master’s thesis, written when he was 21 years old (and sometimes cited as the most important master’s thesis in history), Shannon proved that electrical circuits could encode any relationship expressible in Boolean logic, opening the door to digital computing. Incredibly, this is almost never cited as Shannon’s most important contribution. That came in 1948, when he published a paper titled A Mathematical Theory of Communication which effectively created the field of information theory. Less than a decade after its publication, Aleksandr Khinchin (the mathematician behind my favorite mathematical constant) described the paper saying:

Rarely does it happen in mathematics that a new discipline achieves the character of a mature and developed scientific theory in the first investigation devoted to it…So it was with information theory after the work of Shannon.

As someone whose own research seeks to advance the computational and mathematical study of communication, I find it incredibly propitious to be sharing a study with Shannon.

Although I teach in a communication department, I know Shannon from my background in computing. I’ve always found it curious that, despite the fact that Shannon’s 1948 paper is almost certainly the most important single thing ever published with the word “communication” in its title, Shannon is rarely taught in communication curricula and is sometimes completely unknown to communication scholars.

In this regard, I’ve thought a lot about this passage in Robert Craig’s influential article “Communication Theory as a Field” which argued:

In establishing itself under the banner of communication, the discipline staked an academic claim to the entire field of communication theory and research—a very big claim indeed, since communication had already been widely studied and theorized. Peters writes that communication research became “an intellectual Taiwan-claiming to be all of China when, in fact, it was isolated on a small island” (p. 545). Perhaps the most egregious case involved Shannon’s mathematical theory of information (Shannon & Weaver, 1948), which communication scholars touted as evidence of their field’s potential scientific status even though they had nothing whatever to do with creating it, often poorly understood it, and seldom found any real use for it in their research.

In preparation for moving into Study 50, I read a new biography of Shannon by Jimmy Soni and Rob Goodman and was excited to find that Craig—although accurately describing many communication scholars’ lack of familiarity—almost certainly understated the importance of Shannon to communication scholarship.

For example, the book form of Shannon’s 1948 article was published by the University of Illinois Press at the urging, and under the editorial supervision, of Wilbur Schramm (one of the founders of modern mass communication scholarship), who was a major proponent of Shannon’s work. Everett Rogers (another giant in communication) devotes a chapter of his “History of Communication Studies”² to Shannon and to tracing his impact in communication. Both Schramm and Rogers built on Shannon in parts of their own work. Shannon has had an enormous impact, it turns out, in several subareas of communication research (e.g., attempts to model communication processes).

Although I find these connections exciting, my own research—like most of the rest of communication—is far from the substance of the technical communication processes at the center of Shannon’s own work. In this sense, it can be a challenge to explain to my colleagues in communication—and to my fellow CASBS fellows—why I’m so excited to be sharing a space with Shannon this year.

Upon reflection, I think it boils down to two reasons:

  1. Shannon’s work is both mathematically beautiful and incredibly useful. His seminal 1948 article points to concrete ways that his theory can be useful in communication engineering including in compression, error correcting codes, and cryptography. Shannon’s focus on research that pushes forward the most basic type of basic research while remaining dedicated to developing solutions to real problems is a rare trait that I want to feature in my own scholarship.
  2. Shannon was incredibly playful. Shannon played games, juggled constantly, and was always seeking to teach others to do so. He tinkered, rode unicycles, built a flame-throwing trumpet, and so on. With Marvin Minsky, he invented the “ultimate machine”—a machine whose only function is to turn itself off—which he kept on his desk.

    A version of Shannon’s “ultimate machine” that is sitting on my desk at CASBS.
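
To make the first point concrete: the central quantity of the 1948 paper, entropy, gives the minimum average number of bits per symbol needed to encode a source, which is what makes it the foundation of compression and coding. A minimal Python sketch (my own illustration, not Shannon’s notation):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol: H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform 4-symbol alphabet needs exactly 2 bits per symbol;
# a constant message carries no information at all.
print(shannon_entropy("abcd"))  # 2.0
print(shannon_entropy("aaaa"))  # 0.0
```

No compressor can beat this bound on average, which is why entropy shows up everywhere from ZIP files to error-correcting codes.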

I have no misapprehension that I will accomplish anything like Shannon’s greatest intellectual achievements during my year at CASBS. I do hope to be inspired by Shannon’s creativity, focus on impact, and playfulness. In my own little ways, I hope to build something at CASBS that will advance mathematical and computational theory in communication in ways that Shannon might have appreciated.


  1. Incredibly, the year that Shannon was in Study 50, his neighbor in Study 51 was Milton Friedman. Two thoughts: (i) Can you imagine?! (ii) I definitely chose the right study!
  2. Rogers’ book was written, I found out, during his own stint at CASBS. Alas, it was not written in Study 50.

This post was also published on Benjamin Mako Hill’s blog.

Sayamindu Dasgupta Joining the University of North Carolina Faculty

Sayamindu Dasgupta head shot

The School of Information and Library Sciences (SILS) at the University of North Carolina at Chapel Hill announced this week that the Community Data Science Collective’s very own Sayamindu Dasgupta will be joining their faculty as a tenure-track assistant professor. The announcement from SILS has much more detail and makes very clear that UNC is thrilled to have him join their faculty.

UNC has every reason to be excited. Sayamindu has been making our research collective look good for several years. Much of this is obvious in the pile of papers and awards he’s built. In less visible roles, Sayamindu has helped us build infrastructure, mentored graduate and undergraduate students in the group, and has basically just been a joy to have around.

Those of us who work in the Community Data Lab at UW are going to miss having Sayamindu around. Chapel Hill is very, very lucky to have him.

Community Data Science Collective at ICA 2018 in Prague

Jeremy Foote, Nate TeBlunthuis, and Mako Hill are in Prague this week for the  International Communication Association’s 2018 annual meeting.

ICA 2018 (Prague)

The collective has three things on the conference program this year:

  • Fri, May 25, 9:30 to 10:45, Hilton Prague, LL, Vienna: An Agent-Based Model of Online Community Joining as part of the Computational Methods section paper session on “Agent-Based Modeling for Communication Research” — Jeremy Foote (presenting), Benjamin Mako Hill and Nathan TeBlunthuis
  • Fri, May 25, 12:30 to 13:45, Hilton Prague, LL, Congress Hall II – Exhibit Hall/Posters: Revisiting ‘The Rise and Decline’ in a Population of Peer Production Projects as part of the Information Systems section’s poster session “ICA Interactive Paper/Poster Session I” —Nathan TeBlunthuis (presenting), Aaron Shaw, and Benjamin Mako Hill
  • Mon, May 28, 9:30 to 10:45, Hilton Prague, M, Palmovka: Theory Building Beyond Communities: Population-Level Research in the Computational Methods section’s panel on “Communication in the Networked Age: A Discussion of Theory Building through Data-Driven Research” — Benjamin Mako Hill (presenting) and Aaron Shaw

We look forward to sharing our research and socializing with you at ICA! Please be in touch if you’re around and want to meet up!

Open Lab at the University of Washington

If you are at the University of Washington (or not at UW but in Seattle) and are interested in seeing what we’re up to, you can join us for a Community Data Science Collective “open lab” this Friday (April 6th) 3-5pm in our new lab space (CMU 306). Collective members from Northwestern University will be in town as well, so there’s even more reason to come!

The open lab is an opportunity to learn about our research, catch up over snacks and beverages, and pick up a sticker or two. We will have no presentations but several posters describing projects we are working on.

OpenSym 2017 Program Postmortem

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything made it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things has been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym as a whole.

Overview

Statistics
Papers submitted: 44
Papers accepted: 20
Acceptance rate: 45%
Posters submitted: 2
Posters presented: 9
Associate Chairs: 8
PC Members: 59
Authors: 108
Author countries: 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

distribution of papers across topics with breakdown by accept/poster/reject

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to General Chair Lorraine Morgan’s involvement (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind: reviewers’ identities were hidden but authors’ identities were not. Each paper received at least 3 reviews plus a metareview by the Associate Chair assigned to the paper; ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system, with 0 marking a borderline paper. Reviewers scored papers using full-point increments.

scores for each paper submitted to opensym 2017: average, distribution, etc

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster, and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.
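
The decision pattern described above, average score as a soft threshold with case-by-case judgment on top, can be sketched in a few lines. The papers and scores below are invented for illustration; they are not the actual 2017 reviews:

```python
# Hypothetical review scores on the -3..+3 scale described above.
reviews = {
    "paper-A": [2, 1, 0],
    "paper-B": [-1, 1, 2],   # a negative score need not be a showstopper
    "paper-C": [-2, -1, 0],
}

def tentative_decision(scores, threshold=0.0):
    """Average-score heuristic; real decisions were made case-by-case by ACs."""
    avg = sum(scores) / len(scores)
    return "likely accept" if avg > threshold else "reject or poster"

for paper, scores in reviews.items():
    print(paper, tentative_decision(scores))
# paper-A likely accept; paper-B likely accept; paper-C reject or poster
```

Note that paper-B is “likely accept” despite one negative score, mirroring the pattern in our actual decisions.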

Rebuttals

This was the first time that OpenSym used a rebuttal or author response phase, and we are thrilled with how it went. Although rebuttals were entirely optional, almost every team of authors submitted one! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower Unchanged Higher
6 24 10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

histogram of paper lengths for final accepted papers

In the end, 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears some have expressed that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts seem to be unwarranted—at least so far.

Bidding

Although I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every assigned review went to someone who had bid “yes” or “maybe” on the paper in question, and the vast majority went to people who had bid “yes.” However, this comes with one major proviso: people who did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that of your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people who had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, or more likely to drop things in general. It might also be that reviewers who fail to bid get poor matches, which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.
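
For the curious, the core of bid-based assignment can be sketched as a simple greedy matcher. The names and bids below are hypothetical, and EasyChair’s actual assignment algorithm is more sophisticated, but this captures the idea, including the “no bid counts as maybe” proviso:

```python
# Hypothetical bids; a reviewer with no bid on a paper is treated as "maybe",
# mirroring the proviso described above.
bids = {
    "alice": {"p1": "yes", "p2": "maybe"},
    "bob":   {"p1": "maybe", "p2": "yes"},
    "carol": {},  # never bid: counts as "maybe" on everything
}
preference = {"yes": 0, "maybe": 1}

def assign(papers, bids, per_paper=2):
    """Greedily assign reviewers, preferring 'yes' bids and balancing load."""
    assignments = {}
    load = {r: 0 for r in bids}
    for p in papers:
        # Rank reviewers by bid strength, breaking ties by current load.
        ranked = sorted(bids, key=lambda r: (preference[bids[r].get(p, "maybe")], load[r]))
        chosen = ranked[:per_paper]
        for r in chosen:
            load[r] += 1
        assignments[p] = chosen
    return assignments

print(assign(["p1", "p2"], bids))
```

Even this toy version shows the failure mode we saw: a non-bidding reviewer like “carol” gets assigned papers purely by load-balancing, with no signal about fit.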

Conclusions

The final results were a fantastic collection of published papers. Of course, it couldn’t have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference, which doesn’t allow chairs to export data. As a result, the data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.

OpenSym 2017 Program Published

A few hours ago, OpenSym 2017 kicked off in Galway. For those that don’t know, OpenSym is the International Symposium on Open Collaboration (it was called WikiSym until 2014). It’s the premier academic venue focused on research on wikis, open collaboration, and peer production.

This year, Claudia Müller-Birn and I served as co-chairs of the academic program. Acting as program chair for an ACM conference like OpenSym is more like being a journal editor than a conference organizer. Claudia and I drafted and publicized a call for papers, recruited Associate Chairs and members of a program committee who would review papers and make decisions, coordinated reviews and final decisions, elicited author responses, sent tons of email to notify everybody about everything, and dealt with problems as they came up. It was a lot of work! With the schedule set, and the proceedings now online, our job is officially over!

OpenSym reviewed 43 papers this year and accepted 20 giving the conference a 46.5% acceptance rate. This is similar to both the number of submissions and the acceptance rates for previous years.

In addition to papers, we received 3 extended abstracts for posters for the academic program and accepted 1. There were an additional 7 promising papers that were not accepted but whose authors were invited to present posters and who will be doing so at the conference. The authors of posters will have extended abstracts about their posters published in the non-archival companion proceedings.

The list of papers being published and presented at OpenSym includes:

The following extended abstracts for posters will be published in the companion to the proceedings:

There was also a doctoral consortium and a non-academic “industry track” which Claudia and I weren’t involved in coordinating.

As part of running the program, we tried a bunch of new things this year including:

  • A move away from separate tracks back to a single combined model with Associate Chairs.
  • Bidding for papers among both Associate Chairs and normal PC members.
  • An author rebuttal/response period where authors got to respond to reviews and reviewers.
  • An elimination of page limits for papers. This meant that the category of notes also disappeared. Reviewers were instructed to evaluate the degree to which papers’ contributions were commensurate to their length.

I’m working on a longer post that will evaluate these changes. Until then, enjoy Galway if you were lucky enough to be there. If you couldn’t make it, enjoy the proceedings online!

You can learn more about OpenSym on its Wikipedia article or on the OpenSym website. You can find details on the schedule and the program itself at its temporary home on the OpenSym website. I’ll update this page with a link to the ACM Digital Library page when it gets posted.

The Community Data Science Collective Dataverse

I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

In the case of each of the first group of papers, where the dataset was a part of the contribution, we uploaded code and data to a website we’ve created. Of course, even if we do a wonderful job of keeping these websites maintained over time, eventually our research group will cease to exist. When that happens, the data will eventually disappear as well.

The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page but it’s online on the site. Moving forward, we’ll be populating this with new datasets we create as well as with replication datasets for our future empirical papers. We’re currently preparing several more.

The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it that way and it might help make some of it more discoverable. The websites we’ve created (like the ones for redirects and for page protection) will continue to exist and be maintained. The Dataverse is insurance for if, and when, those websites go down, to ensure that our data will still be accessible.


This post was also published on Benjamin Mako Hill’s blog Copyrighteous.

Roundup: Community Data Science Collective at CHI 2017

The Community Data Science Collective had an excellent time showing off our stuff at CHI 2017 in Denver last week. The collective presented three papers. If you didn’t make it to Denver, or if you just missed our presentations, blog post summaries of the papers — plus the papers themselves — are all online:

Additionally, Sayamindu Dasgupta’s “Scratch Community Blocks” paper — adapted from his dissertation work at MIT — received a best paper honorable mention award.

All three papers were published as open access so enjoy downloading and sharing the papers!

Introducing the Cannabis Data Science Collective

In 2012, Washington State became one of the first two US states to legalize cannabis for non-medical use. Since then, sales tax revenues from the “green economy” have flooded state coffers. Washington’s academic institutions have been elevated by that rising tide. The University of Washington (one of our research group’s two institutional homes) is now home to pot-focused grants from UW’s Center for Cannabis Research and the UW Law School’s Cannabis Law and Policy Project.

Today, our research group — formerly known as the “Community Data Science Collective” — announces that we too will be raiding that pantry to satisfy our own munchies.  Toward that end, we have changed our name to the Cannabis Data Science Collective. We’ll still be the CDSC, but we’re changing our logo to match our new focus.

The CDSC’s new logo!

Our research will leverage our existing expertise in studying the chronic challenges faced by online communities, peer production, and social computing. We plan to blaze ahead on this path to greener pastures.

Although we’re still in the early days of this new research focus, our group has started work on a series of projects related to cannabis, communication, and social computing. The preliminary titles below are a bit half-baked, but will give you a whiff of what’s to come:

  • Altered state: Mobile device usage on public university campuses before and after marijuana legalization
  • A tale of two edibles: Automated polysemy detection and the stevia/sativa controversy
  • Best buds: Online friendship formation and recreational drug use
  • Bing bong: The effect of legalization on Microsoft’s search results
  • Blunt truths: The effect of the joint probability distribution on community participation
  • Dank memes: The role of viral social media in marijuana legalization
  • Decision trees: The role of deliberation in governance of a marijuana sub-Reddit
  • The Effects of cannabis on word usage: An analysis of Wikipedia articles pre/post pot legalization
  • Fully baked: Evidence of the importance of completing institutionalized socialization practices from an online cannabis community
  • Ganja rep: A novel approach to managing identity on the World Weed Web
  • Hashtags: Bottom-up organization of the marijuana-focused Internet public sphere
  • Higher calling: Marijuana use and altruistic behavior online
  • Joint custody: Overcoming territoriality with shared ownership of work products in a collaborative cannabis community
  • Pass the piece: Hardware design and social exchange norms in synchronous marijuana-sharing communities
  • Pipe dreams: Fan fiction and the imagined futures of the marijuana legalization movement
  • Sticky icky: Keeping order with pinned messages on an online marijuana discussion board
  • Turn on, tune in, drop out: Wikipedia participation rates following marijuana legalization
  • Weed and Wikipedia: Marijuana legalization and public goods participation
  • World Weed Web: A look at the global conversation before and after half of the United States decriminalized

We planned to post this announcement about three weeks ago but our efforts were blunted by a series of events outside our control. We figured it was high time to make the announcement today!

 

New Dataset: Five Years of Longitudinal Data from Scratch

Scratch is a block-based programming language created by the Lifelong Kindergarten Group (LLK) at the MIT Media Lab. Scratch gives kids the power to use programming to create their own interactive animations and computer games. Since 2007, the online community that allows Scratch programmers to share, remix, and socialize around their projects has drawn more than 16 million users who have shared nearly 20 million projects and more than 100 million comments. It is one of the most popular ways for kids to learn programming and among the larger online communities for kids in general.

Front page of the Scratch online community (https://scratch.mit.edu) during the period covered by the dataset.

Since 2010, I have published a series of papers using quantitative data collected from the database behind the Scratch online community. As the source of data for many of my first quantitative and data scientific papers, it’s not a major exaggeration to say that I have built my academic career on the dataset.

I was able to do this work because I happened to be doing my master’s in a research group that shared a physical space (“The Cube”) with LLK and because I was friends with Andrés Monroy-Hernández, who started in my master’s cohort at the Media Lab. A year or so after we met, Andrés conceived of the Scratch online community and created the first version for his master’s thesis project. Because I was at MIT and because I knew the right people, I was able to get added to the IRB protocols and jump through the hoops necessary to get access to the database.

Over the years, Andrés and I have heard over and over, in conversation and in reviews of our papers, that we were privileged to have access to such a rich dataset. More than three years ago, Andrés and I began trying to figure out how we might broaden this access. Andrés had the idea of taking advantage of the launch of Scratch 2.0 in 2013 to focus on trying to release the first five years of Scratch 1.x online community data (March 2007 through March 2012) — most of the period that the codebase he had written ran the site.

After more work than I have put into any single research paper or project, Andrés and I have published a data descriptor in Nature’s new journal Scientific Data. This means that the data is now accessible to other researchers. The data includes five years of detailed longitudinal data organized in 32 tables with information drawn from more than 1 million Scratch users, nearly 2 million Scratch projects, more than 10 million comments, more than 30 million visits to Scratch projects, and much more. The dataset includes metadata on user behavior as well as the full source code for every project. Alongside the data is the source code for all of the software that ran the website and that users used to create the projects, as well as the code used to produce the dataset we’ve released.

Releasing the dataset was a complicated process. First, we had to navigate important ethical concerns about the impact that a release of any data might have on Scratch’s users. Toward that end, we worked closely with the Scratch team and the ethics board at MIT to design a protocol for the release that balanced these risks with the benefits of a release. The most important feature of our approach in this regard is that the dataset we’re releasing is limited to public data only. Although the data is public, we understand that computational access to data is different in important ways from access via a browser or API. As a result, we’re requiring anybody interested in the data to tell us who they are and agree to a detailed usage agreement. The Scratch team will vet these applicants. Although we’re worried that this creates a barrier to access, we think this approach strikes a reasonable balance.

Beyond the social and ethical issues, creating the dataset was an enormous task. Andrés and I spent Sunday afternoons over much of the last three years going column-by-column through the MySQL database that ran Scratch. We looked through the source code and the version control system to figure out how the data was created. We spent an enormous amount of time trying to figure out which columns and rows were public. Most of our work went into creating detailed codebooks and documentation that we hope make the process of using this data much easier for others (the data descriptor is just a brief overview of what’s available). Serializing some of the larger tables took days of computer time.

In this process, we had a huge amount of help from many others, including an enormous amount of time and support from Mitch Resnick, Natalie Rusk, Sayamindu Dasgupta, and Benjamin Berg at MIT, as well as from many others on the Scratch team. We also had an enormous amount of feedback from a group of a couple dozen researchers who tested the release, as well as from others who helped us work through the technical, social, and ethical challenges. The National Science Foundation funded both my work on the project and the creation of Scratch itself.

Because access to data has been limited, there has been less research on Scratch than the importance of the system warrants. We hope our work will change this. We can imagine studies using the dataset by scholars in communication, computer science, education, sociology, network science, and beyond. We’re hoping that by opening up this dataset to others, scholars with different interests, different questions, and in different fields can benefit in the way that Andrés and I have. I suspect that there are other careers waiting to be made with this dataset and I’m excited by the prospect of watching those careers develop.

You can find out more about the dataset, and how to apply for access, by reading the data descriptor on Nature’s website.

The paper and work this post describes is collaborative work with Andrés Monroy-Hernández. The paper is released as open access so anyone can read the entire paper here. This blog post was also posted on Benjamin Mako Hill’s blog.