In a recent Nature news article (9 March), science reporter Davide Castelvecchi describes Einstein@home, a distributed computing project that analyses astronomical data. Such data are collected by the LIGO project, which hit the headlines in January when the detection of gravitational waves from a black-hole merger was confirmed. Einstein@home searches for signals of gravitational waves coming from other types of astronomical objects, especially fast-spinning neutron stars. Such a search is computationally very intensive, and lends itself well to distributed computing.
Distributed computing is made possible by the wide availability of processors (i.e. our PCs), the fact that they are networked, and the fact that the typical user of a PC only uses a fraction of her machine's computing capabilities. Platforms for distributed computing exploit this processing idle-time for computing-intensive tasks, and especially analytic tasks in big data science. Distributed computing has several advantages, scale being the most important one. Just think about the technical challenge of cooling down, let us say, 10,000 computers piled up in one physical location. Many research groups around the world, when faced with the computational limitations of their own in-lab computers, opt for "citizen science" approaches, as in the title of the Nature report.
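The idle-time model described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the names and structure are illustrative assumptions, not the real BOINC or Einstein@home API): a volunteer client repeatedly fetches a chunk of data, analyses it locally, and returns the result to the project.

```python
from dataclasses import dataclass

@dataclass
class WorkUnit:
    """A chunk of work handed out by the project server (illustrative)."""
    unit_id: int
    samples: list  # a slice of detector data to analyse

def fetch_work(server_queue):
    """Stand-in for downloading the next work unit from the server."""
    return server_queue.pop(0) if server_queue else None

def analyse(unit):
    """Stand-in for the real signal search; here we just sum the samples."""
    return sum(unit.samples)

def run_client(server_queue):
    """Process work units one at a time, as a volunteer client would do
    whenever the host machine is otherwise idle, and report results back."""
    results = {}
    while (unit := fetch_work(server_queue)) is not None:
        results[unit.unit_id] = analyse(unit)  # "upload" the result
    return results
```

The point of the sketch is the division of labour: the server only splits data and collects results, while all the heavy analysis runs on volunteers' otherwise-idle machines.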
What does distributed computing have to do with citizen science anyway? Seemingly little: this form of volunteerism requires very little from participants, who are not even asked to use their own brain power, as happens instead in gamified tasks. The sense in which volunteering personal computer time counts as doing science is very thin indeed. This is only part of the picture, however, and not even the most important part.
The fact that major chunks of the infrastructure required to do science – to produce knowledge – are dispersed throughout the population has several desirable features. First, it means that some non-negligible part of the means of production of knowledge is controlled by you and me. Individual decision power over research agendas is of course very small, indeed negligible. But collectively, the thousands of volunteers who decide to download Einstein@home do vote on what science they want to happen.
Recently, researchers in behavioural psychology and information technology have started looking at how incentives can be created to lure participants into citizen science and gamified projects, and to keep them contributing once they join. This is not surprising, and indeed it closely mirrors what has happened on the internet more generally. The artisanal and even subversive beginnings were taken over or outnumbered by all sorts of projects that nudge people into productive behaviours that have little to do with their being in control of knowledge production (although of course they may have other desirable effects).
It is thus reassuring that the Einstein@home website does not boast the lounge-like, lofty appearance of so many web 2.0 platforms but proudly retains a basic 2005-style homepage, which ensures that little researcher time was spent on nudging. And behind the façade, there is the even sturdier page of the BOINC platform, the open-source software for volunteer computing employed by academic research projects around the world.
Open-source software is the paradigm of new forms of production. Together with distributed computing, open-source software is indeed a defining part of what production could look like in a networked society. "Citizen science" is then not the name for what happens when you download a screensaver that tells you that your computer is being used by LIGO scientists, but a broader ideal regarding how knowledge production is changing, including in cutting-edge astrophysics.