Lorenzo Del Savio, Barbara Prainsack, Alena Buyx


The participation of non-professionally trained people in so-called citizen science (CS) projects is a much discussed topic at the moment. Frequently, however, the contribution of citizens is limited to only a few narrow tasks. Focusing on an initiative dedicated to the study of the human microbiome, this paper describes such a case where citizen participation is limited to the provision of funding, samples, and personal data. Researchers opted for crowdsourced approaches because other forms of funding and recruitment did not seem feasible. We argue that despite the narrow understanding of participation in the context of some CS projects, they can address some of the democratic concerns related to scientific knowledge creation. For example, CS and crowdsourcing can help to foster dialogue between researchers and publics, and increase the influence of citizens on research agenda setting.

Working Paper: Opening the black box of participation in medicine and healthcare.

Our working paper for the Institute of Technology Assessment is online!

Del Savio, Lorenzo; Buyx, Alena; Prainsack, Barbara (2016) Opening the black box of participation in medicine and healthcare. ITA-manu:script 16-01.

ABSTRACT. This paper unpacks the notion of public and patient “participation” in medicine and healthcare. It does so by reviewing a series of papers published in the British Medical Journal, and by discussing these in the light of scholarship on participation in political and social theory. We find that appeals to public participation in this series are based on a diverse, potentially contradictory, set of values and motivations. We argue that if these diverse values and motivations are not carefully distinguished, appeals to participation can be an impediment, rather than an enhancement, to greater transparency and public accountability of health research.



An Asilomar for DIY biology and the shadow cabinet of science

In February 1975 the biologist Paul Berg convened a meeting in Asilomar (US) to discuss with fellow biologists the biohazards of the newly developed recombinant DNA technique. They agreed to employ particular standards of containment in their recombinant DNA experiments and banned a number of experiments that were deemed particularly hazardous. Most importantly, by issuing recommendations to the scientific community, the Asilomar scientists deflected part of the public concern regarding DNA manipulation, possibly avoiding restrictive regulation, and established self-restraint as a major regulatory option in the governance of the biosciences.

References to Asilomar have multiplied ever since the CRISPR-Cas9 system dramatically enhanced genome-editing capabilities, leading to substantial improvements in research conducted on genetically modified cell strains and organisms. Genome editing has reignited the debate on the responsibility of science and the limits of research. But the context of biomedical research has changed profoundly, with an ever deeper entanglement between industry and academia, rising private funding, and the emergence of biotech powerhouses outside the perimeter of post-war American and European science.

A novel actor in the research landscape that was absent back in 1975 is the Do-It-Yourself (DIY) biology community. One is tempted to suppose that technological advancement moves scientific research further away from non-professional researchers. Research hardware becomes more expensive and hence is monopolized by a few big research centres. And indeed this might have been one of the drivers of the professionalization of research in the twentieth century, when it was no longer possible to do cutting-edge research within the limited budget of (rich) households. There is, however, a reverse side to this trend: any gain in technological efficiency makes technology more affordable. Think about this: your laptop has computing capabilities that matched those of the best research centres worldwide only a few decades ago. This is the same process that allows amateurs to tinker with genomes in their own garage-labs (as the mythology surrounding DIY research has it).

DIY biology has attracted the attention of regulators, and even of the professional research community, on the basis of safety concerns. In a recent issue of Nature, however, biomedical scientist Todd Kuiken argued that DIY research communities are ahead of science in terms of self-restraint. DIY communities have already convened their own Asilomar – actually an iterative exercise in deliberation that produced a code of conduct. Obviously, rogue individuals cannot be stopped by declarations of self-restraint issued by self-appointed representatives of the DIY community. But this is exactly what might happen in the professional research community as well. And indeed, reputation and name-and-shame practices make this Asilomar of citizen scientists a realistic tool for the hazard governance of DIY biosciences (Kuiken's comment itself exemplifies this practice, as it condemns a particular DIY project that does not meet community standards).

The development of DIY biology into a semi-professionalized and structured independent community is not surprising. Alessandro Delfanti has argued that one of the cultural roots of biohacking is precisely the desire to live up to the standards of “Mertonian” science in an era where big money has entered the picture and threatened the purity of research conduct and aspirations. If that is the case, we can add that one important social role of DIY communities is to be a “shadow cabinet” of science of sorts: with no budget but considerable leverage to steer official research by doing better than it does. Yet this role requires structure, and a minimal similarity to “official” science. The Asilomar of citizen science does just that. At the same time, the limitations of a Mertonian model of science – and indeed of the Asilomar conference as well – may be inherited by citizen science, chiefly the ideology of purity and the deceptive seclusion of science from society that this ideology generates. This would be an ironic fate for a movement that has picked up the banner of scientific citizenship to promote its ideals. But of course, there is a certain trade-off between being a shadow cabinet of science and being a place where anyone curious can just walk in.

Distributed computing and citizen science

In a recent Nature news item (9 March), science reporter Davide Castelvecchi describes Einstein@home, a distributed computing project that analyses astronomical data. Such data are collected by the LIGO project, which hit the headlines in February when the detection of gravitational waves from a black-hole merger was confirmed. Einstein@home searches for signals of gravitational waves coming from other types of astronomical objects, especially fast-spinning neutron stars. Such a search is computationally very intensive, and lends itself to distributed computing.


Distributed computing is made possible by the wide availability of processors (i.e. our PCs), by their being networked, and by the fact that the typical user of a PC only uses a fraction of the computing capabilities of her machine. Platforms for distributed computing exploit this idle processing time for computing-intensive tasks, and especially for analytic tasks in big-data science. Distributed computing has several advantages, scale being the most important one. Just think about the technical challenge of cooling down, say, 10,000 computers piled up in one physical location. Many research groups around the world opt for “citizen science” approaches, as in the title of the Nature report, when faced with the computational limitations of their own in-lab computers.
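The split-and-merge pattern behind projects like Einstein@home can be sketched in a few lines. The following is a minimal illustration only, not the actual BOINC protocol: a large, embarrassingly parallel search is cut into independent work units, each unit is processed by a separate worker (here local threads stand in for volunteers' machines), and a coordinator merges the partial results.

```python
# Minimal sketch of the distributed-computing pattern behind volunteer
# projects like Einstein@home (illustrative only, not the BOINC protocol).
# A big search is split into independent work units; "volunteers" (here,
# local threads) each process one unit; the coordinator merges the results.
from concurrent.futures import ThreadPoolExecutor

def work_unit(bounds):
    """Process one chunk: count integers in [lo, hi) whose square ends in 1.
    A stand-in for a computationally intensive, embarrassingly parallel task."""
    lo, hi = bounds
    return sum(1 for n in range(lo, hi) if (n * n) % 10 == 1)

def distribute(total, n_workers=4):
    """Cut [0, total) into one chunk per worker, farm the chunks out,
    and merge the partial counts."""
    step = total // n_workers
    chunks = [(i * step, total if i == n_workers - 1 else (i + 1) * step)
              for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(work_unit, chunks))

# The distributed result matches a single-machine run of the same search.
assert distribute(100_000) == work_unit((0, 100_000))
```

In a real volunteer-computing system the workers are untrusted remote machines, so the same unit is typically sent to several volunteers and the results are cross-checked; this is one reason why coordination software such as BOINC is a substantial piece of engineering.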


What does distributed computing have to do with citizen science anyway? Seemingly little: this form of volunteerism requires very little from participants, who are not even asked to use their own brain power, as happens instead in gamified tasks. The sense in which volunteering personal computer time counts as doing science is very thin indeed. This is just part of the picture, however, and not even the most important part.


The fact that major chunks of the infrastructure required to do science – to produce knowledge – are dispersed throughout the population has several desirable features. It firstly means that some non-negligible part of the means of production of knowledge is controlled by you and me. Individual decision power over research agendas is of course very small, indeed negligible. But collectively, the thousands of volunteers who decide to download Einstein@home do vote on what science they want to happen.


Recently, researchers in behavioural psychology and the IT sciences have started looking at how incentives can be created to attract participants to citizen science and gamified projects, and to keep them contributing once they get in. This is not surprising, and indeed it closely mirrors what has happened on the internet more generally. The artisanal and even subversive beginnings were taken over, or outnumbered, by all sorts of projects that nudge people into productive behaviours that have little to do with their being in control of knowledge production (although of course they may have other desirable effects).


It is reassuring, then, that the Einstein@home website does not boast the lounge-like, lofty appearance of so many web 2.0 platforms but proudly retains the basic 2005-style homepage, which suggests that little time was spent by researchers on nudging. And beyond the façade, there is the even sturdier page of the BOINC platform, the open-source software for volunteer computing employed by academic research projects around the world.
Open-source software is the paradigm of new forms of production. Together with distributed computing, open-source software is indeed a defining part of what production could look like in a networked society. “Citizen science” is then not the name for what happens when you download a screensaver that tells you that your computer is being used by LIGO scientists, but a broader ideal regarding how knowledge production is changing, including in cutting-edge astrophysics.

CrowdMed and the nature of expert teams

The economist Ronald Coase argued in 1937 that the existence of firms requires an explanation, and provided one that has become classical. The existence of firms requires an explanation because it is always possible to organise production by outsourcing each of its phases, i.e. searching for a seller of that phase in the market. This is theoretically more efficient than running a small command economy, that is, a firm with employees and material assets whose productive uses are centrally allocated by the firm's managers. Hence we should expect that firms that outsource everything, until they cannot be called “firms” any longer, will outcompete the cumbersome central planners to extinction.

Why then are there firms, and huge ones at that? To put it simply, Coase argued that markets are not smooth but come with “transaction costs”: a seller must be sought, information about its services or goods obtained, prices negotiated, and so on. Firms emerge when these transaction costs can be avoided by internalising some of the productive phases, thus outcompeting producers that opt for markets. As a result, the economy is organised by an admixture of planned economies (firms) and markets.

One way to look at the so-called “sharing economy” (e.g. Uber, AirBnB) is from the standpoint of Coase's theory. Transaction costs (especially search costs) have been dramatically lowered by digital networks, and as a consequence very “light” firms with no hardware (no cars, no hotels) and few employees have emerged, liquidating older, more planned economies. The “sharing economy” is simply an economy that is organised a bit more by markets, and a bit less by managerial central planners.

One fact that is less appreciated is that the very same dynamic is at play in some forms of cognitive production.

Consider any cognitive problem, let us say a medical diagnosis. By definition, for any particular diagnosis, there must be a person or a group of people in the world able to make it more accurately and faster than anyone else: let us call them “the top team”. But what happens in fact is that a particular group of people, appointed by hospital managers on the basis of qualifications, will try to make the diagnosis. This is quite inefficient, as it is very unlikely that that particular group is the top team for any particular case. But of course, they are the best placed: they are physically in the hospital, or they can be called up in the middle of the night from the affluent suburb where they comfortably dwell, whereas the unknown top team is, well, unknown. In technical terms, there are transaction costs in the search for the top team. That is why there are expert teams, and why we pay them a salary even if they mostly remain idle. That is why hospital managers appoint expert teams instead of painstakingly looking for, literally, the best in the marketplace for any particular case. Expert teams are always a second best in terms of knowledge, but they are usually the first choice all things considered.

But of course, transaction costs are diminishing even in the search for expertise. That is ultimately why crowdsourcing is emerging in biomedicine and, more generally, in science. For many tasks, probably the overwhelming majority of them – for instance standardised tasks such as diagnosing a seasonal flu – the top team will be only marginally better than the alternatives. For seasonal flus, even the top team in the world will not be good enough to make it efficient to incur the transaction costs involved in its search, no matter how low those costs become. But for extremely complicated cases crowdsourcing may help – and the less it costs, the more it helps. Illnesses that are recalcitrant to diagnosis are one such case. The absence of the top team might mean long suffering or even death for an undiagnosed patient. That is why platforms such as CrowdMed are emerging, and delivering promising preliminary results.
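The trade-off described above can be made explicit with a toy model (all numbers below are illustrative assumptions, not data from any of the sources discussed here): searching the market for the top team pays off only when its diagnostic edge over the in-house team, weighted by the stakes of the case, exceeds the cost of the search.

```python
# Toy Coase-style model of the search for expertise (illustrative numbers
# only): search the market for the "top team" only when its expected edge
# over the in-house expert team, weighted by the stakes, beats the cost
# of finding it.

def worth_searching(top_accuracy, local_accuracy, stakes, search_cost):
    """Return True if the expected gain from locating the top team
    exceeds the transaction cost of the search."""
    expected_gain = (top_accuracy - local_accuracy) * stakes
    return expected_gain > search_cost

# A routine case (seasonal flu): the top team's edge is tiny, so even a
# cheap search is not worth it.
assert not worth_searching(0.99, 0.97, stakes=100, search_cost=50)

# A case recalcitrant to diagnosis: a large edge and high stakes justify
# the search, and ever more so as platforms push search_cost down.
assert worth_searching(0.60, 0.20, stakes=10_000, search_cost=500)
```

On this sketch, crowdsourcing platforms matter because they shrink `search_cost`: the set of cases for which looking beyond the in-house team is rational grows as the cost of the search falls.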

Could DTC Genome Testing Exacerbate Research Inequities?

The Data and IT in Health & Medicine Lab has published a commentary on the ethical implications of partnerships between social media companies and biomedical researchers on the Bioethics Forum of The Hastings Center:

The commentary highlights some ethical issues that could be exacerbated by participatory projects in genomic science conducted by DTC genetic testing companies: giving voice (and disproportionate representation in samples) to affluent, educated, advantaged participants.

Crowdfunding of clinical trials under scrutiny in Nature

The leading scientific journal Nature is hosting a debate on the crowdfunding of clinical trials. Crowdfunding is greatly facilitated by online platforms and is becoming increasingly common, including in the biomedical sciences. Asher Mullard reported in Crowdfunding clinical trials that investigators have found 20 crowdfunding projects, of which 62% had met their fundraising objectives, and that this channel of funding may become more attractive as public and corporate funding decreases. However, there are concerns about whether crowdfunding is ethically appropriate, as these trials “side step scientific peer review, co-opting the ‘therapeutic hope’ of desperate patients to proceed despite potentially little or no scientific foundation” (Mullard, ibid.). A more resolutely negative view of crowdfunding has been defended by Phaik Yeong Cheah in Crowdfunding not fit for clinical trials, who argues that:

One problem is that funding recipients are not accountable to the public because crowdfunding is unregulated. Another is that there is no setting of research priorities, so crowdfunded clinical trials may not be the most important or widely applicable ones. And media tactics could attract emotional donations, for example by generating false expectations of a ‘cure’. Moreover, an inconclusive or negative outcome could erode public trust.

While these are all pressing issues, the next step of this argument is rather unconvincing. Cheah argues that “by contrast, the mainstream funding process for clinical trials takes into account disease prevalence, morbidity and mortality, justice and utility. Crowdfunding for clinical trials should be similarly regulated to mitigate its potential risks”. However, this vastly underestimates the problems of current funding systems. We have raised this point in the comment section of the website:

[Cheah's] approach to the ethical assessment of crowdfunding is blind to the flaws of existing funding sources and mechanisms, thereby missing the opportunity to use crowdsourcing to address some of these flaws. A better ethical analysis would assess whether crowdsourcing can be employed (and regulated) as an additional funding channel in order to address some of the limitations of extant funding processes, such as, for instance, the problem of orphan diseases.

David Hawkes and Melanie Thomson recently replied to Cheah along the same lines in Clinical trials: Crowdfunded trials doubly scrutinized, with very instructive references to “rare or emerging tropical diseases that might not otherwise attract financial support”. Interestingly, they argue that crowdfunding is “still governed by the same high standards of research integrity as traditionally funded recipients — but with the added scrutiny that comes with public engagement”.

Ultimately, this is a disagreement about emphasis. Cheah emphasizes the novelty, and hence the unknown risks, attached to crowdfunding, whereas Hawkes and Thomson are keenly aware of the problems of the current system and ready to explore new channels to address them. The latter approach has one key advantage: even if crowdfunding does not become the next big thing in financing research, it allows us to diagnose those limitations of extant funding systems that – with or without crowdfunding – need to be fixed.