"The Common Good," a new essay in Nature by consulting editor Philip Ball, explores the growing use of collaborative methods to build and evaluate scientific efforts. Such methods make use of the cumulative wisdom of the scientific analysis of dozens, hundreds, sometimes even thousands of participants. Although the observations made by any single person may not be individually insightful, the accumulated (and, occasionally, averaged) efforts are often dazzling. While Ball's essay is not overly detailed, it covers a variety of projects -- some which many WorldChangers will already be familiar with, and some which are quite new.
Ball covers three broad categories of mass-collaborative science. The first I would characterize as mass analysis, in which large numbers of people examine a set of data to try to find mistakes or hidden details. His best example of this is the NASA Clickworkers project, which used a large group of volunteers to look at maps of Mars in order to identify craters. It turned out that the collective crater-identification ability of volunteers given a small amount of training was as good as that of the best experts in the field. Ball links this directly to the James Surowiecki book, The Wisdom of Crowds, which argues that the collective decision-making power of large groups can be surprisingly good. WorldChanging's Nicole Boyer has mentioned The Wisdom of Crowds in a couple of her essays, most notably this week's The Wisdom of Google's Experiment. The ability of groups to act collectively to analyze and generate information is one of the drivers of collaborative efforts such as Wikipedia -- any individual contributor won't be an expert on everything, but the collected knowledge of the mass of authors is unbeatable.
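The statistical intuition behind this kind of mass analysis is easy to demonstrate. Here is a toy sketch (not the actual Clickworkers methodology, and all numbers are invented for illustration): each volunteer makes a noisy but unbiased estimate of some quantity, and averaging many such estimates yields an answer far more accurate than any typical individual's.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # hypothetical quantity being estimated (e.g., a crater count)

# Simulate many noisy individual judgments: each volunteer is unbiased
# on average but individually imprecise (standard deviation of 25).
estimates = [random.gauss(TRUE_VALUE, 25.0) for _ in range(1000)]

# The "wisdom of crowds" answer is simply the average of all estimates.
crowd_estimate = sum(estimates) / len(estimates)

# Compare a typical individual's error to the error of the crowd average.
typical_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

print(f"typical individual error: {typical_error:.1f}")
print(f"crowd average error:      {crowd_error:.1f}")
```

With independent errors, the crowd's error shrinks roughly with the square root of the number of participants -- which is why a thousand lightly trained volunteers can rival a handful of experts.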
The second model of collaborative science he discusses is that of mass evaluation, in which large numbers of people have the opportunity to vet articles and arguments by researchers. This is a less quantitative and more subjective approach than collaborative analysis, but can still produce high-quality results. Ball cites Slashdot and Kuro5hin as examples of this approach, with the mass of participants on the sites evaluating the posts and/or comments, eventually pushing the best material up to the top. In the world of science, articles submitted to journals are vetted by reviewers, but the set of evaluators for any given article is usually fairly small. Ball cites the physics preprint server arXiv as an exemplar of a countervailing trend -- that of open evaluation. ArXiv allows anyone to contribute articles, and lets participants evaluate them -- a true "peer review."
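The Slashdot-style mechanism at the heart of mass evaluation is simple vote aggregation: many readers each cast a small judgment, and the sums determine what rises to the top. A minimal sketch (post names and votes are invented; real moderation systems add weighting, meta-moderation, and thresholds):

```python
# Each submission accumulates moderation votes from many readers,
# +1 ("insightful") or -1 ("off-topic"), Slashdot-style.
votes = {
    "post-a": [+1, +1, -1, +1, +1],
    "post-b": [-1, -1, +1, -1],
    "post-c": [+1, +1, +1, +1, +1, -1],
}

# Aggregate each post's votes into a score, then rank: the material
# the crowd rates most highly rises to the top of the page.
scores = {post: sum(v) for post, v in votes.items()}
ranked = sorted(scores, key=scores.get, reverse=True)

print(ranked)  # post-c (score 4) first, then post-a (3), then post-b (-2)
```

No single moderator decides anything; the ordering emerges from the aggregate of many small evaluations, which is exactly the property Ball sees as promising for open scientific review.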
The third model Ball discusses is perhaps the most controversial -- that of collaborative research, where research already in progress is opened up to allow labs anywhere in the world to contribute experiments. The deeply networked nature of modern laboratories, and the brief down-time that all labs have between projects, make this concept quite feasible. Moreover, such distributed-collaborative research spreads new ideas and discoveries even faster, ultimately accelerating the scientific process. Yale's Yochai Benkler, author of the well-known Coase's Penguin, or Linux and the Nature of the Firm, argues in a recent article in Science (pay access only) that such a method would be potentially revolutionary. He calls it "peer production"; we've called it "open source" science, and have been talking about the idea since we started WorldChanging.
This is neither a utopian vision of "citizen science" nor a "Science Survivor" where the least popular theories get voted off the island each week. All three of these models are based on the mass participation of people who are at least amateur scientists, and who can demonstrate some understanding of the processes involved. The Clickworkers project required a moderate amount of training, evaluative comments on arXiv from those without a physics background will likely be ignored, and "peer production"/"open source" scientific research will be open to those who actually know how to carry out the proper experiments. Such "mass elitism" is not without precedent; Free/Open Source Software development is open to anyone who wants to participate, but does not usually accept code contributions from people with marginal programming skills. Functional "wisdom of crowds" approaches are predicated on the crowds comprising people familiar enough with a given subject to be able to speculate sensibly about the right answer to a problem.
All three of these methods are based on the fundamental logic of the open source concept: given enough eyeballs, all bugs are shallow. The more participants you have, the greater the breadth of knowledge and experience, and the greater the ability to find subtle problems or hidden surprises. The open science approach is potentially invaluable -- and it's in the best traditions of science itself, which has always flourished best in a world of critical engagement, open discourse, and cooperation.