So the real question about information and group scaling is this: are there procedures for separating good information from false information (“discrimination”) that are effective enough to allow groups to be scaled indefinitely without a loss of information quality? It’s an article of faith in the Wikipedia “community” that such procedures exist, and that they’re essentially self-operative. That’s the mythos of “emergence”: that systems, including human systems, automatically self-organize in such a way as to reward good behavior and good information and to purge bad information. This seems to rest on the underlying assumption that, people being basically good, the good will always prevail in any group.

Readers of this blog know that I would argue that many religious and political beliefs are examples that support Bennett’s position.
On a related point, Ed Felten has a recent post about how reputation systems on the Internet can be manipulated, referencing a pair of articles at Wired by Annalee Newitz. A common flaw is that the reputations of the raters themselves are either not taken into account or are easily manipulated. If there were a way of reliably weighting the expertise of raters within the appropriate knowledge domains, that could provide a method of discrimination for sorting good information from bad.
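To make the idea concrete, here is a minimal sketch of domain-weighted rating aggregation. Everything in it — the function name, the accuracy scores, the example raters — is a hypothetical illustration of the general approach, not a description of any actual deployed reputation system:

```python
# Hypothetical sketch: weight each rating by the rater's estimated track
# record within the relevant knowledge domain, rather than counting all
# votes equally. All names and values here are invented for illustration.

def domain_weighted_score(ratings, rater_accuracy, domain):
    """Aggregate (rater, rating) pairs, weighting each rating by the
    rater's estimated accuracy in `domain` (0.0 = no credibility).
    Returns None when no rater has any weight in the domain."""
    total, weight_sum = 0.0, 0.0
    for rater, value in ratings:
        w = rater_accuracy.get((rater, domain), 0.0)
        total += w * value
        weight_sum += w
    return total / weight_sum if weight_sum else None

# A rater with no demonstrated accuracy in the domain contributes
# nothing, so a flood of throwaway accounts cannot move the score.
accuracy = {("alice", "medicine"): 0.9,
            ("bob", "medicine"): 0.1,
            ("sock1", "medicine"): 0.0,
            ("sock2", "medicine"): 0.0}
ratings = [("alice", 5), ("bob", 1), ("sock1", 1), ("sock2", 1)]
print(domain_weighted_score(ratings, accuracy, "medicine"))  # 4.6
```

The hard problem, of course, is not the arithmetic but obtaining reliable, manipulation-resistant accuracy estimates for the raters in the first place — which is exactly the flaw in the systems Felten and Newitz describe.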
This is a subject that my planned (but never completed) Ph.D. dissertation in epistemology (on social epistemology, specifically on obtaining knowledge based on the knowledge of others) at the University of Arizona should have touched upon.
One philosopher who had touched on this subject at the time I was working on my Ph.D. (back in the early 1990s) was Philip Kitcher, whose book The Advancement of Science: Science without Legend, Objectivity without Illusions (Oxford University Press, 1993) contains a chapter titled "The Organization of Cognitive Labor" (originally published as "The Division of Cognitive Labor," Journal of Philosophy 87 (1990): 5-21).