Public understanding of science research shows individuals surveyed to be remarkably ignorant of particular facts about science, but is that the right measure of how science is understood and used by the public at large? Such surveys ask about disconnected facts independent of any context in which they might be used, and measure only an individual's personal knowledge. If, instead, those surveyed were asked whom among their friends they would rely upon to obtain the answer to such a question, or how they would go about finding a reliable answer to it, the results might prove to be quite different.
Context can be quite important. In the Wason selection task, individuals are shown four cards labeled, respectively, "E," "K," "4," and "7," and are asked which cards they would need to turn over in order to test the rule, "If a card has a vowel on one side, then it has an even number on the other side." Test subjects do very well at recognizing that the "E" card needs to be turned over (corresponding to the logical rule of modus ponens), but very poorly at recognizing that the "7," rather than the "4," needs to be turned over to find out if the rule holds (i.e., they engage in the fallacy of affirming the consequent rather than using the logical rule of modus tollens). But if, instead of letters and numbers, a scenario with more context is constructed, subjects perform much more reliably. In one variant, subjects were told to imagine that they are post office workers sorting letters, looking for those which do not comply with a regulation that requires an additional 10 lire of postage on sealed envelopes. They are then presented with four envelopes (two face down, one opened and one sealed, and two face up, one with a 50-lire stamp and one with a 40-lire stamp) and asked to test the rule "If a letter is sealed, then it has a 50-lire stamp on it." Subjects then recognize that they need to turn over the sealed face-down envelope and the 40-lire stamped envelope, despite the task's logical equivalence to the original selection task on which they perform poorly.
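To spell out the logic of the abstract version, here is a minimal sketch in Python (my own illustration, not anything from Anderson's text; the function names are mine) of which cards can actually falsify the rule:

```python
# A card must be turned over exactly when its hidden side could falsify the
# rule "if a card has a vowel on one side, it has an even number on the other."

def is_vowel(ch):
    return ch.upper() in "AEIOU"

def must_turn(visible):
    if visible.isalpha():
        # Letter cards: only a vowel can falsify the rule (an odd number
        # could be hiding on the back) -- the modus ponens case.
        return is_vowel(visible)
    # Number cards: only an odd number can falsify the rule (a vowel could
    # be hiding on the back) -- the modus tollens case most subjects miss.
    return int(visible) % 2 == 1

print([card for card in ["E", "K", "4", "7"] if must_turn(card)])  # ['E', '7']
```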
Sheila Jasanoff, in Designs on Nature, argues that measures of the public understanding of science are not particularly relevant to how democracies actually use science. Instead, she devotes chapter 10 of her book to an alternative approach, “civic epistemology,” which is a qualitative framework for understanding the methods and practices of a community’s generation and use of knowledge. She offers six dimensions of civic epistemologies:
(1) the dominant participatory styles of public knowledge-making; (2) the methods of ensuring accountability; (3) the practices of public demonstration; (4) the preferred registers of objectivity; (5) the accepted bases of expertise; and (6) the visibility of expert bodies. (p. 259)

She offers the following table comparing the U.S., Britain, and Germany on these six dimensions:
Dimension | United States (Contentious) | Britain (Communitarian) | Germany (Consensus-seeking)
1. Participatory styles of public knowledge-making | Pluralist, interest-based | Embodied, service-based | Corporatist, institution-based
2. Methods of ensuring accountability | Assumptions of distrust; legal | Assumptions of trust; relational | Assumptions of trust; role-based
3. Practices of public demonstration | Sociotechnical experiments | Empirical science | Expert rationality
4. Preferred registers of objectivity | Formal, numerical, reasoned | Consultative, negotiated | Negotiated, reasoned
5. Accepted bases of expertise | Professional skills | Experience | Training, skills, experience
6. Visibility of expert bodies | Transparent | Variable | Nontransparent
She argues that this multi-dimensional approach provides a meaningful way of evaluating the courses of scientific policy disputes regarding biotech that she describes in the prior chapters of the book, while simply looking at national data on public understanding of science with regard to those controversies offers little explanation. The nature of those controversies didn’t involve just disconnected facts, or simple misunderstandings of science, but also involved interests and values expressed through various kinds of political participation.
Public understanding of science surveys do provide an indicator of what individuals know that may be relevant to public policy on education, but it is at best a very indirect and incomplete measure of what is generally accepted in a population, and even less informative about how institutional structures and processes use scientific information. The social structures in modern democracies are responsive to other values beyond the epistemic, and may in some cases amplify rational or radical ignorance of a population, but they may more frequently moderate and mitigate such ignorance.
Sources:
- Eurobarometer Biotechnology Quiz results from Jasanoff, Designs on Nature, 2005, Princeton University Press, p. 87.
- U.S., Canada, Netherlands survey results from Thomas J. Hoban slide in Gary Marchant’s “Law, Science, and Technology” class lecture on public participation in science (Nov. 16, 2009).
- Wason task description from John R. Anderson, Cognitive Psychology and Its Implications, Second Edition, 1985, W.H. Freeman and Company, pp. 268-269.
In answer to the question, I guess a point of reference is needed.
The ignorance that people point to in surveys is misleading at best, and most fail to compare the findings to those of previous years. It's actually improving. However, what I'd really like to comment on is the Wason 4-card task.
It is clear that people are positively terrible at this task and at conditional reasoning in general. When certain real-world scenarios are used, people generally perform well. However, "perform well" is not the same thing as "reason well". One may come to the correct answer in many ways. In this case, people come to it by using pragmatic schemas. That in no way implies that they are rational.
Bill Maher is an atheist. I find this view to be the most rational conclusion. However, he holds a great many views that are, imo, irrational. Who knows how he arrived at the same conclusion I arrived at?
@badrescher I think it depends on how you define rational. I also don't find Maher to be very rational because he doesn't seem to come to reasoned conclusions. He doesn't think; just reacts. Further, he seems impervious to expert testimony. Actually, let me rephrase that. He seems unable to determine which experts to trust.
But what about that old-time atheist whipping boy Ken Miller? Many atheists would say he isn't rational despite his scientific credentials because he doesn't follow the science where they think it should take him in respect to his religion. Yet I have no problem calling Miller rational. It's true, he operates within a bounded rationality, but then I think we all, every one of us, do.
So I think defining rational is a very hard thing to do and perhaps we almost have to go with the tautological "rational is what rational people do/think."
@Jim Fascinating post.
@neuralgourmet, true, but I don't define it like Gigerenzer does and I think that definition is a bit of a cop-out.
What seems to be the most "settled" definition in the literature is 2-fold (holding beliefs consistent with evidence is part of it), but it certainly involves optimal choices, not "whatever works" choices. By those definitions and mine, I find Ken Miller rational enough and for the same reasons you do. Limiting the scope of what one is willing to question is the opposite of irrational, imo.
Whether or not it is hypocritical or contradictory is another story, but rational? Sure it is. At least until we start talking about WHY he limits it... ;)
@badrescher I've yet to read any of Gerd Gigerenzer's books, but I do think that oft times many skeptics and atheists work with an overly restrictive definition of rationality.
Satisficing in the evolutionary environment is more likely than optimality, and that may mean worse than satisfactory in more abstruse intellectual environments.
From a philosophical perspective, when you try to get a universal, optimal rationality, obstacles arise not just from practical considerations (which are large enough), but from limitations we've discovered in mathematics and logic--which motivated philosophical treatments like Christopher Cherniak's _Minimal Rationality_ (1986, MIT Press) and Gilbert Harman's _Change in View: Principles of Reasoning_ (1986, MIT Press).
The data in Kahneman, Slovic, and Tversky's _Judgment under Uncertainty: Heuristics and Biases_ (1982, Cambridge Univ. Press), while it shows failings against an optimal standard, isn't really as bad as it originally seemed, is it? These heuristics and biases are important for us to know and guard against, especially in institutional and policy-related contexts, but they may not be all-things-considered irrational in practice.
An important critique of the public understanding of science model, that I didn't make explicitly, is that it really is just a test of rote memorization, rather than reasoning skill or scientific methodology.
"These heuristics and biases are important for us to know and guard against, especially in institutional and policy-related contexts, but they may not be all-things-considered irrational in practice."
Jim, you're much more aware of the literature than I am. Is it any easier trying to come up with what clearly counts as irrational, then backing off from there to get at an idea of what's rational?
Actually, I'm thinking that the kind of rationality we think of, that we inherited from the Enlightenment philosophers, is actually a whole bunch of separate things all taken together.
Neural Gourmet: When I was waist-deep in epistemology, most of the philosophers I read tended to avoid the term "rationality" in favor of "knowledge," "justification," "epistemic norms," and similar talk, just because there were too many conflicting notions of rationality. Cherniak is an exception.
One of my professors at UA, the late John Pollock, did talk about rationality as rules of human reasoning, which he attempted to implement in an artificial intelligence system called OSCAR. Your question led me to find this paper of his online, "Epistemology, Rationality, and Cognition."
There are definitely multiple concepts out there, including rules or norms about what it is reasonable to believe and rules or norms about what it is reasonable to do. And there is disagreement about how those rules all fit together into a coherent whole.
Pollock, Harman, and Cherniak all are writing about defeasible, non-monotonic reasoning, where your set of beliefs can contract as well as expand, as new beliefs defeat or undermine existing beliefs. Cherniak proposes a condition of "minimal consistency", where a rational agent will act to eliminate some but not necessarily all inconsistencies; Harman adopts a principle of conservatism that says we tend to retain beliefs and only clean up inconsistencies as they become salient.
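To make the contraction idea concrete, here's a toy sketch (the belief contents, the defeaters table, and the add_belief helper are entirely my own illustration, not anything from Pollock, Harman, or Cherniak) of a belief set shrinking when a new belief defeats an old one:

```python
# Toy non-monotonic update: adding a belief can retract existing beliefs.
beliefs = {"Tweety is a bird", "Birds normally fly", "Tweety can fly"}

# Map each potential new belief to the existing beliefs it would defeat.
defeaters = {"Tweety is a penguin": {"Tweety can fly"}}

def add_belief(beliefs, new_belief):
    """Add new_belief, retracting any beliefs it defeats."""
    survivors = beliefs - defeaters.get(new_belief, set())
    return survivors | {new_belief}

beliefs = add_belief(beliefs, "Tweety is a penguin")
print(beliefs)  # "Tweety can fly" has been retracted, so the set contracted.
```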
Economists, game theorists, and decision theorists have different notions of rationality--out of my field.
Thanks for the link to Pollock's paper (which is incredibly readable for that sort of thing). I just skimmed through it but he has some interesting ideas, principally that irrational behavior solely results from an inability to override Q&I ("Quick and Inflexible") modules. He seems to be talking about innate heuristics, such as our ability to very quickly predict trajectories of flying objects. We don't have to stop and reason through where a tossed ball will be in order to catch it. We just "know" and act accordingly.
His Q&I modules appear to be just another name for heuristics to me, although perhaps he means something broader. I'll have to read the full paper, but he's certainly right, I think, in saying that heuristics are an epistemic condition. Indeed, I think modern neuroscience research has shown that it's very hard for people to disregard a highly successful heuristic and reason through a problem instead. The Wason Selection Task is a demonstration of that, I believe.
So by defining irrationality as the inability to override heuristics or Q&I modules, it seems that Pollock takes a stricter approach to rationality than I do. I think he's saying that any time in which we don't engage in explicit reasoning, i.e. not relying on a Q&I module, then we are thinking irrationally.
My personal view is that heuristics are part and parcel of rational thinking and that irrational thinking occurs when a person indiscriminately relies on them even after having had that heuristic demonstrated as not applying in a certain situation. In fact, in the majority of cases, relying on heuristics is the rational thing to do because heuristics allow us to come to decisions quickly with little energy expenditure. If we see an ad on TV that makes an offer that seems "too good to be true" we know from experience that it probably is, so the safe bet is to simply assume that it's bogus or a scam.
Where we get into trouble is when we attempt to apply a heuristic to a situation that's novel and outside our everyday experiences. For instance, 9-11 conspiracy theorists often rely on what they think a controlled demolition of a building looks like to conclude that WTC building 7 was brought down by a controlled demolition. Their heuristic of what a controlled demolition looks like isn't at fault. The collapse of WTC 7 does look a lot like a controlled demolition. The problem is that they're relying on that heuristic instead of considering that other causes of a building's collapse can produce something that looks a lot like a controlled demolition.
Now, I don't think applying the "controlled demolition" heuristic to the collapse of WTC 7 is the irrational act. To me, the irrationality comes in when a 9-11 conspiracy theorist has had it demonstrated multiple times that their idea of what a controlled demolition looks like doesn't apply in this case and is counterfactual (i.e., we know WTC 7 collapsed due to structural damage from falling debris and fire), yet continues to rely on it anyway.
Neural Gourmet: "I think he's saying that any time in which we don't engage in explicit reasoning, i.e. not relying on a Q&I module, then we are thinking irrationally."
No, he doesn't quite say that--rather, he says that when we have reason to believe that explicit reasoning gives a different result than a Q&I module, we need to go with the explicit reasoning on pain of being irrational--see the penultimate sentence of section 3. This seems to me compatible with your position (and it seems right to me).
Jim, you're right. I see that now. I was skimming through it really fast this morning.
He says a lot of interesting things in that paper. I do wonder though at:
"To do that, we must be able to inspect candidate expansion of argument sketches and evaluate them as good or bad arguments. But that just amounts to judging whether, if we reasoned in that way, we would be conforming to the dictates of rationality. Thus an essential feature of rational cognition must be the built-in ability to judge whether particular bits of cognitive behavior conform to the dictates of rationality."
How much of that ability is built into humans and how much is learned? I suspect that whatever built-in abilities we have along that line are fairly minimal and the rest come from learning. If that's the case though then we might be treading on the dangerous ground of saying that it's possible for some cultures to be more or less rational than others.
I think you need a certain amount to be built in (evolved, really) in order for reasoning to get off the ground in the first place, and that many of the cognitive tools we use are learned (e.g., look at the development of mathematics, logic, science, and philosophy). Pollock also focuses on the individual cognizer, and not groups and institutions, which also play a part in how knowledge gets produced and validated.