Paper Review: “Antiscience Zealotry”? Values, Epistemic Risk, and the GMO Debate

From climate change to vaccinations to the shape of the Earth (???!?!), scientific claims are often in dispute. Indeed, when encountering people who hold views against the (scientific) norm, we often think of them as “anti-scientific.”

Of course, the people who hold these views don’t think they are being irrational. They think their position is the better one, and they charge scientists, and those who believe them, with having bad motivations, being part of some kind of conspiracy, being funded by corporations, et cetera.

In a sense, this perception of the disagreement is what Justin Biddle characterizes as a kind of dirty disagreement. Both sides think the other is ignorant, biased, irrational, and/or dishonest.

However, Biddle urges us to reconsider the character of these kinds of debates in general, and of the debate about genetically modified organisms (GMOs) in particular. Instead, “the GMO controversy is more accurately characterized in terms of deep disagreement” (p. 376).

***The original paper can be found here***

***My Epistemic Status: Actually feeling fairly confident in my conclusion at the object level, but values in science is not my field, so I have a kind of healthy meta-uncertainty about my conclusions at the end. So, to make your best judgment, consider my argument, but also read the original paper and the surrounding literature.***

Following Lynch, Biddle takes a deep disagreement to be a disagreement about the kinds of methods that are most appropriate or reliable in a given domain. However, Biddle thinks that deep disagreement can be broader than this:

Deep disagreement can involve disagreement about what kinds of evidence are relevant to a given hypothesis; and while disagreements over what counts as evidence can sometimes result from disagreements over methods, they can also result from other sorts of disagreements. (p. 376)

The other sorts of disagreements that Biddle has in mind are things like framing, and the kinds of values that go into science. For example, he claims that the pro-GMO side often frames the GMO debate as a question of whether or not GMOs negatively (or positively) affect public health and safety. The anti-GMO side, on the other hand, frames the debate more broadly–for example, they care about how seed companies’ patents on GM seeds can harm farmers through lawsuits, the evolution of herbicide- and pesticide-tolerant plants and animals, and the accidental flow of GM seeds onto organic farms (p. 366).

Thus, values enter into science through the framing of the issue. Science itself cannot settle the framing–or, rather, the framing is a necessary way in which Biddle thinks values enter science.

Biddle also discusses one of the ways philosophers of science often discuss values in science–inductive risk. Biddle’s example is helpful:

Consider, for example, hypothesis H, which states that exposure to dosage D of pesticide P increases the risk of cancer in human beings. Wrongly accepting H will have consequences for many stakeholders, but those consequences will be particularly significant to industries that profit from the sale of P. Alternatively, rejecting H if H is true will also have consequences for many stakeholders, but those consequences will be particularly significant to individuals who are exposed to P at D (e.g., agricultural laborers). In determining how much evidence is enough to accept (or reject) H, scientists presuppose value judgments about the various possible consequences of being wrong. In this way, the acceptance (or rejection) of hypotheses on the basis of statistical evidence presupposes ethical value judgments. (p. 364)

Thus, Biddle (and others–he gives a thorough review of the literature on this topic) thinks that values always enter into our science, even in seemingly objective parts such as our statistical methodology. Furthermore, he thinks this occurs not only in the choice of significance threshold (p-value cutoff), but also in deciding how much statistical power our test should have.

Following Wilholt, Biddle takes the power of a study to be “the rate at which an investigation delivers definite results” (p. 374). I take issue with this way of thinking about statistical power–in particular, the result of a single study is never definitive, and power is usually defined as the probability that the test rejects the null hypothesis when a specific alternative hypothesis is true. However, this doesn’t seem particularly relevant to the thrust of his argument; his main claim about power is that, just as with the choice of a significance threshold for p-values, our non-epistemic (for example, moral, social, and political) values can affect how highly powered we wish our study to be.
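To make the textbook definition concrete, here is a minimal sketch of a power calculation for a one-sided z-test, loosely framed around Biddle’s pesticide example. The specific numbers (effect size, sample size, variance) are my own invention for illustration and are not from the paper:

```python
from scipy.stats import norm

# Power = P(reject H0 | a specific alternative hypothesis is true).
# Toy setup (all numbers invented for illustration): a one-sided z-test on the
# mean of n biomarker measurements with known standard deviation sigma,
# testing whether exposure to pesticide P at dosage D shifts the mean upward.

alpha = 0.05      # significance threshold -- the value-laden choice Biddle discusses
sigma = 1.0       # assumed known standard deviation of the measurement
n = 50            # sample size
effect = 0.3      # the specific alternative: true mean shift if P is harmful

se = sigma / n ** 0.5
critical = norm.ppf(1 - alpha) * se             # reject H0 if the sample mean exceeds this
power = 1 - norm.cdf((critical - effect) / se)  # P(sample mean > critical | true shift = effect)

print(f"power against a true shift of {effect}: {power:.2f}")  # ~0.68 with these numbers
```

Note how the value-laden knobs appear explicitly: making alpha stricter or shrinking the sample lowers the power, i.e., raises the chance of missing a real harm–exactly the kind of trade-off Biddle says is settled by non-epistemic values.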

Thus, since values enter not only into how we choose to apply our best science, but into how we in fact conduct it, “there is space for rational disagreement in the assessment of health and safety risks of GM crops” (p. 375) and “the GMO controversy is more accurately characterized in terms of deep disagreement” (p. 376). As a general conclusion, it is

important that philosophers of science, scholars of science and technology studies, and others contribute to controversies in science and technology by emphasizing that rational disagreement is not only possible but pervasive. Doing this could help significantly to improve communication and raise mutual understanding (p. 377).

Biddle does an excellent job of pulling in examples from the actual debate over GMOs to make his case that values enter this debate, even at the level of what counts as good science. Thus, while I do not disagree that such values do in fact play a role in how science is currently done, I do disagree that the proper conclusion to draw is to emphasize the room for rational disagreement in science as a way of improving the state of the debate.

To my mind, the discussion of p-values and statistical power highlights that this kind of frequentist statistical methodology is deeply flawed. It may be true that, given that most science is currently done with such methods, the ways in which these methods are applied are influenced by our values. However, the correct response does not seem to be to concede and say, “well, I guess there is room for rational disagreement, let’s all be nice to each other” — what do we do with that?

Instead, as I think is often the case, the answer to such a problem/debate seems to call for better science. In particular, I think we need to separate our epistemology from our decision theory.

What do I mean by this? Recall Biddle’s example of using hypothesis testing to determine whether or not a certain dosage of a pesticide increases the risk of cancer in human beings. The way this is supposed to work is that scientists have two hypotheses: one on which it does not, and one on which it does. There was risk here because accepting or rejecting hypotheses would have effects on people. Thus, we had to use our values (for example, how much we care about people getting cancer and how costly not using the pesticide would be) to set our statistical threshold.

This seems insane to me. We have an incredibly well-developed account of decision making, and it most certainly does not take as input only values and whether or not we “accept” or “reject” a hypothesis. The kind of information it needs is much more fine-grained. In particular, decision theory gives an agent (whether a government, a scientist, or a member of the public) normative recommendations based on the agent’s utility function (think goals or values) and beliefs, where belief here is a probability distribution. This, I claim, is the better way to think about both science and the influence of values on it.
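To illustrate the separation I have in mind, here is a toy sketch (my own example, not Biddle’s) of the decision-theoretic step: it takes as input a belief–here just the probability that the pesticide is harmful–and a utility table, and recommends an action by maximizing expected utility:

```python
# Toy expected-utility decision (all numbers invented): the epistemic input is a
# single probability produced by the statistics; the values live entirely in the
# utility table, so the two can be debated separately.

def recommend(p_harmful, utilities):
    """Return the action maximizing expected utility, plus the full table.

    p_harmful : probability that dosage D of pesticide P increases cancer risk
    utilities : {action: (utility if harmful, utility if not harmful)}
    """
    expected = {
        action: p_harmful * u_harmful + (1 - p_harmful) * u_safe
        for action, (u_harmful, u_safe) in utilities.items()
    }
    return max(expected, key=expected.get), expected

# Hypothetical utilities for a regulator: banning a safe pesticide costs crop
# yield, allowing a harmful one costs health. Other stakeholders would fill this
# table in differently -- and that table is where the value disagreement belongs.
utilities = {"ban": (0, -20), "allow": (-100, 10)}
print(recommend(p_harmful=0.3, utilities=utilities))  # -> ('ban', {...})
```

The point of the sketch is that the statistics only has to supply `p_harmful`; everything contested about values sits in the utility table.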

There is a type of statistics, Bayesian statistics, that is designed for exactly the purpose of delivering such a fine-grained probability distribution. Thus, instead of being in the embarrassing spot of having our values directly influence our statistical methodology, we can cleanly separate our epistemology from our decision theory.
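And here, again as a toy sketch with made-up data, is the epistemic half: a simple Bayesian analysis of the pesticide example that outputs a posterior distribution over the exposed group’s cancer rate, which can then be summarized and handed to a decision procedure like the one above:

```python
from scipy.stats import beta

# Toy Beta-Binomial model (all numbers invented): cancer cases in a group exposed
# to pesticide P at dosage D, compared with an assumed known background rate.
# The output is a posterior distribution, not an accept/reject verdict.

baseline_rate = 0.02           # assumed background cancer rate
cases, n_exposed = 9, 300      # hypothetical data from the exposed group

# With a uniform Beta(1, 1) prior on the exposed-group rate, conjugacy gives a
# Beta(1 + cases, 1 + non-cases) posterior.
posterior = beta(1 + cases, 1 + n_exposed - cases)

# The probability that exposure pushes the rate above baseline is one summary a
# decision procedure (like `recommend` above) could take as its epistemic input.
p_harmful = 1 - posterior.cdf(baseline_rate)
print(f"P(exposed rate > baseline) = {p_harmful:.2f}")
```

On this picture the accept/reject step–where Biddle locates the inductive risk–drops out entirely; any verdict about what to do is delivered by the decision theory, which is where the values live.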

Some readers may think that I am making a mistake–that it has long been known that science is not value-free, and that my proposal is naïve. However, I am not saying that the whole enterprise of science is value-free. Values determine which problems we want to tackle, and what to do with the answers (posterior probability distributions over hypotheses) once we get them. I do think, however, that there is a way to keep values from playing a role in the statistical methodology itself, contrary to what Biddle and company claim. In particular, as stated above, by moving towards a Bayesian approach to science, we can have a clean separation between our values and our epistemology (once we have chosen which projects to pursue based on our values–we could also use decision theory for this!).

Thus, overall, while I agree with Biddle that values currently influence the practice of science, and that the GMO case is a clear example of this, I disagree that the conclusion we should draw is that there is room for rational disagreement–that doesn’t seem to help much. Instead, I think that by moving towards a more Bayesian, decision-theoretic approach to science, we can have the debate at the right level–the level of our values. We need not mix the scientific (what I would think of as epistemic) disagreement with our disagreements over values. Again, I am not making the extreme claim that values do not interface with science at all–they determine which scientific enterprises we choose to pursue, and, in conjunction with decision theory, they translate scientific findings into courses of action. However, once a Bayesian statistician has chosen her project based on her values, no values enter into her statistical methodology as she conducts her research; that is what the decision theory is for. Thus, I claim, if we are to move forward in debates like the one about GMOs, it will be through discussing our values; once we have these in place, science plus decision theory will tell us what to do–without room for rational disagreement at the level of the science.
