Current Controversies in Values and Science
Full Title: Current Controversies in Values and Science
Author / Editor: Kevin C. Elliott and Daniel Steel (Editors)
Publisher: Routledge, 2017
Review © Metapsychology Vol. 21, No. 42
Reviewer: Vincenzo Politi
Current Controversies in Values and Science, edited by Kevin Elliott and Daniel Steel, is the latest instalment in the Routledge series ‘Current Controversies in Philosophy’. Like the other volumes in the series, the essays in this book revolve around a number of questions, each answered first in the affirmative and then in the negative by philosophers holding opposing views.
The first question is: “Can we distinguish epistemic from non-epistemic values?”
For Hugh Lacey, we can (and we should). In his view, those philosophers who argue that we cannot distinguish between epistemic and non-epistemic values usually speak about the role of the latter in ‘accepting’ a scientific theory. In this way, however, several different attitudes towards scientific theories are unduly lumped under the same label. Lacey, in short, highlights the fact that scientific theories are not simply accepted or rejected.
For Lacey, cognitive values are “essential to evaluating whether a theory provides adequate and well-founded understanding of a particular set of phenomena” (p. 16); they are essential, that is, to evaluating when a theory can be impartially held. This is not to say that non-cognitive or social values play no role whatsoever in science. While cognitive values help us to recognise which theories should be impartially held, social values can tell us when a theory should be adopted (that is, “used for the sake of framing and giving direction to ongoing research in a given scientific area, and of testing the range of phenomena of which the theory can come to incorporate understanding”, p. 20), or when a claim can be endorsed (that is, “judged to be sufficiently well supported by available evidence to warrant acting or making decisions in ways informed by it”, p. 27).
Lacey’s position leads to at least two consequences. First, it calls for a redefinition of the distinction between epistemic and non-epistemic values. For instance, ‘fertility’, which has traditionally been considered part of the family of epistemic values, tells us which theories we should adopt, not which theories can be impartially held: indeed, that a theory is viewed as fertile does not guarantee that it will automatically generate understanding. Fertility, therefore, is not an epistemic value. Second, it is Lacey’s contention that his revision of the epistemic/non-epistemic value distinction helps make explicit the methodological usefulness of such a distinction.
Phyllis Rooney, by contrast, argues that there can be no strict distinction between epistemic and non-epistemic values. Rather, large portions of scientific research take place within a “robust borderland” of values (p. 34).
Rooney reminds us, and rightly so, that although philosophers initially spoke of epistemic values in general, as opposed to non-epistemic ones, with time they came to see the necessity of differentiating between ‘epistemic values’ (i.e., values conducive to truth) and ‘cognitive values’ (i.e., values facilitating the cognitive operations necessary for theorising, such as, for example, simplicity). ‘Non-epistemic values’, however, may also be a rather diverse lot. They can be, for example, ‘moral’, ‘social’, ‘political’ or even ‘religious’ values.
As for the methodological usefulness of a strict distinction between all these values, Rooney argues that the use of some non-epistemic values may actually serve the overall epistemic aims of science. For instance, well-known feminist studies of science have uncovered a number of biases in fields such as primatology, which, for a long time, were male-dominated. By discarding more or less masculinist biases, and inviting scientists to consider new hypotheses and look for new kinds of evidence, non-epistemic values actually improved science.
A supporter of the epistemic/non-epistemic values distinction could still respond that, precisely in the cases discussed by Rooney, the feminist critique did not make science more ‘value-laden’. On the contrary, by liberating primatology from its implicit masculinist biases, feminist studies contributed to making the field ‘value-freer’, leading scientists to consider new hypotheses on the basis of epistemic values alone, rather than because such hypotheses conformed to their prejudices. One could re-read Rooney’s examples through Lacey’s lens: non-epistemic values allowed scientists to ‘consider’ the adoption of alternative theories which, however, began to be ‘impartially held’ only on the basis of epistemic considerations.
In the end, the matter of disagreement between Lacey and Rooney is very subtle: the former admits that non-epistemic values play a role in science; the latter claims that the epistemic/non-epistemic value distinction may be a matter of degree and that, if some contexts are characterised by a robust borderland of values, in other contexts the distinction might be more discernible. Ultimately, the uninitiated reader may fail to appreciate the exact scope of the controversy between the two philosophers.
The second question is: “Must science be committed to prioritising epistemic over non-epistemic values?”. The role of non-epistemic values in science notwithstanding, this question asks whether we should still give priority to epistemic values.
Daniel Steel claims that we should prioritise the epistemic over the non-epistemic values. He argues that those who believe that epistemic values have no special priority in science adopt the ‘aims approach’, according to which the influence of non-epistemic values in science is legitimate, and not secondary, insofar as such values promote the aims of scientific inquiry. The problem with this view is that, sometimes, scientific inquiry may be guided by aims conflicting with the very ideal of scientific knowledge and may actually result in the corruption of science.
Inspired by Henrik Ibsen’s play An Enemy of the People, Steel defines this problem in terms of the “Ibsen predicament”. In an Ibsen predicament, the community at large so highly values some aims or objectives that any scientific research which may obstruct their accomplishment will be either dismissed or bent in vicious ways. Clear examples of Ibsen predicaments are the ‘politicised’ or even ‘corrupted’ scientific studies on drugs or fossil fuels. In both cases, those who fund the research (pharmaceutical companies or governments) prioritise their own set of non-epistemic values (wealth, technological innovation, and so on) and may ‘bend’ the actual scientific results should these conflict with such values (in the case, for example, of results indicating the harmful effects of a new drug, or of results suggesting the potentially dangerous impact of fossil fuels on the environment).
By contrast, Steel holds a view known as the qualified epistemic approach. Such an approach does not deny that science plays an important role in society. However, it argues, science plays such an important role precisely because it aims first and foremost at advancing knowledge. Therefore, non-epistemic values may sometimes guide, but should not interfere with, the advancement of scientific knowledge.
Matthew Brown rejects the ‘Epistemic Priority Thesis’ (EPT), of which Steel’s qualified epistemic approach is one particular version, and argues that we should not prioritize epistemic over non-epistemic values in science. He develops (at least) three different criticisms of EPT.
First, EPT is, in some cases, unjustified. In actual scientific practice, epistemic and non-epistemic values may be mingled in complicated ways. This is why, for Brown, “[we] require values to select epistemic standards, interpret them, and determine how to apply them; they are intertwined and interrelated in such a way that talk of ‘priority’ doesn’t make sense” (p. 69).
Second, Brown argues that EPT is associated with some forms of non-cognitivism or anti-realism about moral values. Value judgments, however, are not just a matter of subjective preferences: indeed, there could be very good reasons for holding some values, and there could also be some very good reasons for preferring a science which holds such values.
Finally, “EPT treats epistemic standards as criteria for successful scientific inquiry, rather than as values that are good if we can have them” (p. 72). There is no absolute and universal set of epistemic criteria that ‘good science’ must satisfy in order to be qualified as such. Scientific research is ‘good’ insofar as it solves the very problems which prompt it.
Of Brown’s three criticisms of EPT, this is the least developed and, probably, the most unfair. Although Brown is right in pointing out that there is no such thing as an eternal and absolute Archimedean point of scientific reason, one can still regard the epistemic values prioritised by EPT as ‘minimal’ criteria of scientificity. Surely, we would not want a science which does not satisfy any epistemic criteria whatsoever and which, therefore, would be indistinguishable from other forms of non-scientific and non-epistemic human enterprises. Furthermore, although it is true that the ‘goodness’ of scientific research lies in its problem-solving power, we accept a solution to a problem as ‘scientific’ only when we can explain and understand why such a solution works the way it does: that is, when the epistemic rationale of the proposed solution is made clear.
It seems that Brown’s aim is to reject a ‘strong’ version of EPT. On the one hand, one wonders to what extent his criticisms can successfully target Steel’s view, which is, after all, a form of ‘qualified’ epistemic priority. On the other hand, Brown himself agrees about the importance of epistemic considerations in science and denies that claims and theories can be advocated on the basis of non-epistemic values alone, with no consideration for empirical evidence (pp. 75-76).
The problem with the second question is similar to the problem with the first: namely, that it may be difficult to see what the two opposed parties are really disagreeing about. The controversy between Steel and Brown may be just a matter of emphasis: while the former prefers to stress the epistemic character of science (without denying the guiding and constraining role of non-epistemic values), the latter is more concerned with the ethical and moral dimension of scientific practice (while granting that science is, first and foremost, an epistemic enterprise).
I think that the real disagreement between Steel and Brown revolves around some implicit meta-philosophical views about what science is and what science ought to do.
Steel not only accepts that science is an epistemic activity, but also adds that, indeed, “there is social value in having a number of institutions dedicated to prioritizing different aims” (p. 61). What Steel is saying is that it is not the scientists’ job to tell society what is good and what is bad. Rather, what scientists ought to do is tell us what the empirical data and our best theories are suggesting. It is not entirely clear, however, whether Steel’s qualified epistemic priority can be of any practical help in those very Ibsen predicaments he regards as terribly problematic for the aims approach, but not for his own view. Scientists can go on doing their epistemic job, prioritizing the epistemic values, but, in the end, it may still be the case that society decides to dismiss or ignore what they are saying. Steel’s view, therefore, saves the internal ‘epistemic dignity’ of science, but leaves too much in the hands of a social majority, which may decide not to value science to begin with.
Brown, instead, wants a socially and ethically responsible science. Although it would probably be hard to find anyone arguing in favor of an ‘irresponsible science’, Brown’s position sometimes risks relapsing into a sort of ‘paternalistic attitude’. Brown argues that scientists qua scientists have the moral responsibility not to disseminate work which may put social justice at risk. He begins his chapter by discussing the imaginary example of psychologists who find a correlation between race and intelligence. Knowing the potential impact that such work may have in a society riddled with prejudice and, in some cases, blatant racism, the ethical psychologists ought not to publish it. The problem here is that things like ‘racism’ may be in the eye of the beholder, or, in this case, of the public stakeholders of science. For a scientist, it may not always be easy to foresee which scientific results may be interpreted as racist. While Brown’s example is rather straightforward, actual science can be full of grey areas.
For example, medical studies have found that Asian women are at a very high risk of osteoporosis, because they consume less calcium. Their low consumption of calcium, however, does not depend on social or cultural factors alone. It depends, instead, on the fact that around 90% of them are lactose intolerant. And the reason why they are lactose intolerant is that Asians have evolved in such a way that their genes direct a slowdown in the production of lactase, the enzyme responsible for the breaking down and assimilation of lactose. Furthermore, Asian women are, on average, small-framed; this can be a further factor in the development of osteoporosis. Now, in some societies these data may be interpreted in a way which supports prejudice and racial biases against Asian people, who could be considered ‘weaker’ and more prone to physical problems. The potentially dangerous impact of these ideas notwithstanding, it is also in the interests of Asian women to know that they may be at a higher risk of developing osteoporosis. It is not the scientists’ fault that their results may be used to support racist views. Furthermore, it is not the responsibility of scientists (who, after all, represent an unelected minority of experts) to be the moral guides of a democratic society.
Both Steel’s and Brown’s positions have strengths and weaknesses. Making their views about the role of science in society more explicit could have helped to make the controversy between the two of them clearer and, maybe, fiercer.
The third question is: “Does the argument from inductive risk justify incorporating non-epistemic values in scientific reasoning?”. The argument from inductive risk is developed in light of uncertainty in science. Whether empirical data are strong enough to support a claim is not something that can be determined by looking at the data alone. Making a decision, therefore, always involves some risk.
Heather Douglas claims that social and ethical considerations ought to play a crucial role in scientific reasoning, since they help scientists to assess the potential risk of errors. It must be added that, for Douglas, although epistemic criteria are not sufficient for assessing evidential strength, they are nevertheless a necessary component of science: in this respect, her view is closer to Steel’s qualified epistemic priority approach.
Douglas’ position is both descriptive and normative. She claims that scientists, as a matter of fact, are already using social and ethical value judgments, especially in research which may have a strong public impact – for example, in assessing the potential risks associated with new types of chemicals.
Furthermore, after taking into consideration the often endemic disagreement of the scientific community, which may be exacerbated in cases of difficult and risky decisions, she suggests that scientists ought to make explicit their guiding values. In this way, they would allow the general public to check whether scientific research is being conducted with integrity, and whether scientists’ values are appropriate.
Gregor Betz begins by distinguishing different ways in which our knowledge can be limited. There can be uncertainty about the quality of the data collected, the parameters of a model, the justification of our theoretical assumptions, our methodologies, and so on. Such uncertainties are rather common in science and do not justify the implementation of any particular social or ethical values, if not in a rather weak sense.
The force of the argument from inductive risk can be appreciated in cases of ‘deep uncertainty’. Betz argues, however, that even in such cases scientists do not have to make explicit the set of values guiding their considerations. Rather, what they ought to make explicit is the uncertainty itself. This can be done in several different ways: for instance, scientists may run scenario analyses and spell out the consequences of each potential alternative.
Ultimately, for Betz, science policy is characterized by a rather neat division of labor: scientists provide the data and an assessment of the different risks associated with different decisions but, in the end, it is the policy makers who decide.
Betz concludes by saying not only that scientists do not incorporate social and ethical values in their decisions, but also that they ought not to: if science begins to be seen as subscribing to non-epistemic values, society may begin to question its epistemic authority, regard it as yet another enterprise guided by personal interests and, finally, stop trusting it.
Question 4 asks: “Can social diversity be best incorporated into science by adopting the social value management ideal?”. In answering this question, both Kristina Rolin and Kristen Intemann do a great job explaining why social diversity should be incorporated into science in the first place.
Rolin’s and Intemann’s starting point is the influential work of Longino (1990, 2002). Longino develops a view known as contextual empiricism, according to which justification is always relative to a set of background assumptions. The more diversified the background assumptions in science, the higher the chance of uncovering implicit biases which may threaten the impartiality of science. On this view, sexist or racist theories are not to be excluded because sexism and racism are ‘bad non-epistemic values’, but because such values are conducive to bad science.
For Longino, then, the so-called ‘value-free’ ideal of science is not only untenable, but also undesirable. In her view, scientific objectivity is not the (somewhat legendary) ‘view from nowhere and no-one in particular’ but, rather, the product of the concerted and dialectical convergence of views coming from different standpoints. On this view, then, objectivity is a sort of constant collective ‘work in progress’.
To attain objectivity, for Longino, an ideal scientific community should conform to her ‘social value management ideal’, according to which:
1. there must be recognised venues for criticism
2. the community as a whole must be engaged and be responsive to criticism
3. there must exist shared standards for criticism
4. participants must be granted equality of intellectual authority
Granted that there are very good reasons to believe that social diversity ought to be implemented in science, it remains to be seen how such an implementation is supposed to be accomplished.
For Rolin, Longino’s social value management ideal is sufficient for the incorporation of social diversity into science. Such an ideal has several benefits. For instance, the co-presence of alternative standpoints i