Degrees of Belief

Full Title: Degrees of Belief
Author / Editor: Franz Huber and Christoph Schmidt-Petri (Editors)
Publisher: Springer, 2009


Review © Metapsychology Vol. 14, No. 2
Reviewer: Andrei Marasoiu

Degrees of Belief, edited by Franz Huber and Christoph Schmidt-Petri, is a collection of articles devoted to logical and probabilistic models of partial beliefs, i.e., beliefs we hold to a certain degree: beliefs of which we are not certain but which we nonetheless partially believe. The collection is intended as a comprehensive survey of work on belief revision (i.e., how our partial beliefs change over time) and also contains some novel contributions to the literature. Readers are expected to be familiar with propositional logic and probability theory, but no prior acquaintance with the topics discussed in the book is required.

Propositional logic is necessary because the objects of belief are identified with propositions, and probability theory is handy because the traditional way of modeling degrees of belief has been by means of subjective probabilities. On this approach, holding a partial belief is a relation between an epistemic agent (typically, a human being), a proposition, and a probability (the degree to which the agent believes the proposition). Franz Huber's introductory chapter is an excellent guide through the rest of the book. Overall, the collection strikes a good balance between appeals to commonsense intuitions about belief and formal constructions.

The collection is divided into three parts. The first part, Plain Beliefs and Degrees of Belief, focuses on the relation between flat-out (or plain) beliefs and partial beliefs. One important question here is whether the Lockean thesis is true: the claim that when the degree of a partial belief is high enough, that belief should count as a plain belief. And supposing it is true, how high should the threshold be set so that degrees of belief above it correspond to plain belief? These questions occupy the contributions of Richard Foley, James Hawthorne and Keith Frankish.
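A toy illustration of the tension any such threshold faces, foreshadowing the lottery paradox discussed below (a minimal sketch of my own; the threshold value 0.99 is an arbitrary assumption, not taken from the book):

```python
# Toy Lockean thesis: plainly believe any proposition whose degree of
# belief exceeds a fixed threshold (0.99 is an arbitrary choice here).
THRESHOLD = 0.99

def plainly_believes(degree: float) -> bool:
    return degree > THRESHOLD

n_tickets = 1000
p_ticket_loses = 1 - 1 / n_tickets   # 0.999 for each ticket

# Each individual belief "ticket i loses" clears the threshold:
print(plainly_believes(p_ticket_loses))   # True, for every ticket

# But exactly one ticket wins, so "every ticket loses" is certainly false:
print(plainly_believes(0.0))              # False
```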

The second part, What Laws Should Degrees of Belief Obey?, is devoted to quantitative approaches to partial belief, approaches which seek to measure the degree of belief. By way of contrast, the third part, Logical Approaches, deals with qualitative approaches to belief, according to which the strength with which a proposition is believed is captured by a comparative relation of the form "agent x believes proposition y more than she believes proposition z". Are qualitative approaches better than quantitative ones? This sounds plausible, since in daily life we do not assign numerical measures to our beliefs in the propositions we hold to be more or less true; but quantitative approaches also have qualitative counterparts.
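To see why quantitative approaches have qualitative counterparts, consider a minimal sketch (my own illustration, with hypothetical propositions and numbers): any numerical assignment of degrees induces a comparative relation simply by comparing the numbers, though not every comparative relation need arise this way.

```python
# A minimal sketch: numerical degrees of belief induce a qualitative
# comparison "x believes y more than z" by comparing the numbers.
degrees = {                      # hypothetical degrees for illustration
    "it will rain": 0.7,
    "it will snow": 0.1,
    "it will be cloudy": 0.8,
}

def believes_more(y: str, z: str) -> bool:
    return degrees[y] > degrees[z]

print(believes_more("it will be cloudy", "it will snow"))   # True
```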

If the phenomena associated with believing had one ultimate and coherent account, there would be no need for so many alternative approaches. However, there is a very good reason why these alternatives exist: paradoxes emerge (the most serious being the lottery paradox), and more than one solution can be offered. Here is a very sketchy account of the lottery paradox. Suppose you buy a ticket in a lottery of 1000 tickets. Although the objective chance of your winning is 1 in 1000 (and the sum of all these equal chances is 1), you will ordinarily expect to gain nothing, i.e., your subjective probability will be 0 instead of 1 in 1000. But if this is the typical behavior of epistemic agents, then the subjective probabilities (0+0+…) do not add up to 1, as the objective ones (1/1000+1/1000+…) do. This is known as the problem of additivity. Why should additivity be endorsed? Because it is an axiom of classical probability theory: if two events A and B are mutually exclusive (they cannot both happen), then the probability that either of them happens is the sum of their individual probabilities. There are two kinds of solution: either solve the lottery paradox within probability theory, or find an alternative theory that solves it.
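The arithmetic of the paradox can be checked directly; here is a minimal sketch (my own, using exact fractions to avoid rounding):

```python
from fractions import Fraction

n = 1000
objective = [Fraction(1, n)] * n     # objective chance per ticket
print(sum(objective) == 1)           # True: the chances are additive

# If each agent rounds her subjective probability of winning down to 0,
# the subjective values cannot sum to 1 as additivity demands:
subjective = [Fraction(0)] * n
print(sum(subjective))               # 0, so additivity fails
```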

Replacing the additivity axiom of probability theory yields patterns of non-monotonic reasoning, of which David Makinson's contribution provides a good overview. The two obvious alternatives are endorsing sub-additivity or super-additivity. Sub-additivity is the option incorporated in possibility theory (see Dubois and Prade's contribution): the possibility of either of two events A or B happening is the maximum of their individual possibilities (dually, the necessity of both A and B happening is the minimum of their individual necessities), so the possibility of a disjunction can fall short of the sum of the disjuncts' possibilities. Super-additivity is achieved by means of DS (Dempster-Shafer) belief functions (see Rolf Haenni's contribution): for two mutually exclusive events A and B, the degree of belief that either of them happens can exceed the sum of the degree of belief that A happens and the degree of belief that B happens.
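Both behaviors are easy to exhibit in a few lines. The following sketch uses hypothetical possibility values and the standard total-ignorance mass function, not examples drawn from the chapters themselves:

```python
# Possibility theory: the possibility of a disjunction is the maximum
# of the disjuncts' possibilities, hence at most their sum (sub-additive).
poss_A, poss_B = 0.6, 0.5            # hypothetical possibility values
print(max(poss_A, poss_B) <= poss_A + poss_B)   # True

# Dempster-Shafer: a mass function over subsets of a frame; belief in an
# event is the total mass of the subsets it contains. Total ignorance
# puts all mass on the whole frame (a standard textbook example).
frame = frozenset({"a", "b"})
mass = {frame: 1.0}

def bel(event: frozenset) -> float:
    return sum(m for focal, m in mass.items() if focal <= event)

A, B = frozenset({"a"}), frozenset({"b"})
print(bel(A), bel(B), bel(A | B))      # 0 0 1.0
# Super-additivity for mutually exclusive events:
print(bel(A | B) >= bel(A) + bel(B))   # True, strictly greater here
```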

Additivity (in subjective probabilities), sub-additivity (in possibility theory) and super-additivity (in DS belief functions) exhaust the possibilities. The fourth quantitative approach, ranking theory (see the contribution by Wolfgang Spohn, who proposed ranking theory in the 1980s), combines the advantages of the other three. One such advantage, discussed in Hans Rott's article, is a unified account of degrees of belief, of disbelief (belief that something is not the case), and of degrees of acceptance (considering how plausible something is while remaining ignorant as to its truth).
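A rough sketch of the standard ranking-theoretic definitions (my reconstruction, not an excerpt from Spohn's chapter) shows how belief and disbelief fall out of a single rank assignment:

```python
# Ranking theory sketch: a rank is a grade of disbelief; the rank of a
# proposition (a set of worlds) is the lowest rank of a world in it.
world_ranks = {"w1": 0, "w2": 1, "w3": 3}    # hypothetical grades

def kappa(proposition: set) -> int:
    return min(world_ranks[w] for w in proposition)

A, B = {"w2"}, {"w3"}
# Law of disjunction: the rank of a disjunction is the minimum rank.
print(kappa(A | B) == min(kappa(A), kappa(B)))   # True
# A proposition is believed just in case its negation has positive rank:
not_A = set(world_ranks) - A
print(kappa(not_A))   # 0: not-A is not disbelieved, so A is not believed
```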

But probability theory also has the resources to cope with the lottery paradox, as shown by the contributions of Colin Howson and Brian Skyrms. Skyrms appeals to convex sets of probabilities; to make the idea intuitive, he draws an analogy between expected prices on a market and the expected probability of a belief. Colin Howson sticks to first-order probabilities, but identifies two ways of viewing probability theory, one due to Keynes and Carnap, the other due to Ramsey and de Finetti. He argues that endorsing the views of the latter provides a good argument in favor of probabilism (the thesis that belief revision obeys the probability calculus).
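The gist of working with sets of probabilities can be sketched as follows (my own illustration with hypothetical numbers; Skyrms's market analogy is not reproduced here): instead of a single probability, the agent's state is a convex set of probability assignments, and each proposition gets a lower and an upper probability.

```python
# Sketch of a convex (interval-valued) credal state: any mixture of two
# admissible probability assignments is also admissible.
p_low, p_high = 0.2, 0.6             # hypothetical extreme credences

def mixture(weight: float) -> float:
    return weight * p_low + (1 - weight) * p_high

values = [mixture(w / 10) for w in range(11)]
# Lower and upper probabilities bound every admissible degree of belief:
print(min(values), max(values))      # 0.2 0.6
```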

Given the weak conclusion that probabilism remains one viable option among many, is it also possible to show that probabilism is true? This is the import of Dutch book arguments, defended by James Joyce and considered invalid by Alan Hájek. A Dutch book is a series of bets, each of which seems beneficial on its own, but whose joint outcome is a guaranteed loss for the epistemic agent. The argument sustained by Joyce and questioned by Hájek is that if an epistemic agent does not bet according to the probability calculus, she can be Dutch booked. However, even if Dutch book arguments are not valid, it remains the case that betting behavior provides straightforward intuitive support for probabilism, at least more cogent support than any intuitive representation of how possibility theory, DS belief functions or ranking theory work.
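Here is a minimal sketch of how such a book is constructed against an agent whose credences in A and not-A sum to less than 1 (hypothetical numbers; my own illustration of the standard construction, not Joyce's or Hájek's presentation):

```python
# Sketch of a Dutch book against non-additive credences. The agent
# treats a bet paying 1 if X occurs as fair at price cred[X], so she
# is willing to sell such bets at those prices.
cred = {"A": 0.3, "not A": 0.3}      # sums to 0.6, violating additivity

# The bookie buys both bets from the agent at her own fair prices:
received = cred["A"] + cred["not A"]          # agent takes in 0.6

# Exactly one of A, not-A occurs, so the agent pays out exactly 1
# whichever way the world turns out:
for outcome in ("A", "not A"):
    net = received - 1.0
    print(outcome, round(net, 2))    # -0.4 in both cases: a sure loss
```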


© 2009 Andrei Marasoiu


Andrei Marasoiu is currently a student in a Master’s Program in the History and Philosophy of Science at Bucharest. His main area of specialization is the philosophy of W.V. Quine.