# A refinement of the distinction between risk and uncertainty: fundamental uncertainty

Rationality may be compatible with a variety of uncertainty assessments, which often reflect different ways of structuring the space of events. Acknowledgement that identification and representation of the space of conceivable events is a necessary condition of plausible reasoning under uncertainty suggests a refinement of the distinction between risk and uncertainty (see section 1.1). In particular, as argued by Itzhak Gilboa and David Schmeidler, it may be reasonable to introduce

a third category of *structural ignorance:* 'risk' refers to situations where probabilities are given; 'uncertainty'—to situations in which states are naturally defined, or can simply be constructed, but probabilities are not. Finally, decision under 'structural ignorance' refers to decision problems for which states are neither (i) naturally given in the problem; nor (ii) can they be naturally constructed by the decision maker.

(Gilboa and Schmeidler, 2001, p. 45)

This conceptual setting is close to the one considered by Keynes in his *Treatise on Probability* (*TP*) (Keynes, 1973 [1921]). There Keynes examined conditions for rational choice when probabilities may not be known, or can be expressed only in a non-numerical way. Keynesian uncertainty is a setting in which rational decisions are possible but depend upon highly *distinctive* bodies of information that vary with situation and context (see Marzetti Dall'Aste Brandolini, Chapter 12, this volume; see also Marzetti Dall'Aste Brandolini and Scazzieri, 1999, Introduction and pp. 139-88).^{2} In this case 'The principles of probability logic need not mandate numerical degrees of belief that *a* and that *ab* on the evidence *h* but only that agent *X* is required to be more certain that *a* than that *ab*. According to Keynes, probabilities of hypotheses on given information could even be non comparable' (Levi, Chapter 3, section 3.3, this volume; see also Kyburg, Chapter 2, this volume). Keynes's discussion of the weight of argument (and of the associated issue of the weight of evidence) calls attention to the reliability of human judgement under fundamental uncertainty. In particular, it suggests a way to assess the influence of evidence in attaining a plausible (but not uncontroversial) inference ('proximity to proof').^{3} It also allows a formal treatment of unexpected events (potential surprise or disbelief).^{4} As some contributors to this volume point out (see Levi, Chapter 3; Vercelli, Chapter 7), this point of view opens up a whole set of new issues, since consideration of the weight of evidence 'turns upon a balance, not between the favourable and the unfavourable evidence, but between *absolute* amounts of relevant knowledge and of relevant ignorance respectively' (Keynes, 1973 [1921], p. 77).
In particular, assessment of the degree of relevance calls attention to the fact that '[w]here the conclusions of two arguments are different, or where the evidence for the one does not overlap the evidence for the other, it will often be impossible to compare their weights, just as it may be impossible to compare their probabilities' (Keynes, 1973 [1921], p. 78; see also Runde, 1990).
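One convenient way to make such incomparability concrete is the interval-valued treatment of probability associated with Kyburg (touched on in Chapter 2). The Python sketch below is purely illustrative, not an apparatus from the *TP* itself: each probability is known only up to lower and upper bounds, and two probabilities are comparable only when one interval lies wholly above the other (or both collapse to the same point value).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalProbability:
    """A probability known only up to lower/upper bounds."""
    lower: float
    upper: float

    def definitely_greater(self, other):
        # True only if this probability exceeds the other under
        # every admissible point assignment within the bounds.
        return self.lower > other.upper

    def comparable(self, other):
        # Comparable iff one interval lies entirely above the other,
        # or both collapse to the same point value.
        return (self.definitely_greater(other)
                or other.definitely_greater(self)
                or (self.lower == self.upper == other.lower == other.upper))

p = IntervalProbability(0.2, 0.7)   # broad bounds: scant evidence
q = IntervalProbability(0.4, 0.5)   # narrow bounds: richer evidence
print(p.comparable(q))  # overlapping intervals: neither exceeds the other
```

On this toy reading, neither probability is 'greater, nor less than, nor yet equal to' the other, exactly as in the Kyburg and Teng passage quoted below, even though each is perfectly well defined within its bounds.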

Fundamental uncertainty calls attention to the role of mental frames in assessing evidence and guiding rational decisions. Probability is associated with degree of rational belief, but different probabilities are not always comparable (primarily because their weights may be different). It may be argued that probabilities 'lie on *paths*, each of which runs from 0 to 1' (Kyburg, Chapter 2, section 2.1.3, this volume). Indeed, the same numerical probability could have entirely different implications for rational choice depending on the weight attached to it. To put it another way, the same information could be associated with different probabilities depending on the weight we attach to available evidence. In particular, 'probabilities are only partially ordered: two probabilities may be incomparable. The first may be neither greater, nor less than, nor yet equal to the third' (Kyburg and Man Teng, 2001, p. 80). The existence of different 'orders of probability' (Keynes) makes *switches* between probability orders conceivable. This in turn makes the relationship between degrees of rational belief and available information non-monotonic. Further evidence could initially increase our confidence in a given hypothesis and then prompt us to withdraw it, either because we have shifted to a different order of probability or because new background knowledge has drastically reduced the weight of our evidence. There is an important connection between the cognitive demands of fundamental uncertainty and the idea that probability judgements are always relative to a certain state of mind. This property is clearly stated in Chapter 1 of Keynes's *TP*:

when in ordinary speech we name some opinion as probable without qualification, the phrase is generally elliptical. We mean that it is probable when certain considerations, implicitly or explicitly present to our minds at the moment, are taken into account [...] No proposition is in itself either probable or improbable, just as no place is intrinsically distant; and the probability of the same statement varies with the evidence presented, which is, as it were, its origin of reference.

(Keynes, 1973 [1921], p. 7)^{5}
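The non-monotonic relation between confidence and accumulating evidence noted above can be given a toy illustration. The Python sketch below is only a heuristic, with invented likelihood values, and it captures just the surface pattern rather than Keynes's notion of weight: repeated Bayesian conditioning first raises the degree of belief in a hypothesis, and later observations, here standing in for revised background knowledge, pull it back down.

```python
def update(prior, likelihood_h, likelihood_not_h):
    """One step of Bayesian conditioning on a single observation."""
    num = likelihood_h * prior
    return num / (num + likelihood_not_h * (1.0 - prior))

# Invented observation stream: early items support the hypothesis,
# later items (after background knowledge is revised) tell against it.
likelihoods = [(0.9, 0.4), (0.9, 0.4), (0.2, 0.8), (0.2, 0.8), (0.2, 0.8)]

belief = 0.5            # initial degree of belief
trajectory = [belief]
for lh, lnh in likelihoods:
    belief = update(belief, lh, lnh)
    trajectory.append(round(belief, 3))

print(trajectory)  # confidence rises, then falls back below its start
```

The point of the toy is simply that a longer evidence record need not mean a higher (or even comparable) degree of belief: what matters is how the new evidence bears on the old.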

This point of view is closely related to the formalization of fundamental uncertainty in terms of conditional probability, a possibility explored in a number of contributions to this volume (see Costantini and Garibaldi, Chapter 8; Fano, Chapter 4; Kyburg, Chapter 2). Keynes called attention to the 'coefficient of influence' (Carnap's 'relevance quotient', as introduced in Carnap, 1950), which may be defined as a measure of the relevance of additional evidence for the degree of acceptance of any given hypothesis (see Costantini and Garibaldi, Chapter 8, this volume). In general, additional evidence has more or less impact upon the degree of acceptance of any given hypothesis *H* depending on whether the weight of argument leading to its acceptance (or rejection) is increased or reduced. For example, we may conjecture that additional evidence would have greater influence upon acceptance/rejection of *H* if the weight of the corresponding inductive inference is increased. In other words, the coefficient of influence provides a link between the epistemic and the ontic aspects of probabilities. This is because the coefficient of influence is related *at the same time* to the degree of rational belief in hypothesis *H* for any given evidence *e* and to the way in which stochastic interdependence influences the structure of observations (that is, the internal configuration of *e*). Given a certain amount of new evidence *e**, it is reasonable to conjecture an inverse relationship between the degree of stochastic interdependence and the weight of inductive inference. For a high degree of interdependence makes the configuration of *e* unstable and reduces the likelihood that new evidence *e** will be conclusive with respect to the given hypothesis. On the other hand, a low degree of interdependence makes the configuration of *e* more stable and increases the weight of inductive inference.
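Carnap's relevance quotient has a standard formal definition, P(H|e)/P(H): values above 1 mark evidence positively relevant to the hypothesis, values below 1 mark negatively relevant evidence, and 1 marks irrelevance. The Python sketch below computes it from Bayes' rule; the numbers are invented purely for illustration.

```python
def relevance_quotient(prior_h, likelihood_e_given_h, prob_e):
    """Carnap-style relevance quotient: P(H|e) / P(H)."""
    posterior = likelihood_e_given_h * prior_h / prob_e   # Bayes' rule
    return posterior / prior_h

# Invented toy numbers: hypothesis H with prior P(H) = 0.3,
# evidence e with P(e|H) = 0.8 and overall P(e) = 0.5.
r = relevance_quotient(0.3, 0.8, 0.5)
print(r)  # 1.6 > 1: e is positively relevant to H
```

Note that the quotient reduces to P(e|H)/P(e), so it does not depend on the prior at all: it isolates how much the evidence itself bears on the hypothesis, which is what makes it a natural bridge between the epistemic reading (change in rational belief) and the ontic reading (structure of the observations) discussed above.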