Logic has very different roles in mathematics and in real life. In mathematics, the premises are certain, and thus proof provides conclusions which are certain also. In real life, the premises are assumptions, which one makes with varying and revisable degrees of confidence. Proof creates dilemmas, forcing one to use one's judgment to choose between accepting a troubling conclusion and reconsidering one's weaker assumptions.
But how do I revise my degrees of confidence? How do I choose between accepting troubling conclusions and reconsidering my weak assumptions? That is a lacuna in my thought. I have no answers, no methods, but muddle through as best I can, perhaps withstanding comparison because others are in the same boat.
Chapter one takes up the challenge of plausible reasoning, making it clear that logic is not enough. To get by in day-to-day life we make use of plausible reasoning, coping as best we can when things go wrong. And go wrong they must. The uncertainties that prevent the use of logic occasionally conspire against us, producing bad outcomes by bad luck. The lack of method sometimes leads us to muddle through to an implausible conclusion where a plausible one might have been found, had we more skill in the art of conjecture. Chapter one ponders logic and asks what qualitative considerations must constrain any attempt to extend it beyond the certain to embrace the plausible.
Chapter two is highly mathematical. It turns the constraints from chapter one into functional equations and solves them. A key question is: which functions are associative, that is, which solve f(x, f(y, z)) = f(f(x, y), z)? Chapter two shows that "plausible reasoning" is unique. There is only one way to do it.
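As an illustrative sketch (my example, not Cox's derivation), the ordinary product is one familiar solution of that functional equation, which a few lines of Python can spot-check on a grid of plausibility values:

```python
import itertools
import math

def f(x, y):
    # Candidate combination rule: the ordinary product.
    # It is one familiar associative solution of
    # f(x, f(y, z)) == f(f(x, y), z) on [0, 1].
    return x * y

# Spot-check associativity on a grid of plausibility values.
grid = [0.1, 0.3, 0.5, 0.9]
for x, y, z in itertools.product(grid, repeat=3):
    assert math.isclose(f(x, f(y, z)), f(f(x, y), z))
```

Of course, the real content of the theorem runs the other way: associativity and the other qualitative constraints force, up to a rescaling, this product form.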
There are two kinds of dramatic tension in Chapter Two.
First is the tension associated with the proof of a difficult theorem. The considerations in Chapter One looked rather trivial. Necessary, I grant you, but surely not sufficient to pin down the quantitative details of plausible reasoning. So how is the proof going to work?
Second is the tension that comes from already knowing part of the answer. Games of chance force us to do a form of plausible reasoning. We see the whole of our poker hand. We see some of the other players' cards, face up on the table. How do we bet?
For most of the twentieth century probability has been seen as inhering in certain objects. Dice have 6 faces. Roulette wheels have 36 numbered slots. Uncertainty is conceptualized as the unknowability of the future in those special circumstances in which identical conditions do not lead to the same outcome. We do not know what number will show, for the die is not yet cast.
We see the probability of 1/6 as being a property of the die, which has yet to make up its mind. The 1/6 is not in our mind, reflecting the limit of our knowledge. Sometimes the die rolls under the bed. It has made up its mind, but we cannot see. The 1/6 can then only be in our head, but we stick to randomization devices such as dice and roulette wheels and decks of cards, and tolerate these brief episodes of cognitive dissonance.
We have a well-worked-out and definite theory of plausible reasoning, and we restrict it to trials that we can repeat as often as we wish, letting us count the varied outcomes and compute their relative frequencies. How likely is the Riemann Hypothesis? It is either true or false. We do not know, but it is one or the other, and, trained to stay within the confines of logic, we feel guilty if we ponder whether it is likely or not.
If the proof merely shows the uniqueness of plausible reasoning, then we know what the quantitative rules must be: they must be the rules we have already learned when we studied probability theory. Will these rules extend beyond their established domain? Will we be tantalised by a unique extension, proved unique, but not constructed?
As the proof progresses, solutions to the equations are found, and the proof of Cox's theorem is revealed to be constructive. Cox doesn't just prove that there is only one way to do it; he shows what that way is.
The tension mounts. Either it will agree with probability theory or it won't. Either way would be awesome.
If it doesn't agree with probability theory, that is a major breakdown in mathematical logic. That would be a bit too awesome. It is more likely that there is a mistake in the proof. Notice the use of plausible reasoning in the previous sentence. What the fuck does "likely" mean? The proof either contains a mistake or it doesn't.
If it does agree with probability theory that is also seriously awesome.
Remember when you proved the law of large numbers using Chebyshev's Inequality? The requirement that the distribution have a finite variance seemed so necessary. Later, you proved the law of large numbers using generating functions and only needed a finite first moment. It was such a shock to discover that the second moment was needed for the simplicity of the proof, but not for the truth of the theorem.
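As a concrete reminder of the Chebyshev step, here is a small Python check, using a uniform distribution (my choice of example): the fraction of samples more than k standard deviations from the mean never exceeds 1/k².

```python
import math
import random

random.seed(0)

# Uniform(0, 1): mean 1/2, variance 1/12.
mu = 0.5
sigma = math.sqrt(1 / 12)
k = 2

samples = [random.random() for _ in range(100_000)]
tail = sum(1 for x in samples if abs(x - mu) >= k * sigma)
fraction = tail / len(samples)

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2.
# Here 2*sigma is about 0.577, wider than the whole support,
# so the empirical fraction is exactly 0 and the bound holds easily.
assert fraction <= 1 / k**2
```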
Agreeing with probability theory would be the same thing, repeated on a much grander scale. Randomisation devices would be exposed as red herrings. Probability is in the mind and arises whenever we don't know, even if there is no repetition and no frequency.
Which awesome outcome closes Chapter Two? Cox constructs laws of plausible reasoning that are the same as probability theory.
Probability theory gives us a rule for updating the likelihood of an issue that concerns us when we get new information to add to the old.
P(concern|old, new) = P(concern, new|old)/P(new|old)
This turns out to be the rule for plausible reasoning regardless of whether the uncertainty arises from ignorance or from chance. Which is just as well, because the distinction only worked if the dice never rolled under the bed.
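To make the rule concrete, take the die under the bed (a worked example of my own, not the book's). Suppose a glimpse reveals only that the face is even; that is the new information, and the concern is whether it shows a six:

```python
from fractions import Fraction

# A fair die: the "old" information assigns 1/6 to each face.
p = {face: Fraction(1, 6) for face in range(1, 7)}

def prob(event):
    """Probability of an event, computed from the old information alone."""
    return sum(q for face, q in p.items() if event(face))

# new = "the face is even"; concern = "the face is a six".
p_new = prob(lambda f: f % 2 == 0)               # P(new|old) = 1/2
p_joint = prob(lambda f: f == 6 and f % 2 == 0)  # P(concern, new|old) = 1/6
posterior = p_joint / p_new                      # P(concern|old, new)

print(posterior)  # 1/3: the glimpse raised 1/6 to 1/3
```

The same arithmetic goes through whether the die is still tumbling or has already settled out of sight; the update only cares about what we know.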
The mathematics gets a lot easier in Chapter Three, but I need to go back to my essay and add a postscript on the implications of implication.