## Friday, May 02, 2008

### Happy Talk

The final bit of my triptych, a modern-physical instantiation of Levy's Paradox, will be appearing (in a graduate session) at the Joint Sessions in Aberdeen in July (dream come true :-)

Alexander R Pruss said...

That's a really clever idea.

I skimmed through the first half or so, and so if the answer to my worry is in the second half, I apologize.

As a probabilist, I am very much worried about conditioning on zero-probability events. One easily gets into trouble that way.

However, I can do better than just this dire warning of "getting in trouble". Instead I can offer a challenge: Explain what you mean by these conditional probabilities.

You take a single case objective propensity view of probability. That is all fine and good for single cases. But that doesn't yield an account of enough conditional probabilities. One can use objective propensities to understand some conditional probabilities, namely those where one is conditioning on initial conditions. But here you're not conditioning on initial conditions but on outcomes. And one can use non-conditional probabilities to define conditional probabilities when one is conditioning on an event of non-zero probability, but here you're conditioning on zero-probability events.

Now, it is true that mathematicians do sometimes condition on events of zero probability. Thus, we can sometimes make sense of P(E|A=x) where E is an event, x is a constant and A is a random variable, even if P(A=x)=0. But this must be done carefully, using the Radon-Nikodym Theorem. And the Radon-Nikodym Theorem only yields a function that is unique up to sets of probability zero. In other words, we can sometimes define the function f(x)=P(E|A=x), but the function f will not be unique--any other function that differs from f only on sets of probability zero will also do the job. Consequently, this isn't really a good definition of P(E|A=x) for any particular value of x, but only a good definition of a class of functions each of which "counts as" P(E|A=x). But for your purposes you need a good definition of P(E|A=x) for a particular value of x.
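The worry can be given a concrete, Borel-Kolmogorov-style illustration (my own toy setup, not from the post): take (X, Y) uniform on the unit square and condition on the zero-probability event X = Y. Approximating that event by two different shrinking neighbourhoods gives two different conditional distributions for X, so "P(E | X = Y)" is not pinned down by the unconditional probabilities alone.

```python
import random

random.seed(1)

N = 1_000_000
eps = 0.01

xs1, xs2 = [], []
for _ in range(N):
    x, y = random.random(), random.random()  # uniform on the unit square
    # Two ways of approximating the zero-probability event "X = Y":
    if abs(y - x) < eps:                      # shrink the band |Y - X| < eps
        xs1.append(x)
    if x > 0 and abs(y / x - 1) < eps:        # shrink the wedge |Y/X - 1| < eps
        xs2.append(x)

m1 = sum(xs1) / len(xs1)  # tends to 1/2: X comes out uniform given "X = Y"
m2 = sum(xs2) / len(xs2)  # tends to 2/3: X comes out with density proportional to x
```

Both limits are legitimate versions of the conditional distribution given X = Y, yet they disagree about something as simple as the conditional mean of X; that is exactly the non-uniqueness the Radon-Nikodym construction leaves open.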

Enigman said...

Many thanks, but I don't understand why conditioning on 0-probability events should be a problem. My conditional probabilities arise from many single-case propensities, the restriction being to certain possible outcomes as you say. But the restrictions are justified by there being realistic scenarios involving sufficiently similar restrictions.

One guy may get an Integer, and if he does he wonders whether the other guy's Integer, if there is one, is likely to be bigger. He wonders nothing otherwise; but the event is possible, so he may so wonder. And then he is wondering about one of these conditional probabilities. In such a possible world, why not? Maybe there is no such numerical probability, or maybe there is.

But there won't be one and not one, if this world is possible. So I don't see why my argument needs a better definition than that (intuitive one). E.g. I don't assume that there are numerical probabilities, and I argue for implausibility not impossibility. I agree that it would be difficult to get a stronger result. Maybe I'd have to develop my own theory of probability, but then my resulting contradiction would just refute my own theory. But it may be a hopeful sign (for me) that the standard theory does encounter difficulties in this area.

enigMan said...

...or how about this analogy: Objectively fair coin-tosses might arise from the aggregate of the underlying indeterminism at the atomic level. A great many outcomes are possible, most of them not involving coins being tossed at all. So to say that, were there two of them, the chance of two heads would be 25% is to condition on unlikely outcomes. But intuitively it would not matter how unlikely fair coin-tosses were, so long as they were independent and the underlying probabilities were single-case propensities.
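The analogy can be sketched numerically (a toy model of my own, with a hypothetical probability p_toss standing in for how rarely the atomic-level outcomes yield a coin-toss scenario at all): however small p_toss is, the conditional frequency of two heads, given that two fair independent tosses occurred, stays at 25%.

```python
import random

random.seed(0)

N = 1_000_000
p_toss = 0.01  # hypothetical: chance that a coin-toss scenario arises at all

conditioned = 0
hh = 0
for _ in range(N):
    # Most atomic-level outcomes involve no coin tosses whatsoever.
    if random.random() < p_toss:
        conditioned += 1
        # Given the rare scenario: two independent fair tosses.
        if random.random() < 0.5 and random.random() < 0.5:
            hh += 1

freq = hh / conditioned  # conditional frequency of two heads, roughly 0.25
```

Making p_toss smaller only shrinks the sample of conditioned cases; it does not move the conditional frequency away from 25%, which is the intuition the analogy trades on.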