3 comments:
That's a really clever idea.
I skimmed through the first half or so, so if the answer to my worry is in the second half, I apologize.
As a probabilist, I am very much worried about conditioning on zero-probability events. One easily gets into trouble that way.
However, I can do better than just this dire warning of "getting in trouble". Instead I can offer a challenge: Explain what you mean by these conditional probabilities.
You take a single-case objective propensity view of probability. That is all fine and good for single cases. But that doesn't yield an account of enough conditional probabilities. One can use objective propensities to understand some conditional probabilities, namely those where one is conditioning on initial conditions. But here you're not conditioning on initial conditions but on outcomes. And one can use unconditional probabilities to define conditional probabilities when one is conditioning on an event of nonzero probability, but here you're conditioning on zero-probability events.
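The ratio definition mentioned here can be seen in a small simulation (a sketch of my own, not from the discussion; the events E and B below are arbitrary examples): estimating P(E|B) as P(E and B)/P(B) by counting works fine when P(B) > 0, but the very same ratio degenerates to 0/0 when the conditioning event has probability zero.

```python
import random

# Sketch: the ratio definition P(E|B) = P(E and B) / P(B), estimated by
# Monte Carlo counting.  E and B are illustrative events of my own choosing.
random.seed(0)
N = 100_000

# B: a uniform draw lands in [0, 0.5] -- positive probability.
hits_B = hits_EB = 0
for _ in range(N):
    a = random.random()
    in_B = a <= 0.5
    in_E = a <= 0.25          # E: the draw lands in [0, 0.25]
    hits_B += in_B
    hits_EB += in_B and in_E

print(hits_EB / hits_B)       # approximately 0.5 = P(E|B)

# B0: the draw equals exactly 0.3 -- a zero-probability event.  The same
# ratio is 0/0: no sample ever lands in B0, so the definition gives nothing.
hits_B0 = sum(random.random() == 0.3 for _ in range(N))
print(hits_B0)
```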
Now, it is true that mathematicians do sometimes condition on events of zero probability. Thus, we can sometimes make sense of P(E|A=x) where E is an event, x is a constant and A is a random variable, even if P(A=x)=0. But this must be done carefully, using the Radon-Nikodym Theorem. And the Radon-Nikodym Theorem only yields a function that is unique up to sets of probability zero. In other words, we can sometimes define the function f(x)=P(E|A=x), but the function f will not be unique: any other function that differs from f only on sets of probability zero will also do the job. Consequently, this isn't really a good definition of P(E|A=x) for any particular value of x, but only a good definition of a class of functions each of which "counts as" P(E|A=x). But for your purposes you need a good definition of P(E|A=x) for a particular value of x.
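The non-uniqueness point can be made concrete (a sketch of my own; the names f, g and midpoint_integral are illustrative): for A uniform on [0, 1] and E = {A <= 0.5}, one version of x -> P(E|A=x) is the indicator of [0, 0.5]. A second version that disagrees with it at the single point x = 0.3 (a set of probability zero) integrates to the same value, so measure theory cannot prefer either value of P(E|A=0.3).

```python
def f(x):
    # One version of P(E | A = x): the indicator of [0, 0.5].
    return 1.0 if x <= 0.5 else 0.0

def g(x):
    # Another version, altered on the null set {0.3}.
    return 0.5 if x == 0.3 else f(x)

def midpoint_integral(h, n=100_000):
    # Midpoint Riemann sum of h over [0, 1], standing in for the
    # expectation of h(A) with A uniform on [0, 1].
    return sum(h((i + 0.5) / n) for i in range(n)) / n

print(f(0.3), g(0.3))                              # the two versions disagree at 0.3...
print(midpoint_integral(f), midpoint_integral(g))  # ...yet integrate identically
```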
Many thanks, but I don't understand why conditioning on zero-probability events should be a problem. My conditional probabilities arise from many single-case propensities, the restriction being to certain possible outcomes, as you say. But the restrictions are justified by there being realistic scenarios involving sufficiently similar restrictions.
One guy may get an Integer, and if he does he wonders whether the other guy's Integer, if there is one, is likely to be bigger. He wonders nothing otherwise; but the event is possible, so he may so wonder. And then he is wondering about one of these conditional probabilities. In such a possible world, why not? Maybe there is no such numerical probability, or maybe there is.
But there won't be one and not one, if this world is possible. So I don't see why my argument needs a better definition than that (intuitive) one. E.g. I don't assume that there are numerical probabilities, and I argue for implausibility, not impossibility. I agree that it would be difficult to get a stronger result. Maybe I'd have to develop my own theory of probability, but then my resulting contradiction would just refute my own theory. But it may be a hopeful sign (for me) that the standard theory does encounter difficulties in this area.
...or how about this analogy: objectively fair coin-tosses might arise from the aggregate of the underlying indeterminism at the atomic level. A great many outcomes are possible, most of them not involving coins being tossed at all. So to say that, were there 2 of them, the chances of 2 heads would be 25%, is to condition on unlikely outcomes. But intuitively it would not matter how unlikely fair coin-tosses were, so long as they were independent and the underlying probabilities were single-case propensities.
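The analogy can be checked numerically (my own Monte Carlo sketch; the parameter p and the function name are illustrative): suppose the underlying indeterminism only rarely, with probability p, produces a pair of fair independent coin-tosses at all. Conditional on the pair occurring, the chance of two heads stays near 25% no matter how small p is, so the rarity of the conditioning outcome does not disturb the conditional probability.

```python
import random

random.seed(1)

def conditional_two_heads(p, runs=200_000):
    # On each run, a pair of fair independent tosses occurs only with
    # probability p; we estimate P(two heads | the tosses occur at all).
    pairs = heads_heads = 0
    for _ in range(runs):
        if random.random() < p:          # the tosses happen at all
            pairs += 1
            if random.random() < 0.5 and random.random() < 0.5:
                heads_heads += 1
    return heads_heads / pairs

# Whether tosses are common (p = 0.5) or rare (p = 0.05), the
# conditional estimate stays near 0.25.
for p in (0.5, 0.05):
    print(p, round(conditional_two_heads(p), 3))
```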