Monday, January 19, 2009

Our Lewisian paradise

So-called ‘modern’ philosophy began with Bacon and Descartes, but it all looks pretty dated (if sometimes deep) up until around 1973, when David Lewis gave analytical philosophy the standard PW (possible worlds) analysis of counterfactual truth. A comparable event might be Cantor’s gift (according to Hilbert) of a set-theoretical paradise to standard maths (which is less than ideal if one is applying the maths in theoretical physics, I think). What PW talk gives philosophers, it seems to me, is a new way to argue fallaciously, rivalled only by the use of statistics in politics. But anyway, here is Lowe’s (2006: 12) succinct description of Lewis’s analysis:
A counterfactual of the form ‘If it were the case that p, then it would be the case that q’ is said to be true if and only if, in the closest possible world in which p is the case, q is also the case – where the ‘closest’ possible world in question is the one in which p is the case but otherwise differs minimally from the actual world.
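That truth condition can be sketched as a toy model. The worlds, the two propositions, and the ‘count the differences’ closeness metric below are all made up for illustration; nothing here is Lewis’s own formal apparatus, just the bare shape of his definition:

```python
# A toy sketch of Lewis's truth condition for counterfactuals.
# Worlds, propositions, and the closeness metric are illustrative
# assumptions, not Lewis's formal apparatus.

# Each world assigns truth values to two atomic propositions, p and q.
worlds = [
    {"p": False, "q": False},  # the actual world
    {"p": True,  "q": True},
    {"p": True,  "q": False},
]
actual = worlds[0]

def distance(w1, w2):
    """Crude 'closeness': the number of propositions on which two worlds differ."""
    return sum(w1[k] != w2[k] for k in w1)

def counterfactual(p, q):
    """'If it were the case that p, it would be the case that q' is true
    iff q holds at the closest p-world to the actual world."""
    p_worlds = [w for w in worlds if w[p]]
    if not p_worlds:
        return True  # vacuously true if p holds at no world
    closest = min(p_worlds, key=lambda w: distance(w, actual))
    return closest[q]
```

In this toy model the (p and not-q)-world differs from the actual world in one respect, while the (p and q)-world differs in two, so `counterfactual("p", "q")` comes out false; everything hangs on the metric, which is exactly the ‘what counts as close?’ worry below.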
Lowe followed that with what would, until relatively recently, have been a stunningly fallacious argument against mental physicalism, all wrapped up in PW talk: a gift to any intelligent physicalist, who’s thence able to refute an objection to her position that comes with all the modern trappings of the authority of modern philosophy. But to step back a bit, what’s wrong with Lewis’s analysis? To begin with, subjunctive talk equivocates like anything. And then there’s the problem of saying what is, in the relevant way, possible; a problem Lewis solved in an implausibly Humean way, which was at least elegant in a principled way, if evidently false. And of course, what is to count as ‘close’?
Suppose I’m trying to tell you something, and I know (since I know what it is) that you’ll find it hard to understand what I’m saying. I might say, ‘If you knew what I was trying to tell you, you’d know how difficult this is.’ But of course, if you did know then it would be trivially easy to tell you about it. Perhaps I meant that after I’d told you, and you’ve understood me, you’d agree that it was difficult to tell you. But suppose it’s so hard to tell you that you never do get it. Does my meaning really depend upon which of the possible worlds in which I’ve told you is most like the actual world, in which you didn’t get it?
What if the difference is just a few neurones that you were born with, for example, but those neurones also make it hard for you to understand why it was difficult to tell you (since you then find such things so obvious)? What if lots of things; and so basically, how could all that really affect the meaning—and hence the truth—of what I’m saying? After all, we do seem to have got bogged down in an awful lot of fallacious arguments and counter-arguments since the 1970s; which may be ideal for professional philosophers in a stupid economy, but less so for those applying logic to the real world.

11 comments:

Alrenous said...

But suppose it’s so hard to tell you that you never do get it. Does my meaning really depend upon which of the possible worlds in which I’ve told you is most like the actual world, in which you didn’t get it?

I would just slightly redefine 'hard.' Something that is impossible is maximally hard, whether we consider PWs or not. If you then empirically determine that I cannot get it, then it's time to assume it's impossible and reason accordingly, and thus maximally hard.

On the other hand, it's interesting that the human brain can entertain self-inconsistent concepts.

In fact, it's trivial.

In further fact, I've tried hard to eradicate this capability in myself. (It turns out the solution is simple, yet with many enlightening side-effects.)


In other news I'd like to ask you about aleph-0 and aleph-1. Can I have your email address or can I give you mine? (Or we could even discuss it here, or not at all. Not fussed.)

Enigman said...

I don't see how redefining 'hard' could possibly achieve anything. Sometimes, as science progresses, it's necessary to introduce technical terms. And we often need to precisify our terms. But how does redefining the ordinary words of modern English offer to help us do anything? I just don't get that.

I was just thinking of it being hard for me to tell you something because I was not very good at explaining such things to people like you, all of that being pretty broad and naturally vague. And it just seems obvious to me that the details of why are normally going to be irrelevant to my meaning.

Self-inconsistent concepts - e.g. if Goldbach's conjecture is true, then that there are natural numbers for which the conjecture is false? I suppose one entertains such a concept if one searches quite hard for such numbers, even if one does that in order to use how one fails as a clue to a proof of the conjecture's truth...

...I like maths (much more than philosophy), so ask away about the alephs. I don't believe they're consistent, nor that they're inconsistent, but I do believe they're (probably) not realistic numbers, and in becoming more sure of that I've read a bit about them, but not as much as I should if I was going to teach them.

My email address is at the bottom of the "other stuff" page that my index page (on the left of this blog) links to, but it's easier if you just ask below. Nothing else is happening on this blog (your mysterious question is the most exciting thing to happen here for a while :)

Alrenous said...

Basically, I realized something.

The average real number has an aleph-0 number of digits.

Nobody had ever told me this, it just occurred to me. My real question: do mathematicians make a habit of not thinking about this?

Once I realized this, I started wondering why the continuum hypothesis hasn't been proven yet. The real numbers have a very infinity squared flavour - they're aleph-0 deep and some infinity wide.

Now, this will be qualitative, but I learned while taking physics I'm actually quite good at doing math qualitatively. It isn't, of course, really rigorous, which is why I'm asking you.

The shortest possible description of any integer is always finite. No matter how large an integer you choose, I can always write it down in a finite amount of time. This means to write down all integers with the smallest number of characters necessary, you will end up writing down aleph-0 characters; it's a normal infinite sum of n->∞ of finite elements.
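The claim in that paragraph, that all the (finite) integer descriptions together use only countably many characters, can be illustrated by enumerating all finite strings over a finite alphabet in a single list, shortest first. The decimal alphabet here is just an illustrative choice:

```python
from itertools import count, product

# Sketch: every finite string over a finite alphabet can be placed in one
# list indexed by the natural numbers, so the set of all finite
# descriptions is countable (size aleph-0).
ALPHABET = "0123456789"

def finite_strings():
    """Enumerate all finite decimal strings: first length 1, then length 2, ..."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# The first few entries of the enumeration:
gen = finite_strings()
first = [next(gen) for _ in range(12)]
# Every integer's decimal numeral appears at some finite position in this
# single list, even though the list as a whole never ends.
```

The point of ordering by length is that each string gets a finite index; an ordering that tried to exhaust all strings starting with ‘0’ first would never reach the rest.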

However, to write down even one (average) real number takes aleph-0 characters. To write them all down with the fewest necessary characters is to write down a total of an infinite sum of aleph-0s, something which is (I think) equivalent to the normal aleph-0^aleph-0 definition of c.

Of course this doesn't directly prove anything. Sure, it may take that many to write them all down, but that's not directly what we're interested in. We just want to know how many there are.

Here's where I get unsure again. I think that if we could replace each real number with a marker, which we count, that would contradict the assumption that the shortest number of characters necessary to write one down is aleph-0.

Problem is that's clearly not any kind of obvious. (It is also the exact kind of problem one expects when doing math or physics qualitatively.)

Re-reading what I've written, it would seem that if, for example, you try to build an aleph-0 set out of elements with aleph-0 elements, you won't actually need all aleph-0 parts of the elements because if you try you will run out of space in aleph-0. You could put them in 1-to-1 correspondence with integers and, somehow, define a function, which won't require aleph-0 characters to write down.

In other words, if you have real numbers or indeed any object requiring aleph-0 symbols to fully represent, you cannot build a full set of them without summing the elements to an infinity one bigger.

Similarly, if you have a set of aleph-1 elements, you should have to go up to aleph-2 to get a full set.

Again, I'm still several steps short of a proof, but if I can get this far, it seems very odd to me that (if) trained mathematicians haven't gotten much, much farther.

Of course I may just be fooling myself. So, what do you think?

Actually I want to try writing it down one more time.

A real number has aleph-0 degrees of freedom. I think this is the essence of Cantor's diagonal; you can get to aleph-0 other numbers by only changing a finite number of elements of any real number. Ergo the smallest possible size of a set of aleph-0 elements is c.

And here I relate it back to the shortest possible way to write down all real numbers to 100% accuracy.
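The diagonal construction mentioned above can itself be sketched in a finite-precision way: given any list of reals (represented here as digit strings, an illustrative simplification), build a number that differs from the n-th entry in its n-th digit, so it can appear nowhere in the list. The sample digit strings are arbitrary:

```python
# A finite-precision sketch of Cantor's diagonal construction: given a
# list of digit sequences, produce one differing from the n-th entry at
# its n-th digit, hence absent from the list. (Real reals have aleph-0
# digits; these truncated strings are only illustrative.)
def diagonal(listed):
    """listed: list of digit-strings; returns a digit-string not in it."""
    return "".join("5" if row[i] != "5" else "6"
                   for i, row in enumerate(listed))

sample = ["141592", "718281", "414213", "302585", "577215", "693147"]
d = diagonal(sample)
# d differs from sample[i] at position i, for every i, so d is not in sample.
```

The choice of ‘5 unless it’s 5, else 6’ is just one convenient way to guarantee a digit-by-digit difference; any rule that never reproduces the diagonal digit would do.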

Alrenous said...

But how does redefining the ordinary words of modern English offer to help us do anything? I just don't get that.

English as used by ordinary people is an approximation. If you're going to logically examine it, you can't get very far, any more than you can reproducibly figure out where sunspots are on a blurry photo of the sun.

If you really want to be rigorous you have to guess at what they were trying to approximate and analyze that instead. Then, look at the result and see if it makes sense in light of the approximation. If not, guess that they were approximating something else and try again.

So, basically, could by 'hard' we mean something like 'impossible'? The answer is yes - this is consistent with the approximations used in ordinary English.

It doesn't have to reference other worlds at all. And indeed, this definition is simply another guess at what ordinary English is approximating. Ultimately it truly means neither of these things - ordinary English approximates logic but it doesn't approximate itself. The way I remind myself of this is to imagine that English is really just a set of symbols for emotion. It's not logical or illogical per se, it just exists.

I'm not sure I understand what you're getting at with the bit about Goldbach's conjecture.

Enigman said...

Regarding English, I don't see modern English as an approximation (although it's vague enough), as I don't see what it could be an approximation to. We form sentences to communicate, and we successfully communicate when we get roughly the right idea across. What's important seems to be the overlap between the two (or more) thoughts associated with the sentence by the communicants. The idea that there was something more precise that language was trying to capture is, I think, fallacious. We often need to make our expressions more precise, to communicate effectively, but that's subtly and profoundly different, I think.

I'm no linguist, so I don't know what else to say, but for an example, if I ask you to make me a cup of tea, I have all sorts of thoughts there. E.g. there's what "a cup of tea" means to me, and there're my expectations about your possible reactions to my request (and there's my tone of voice and such too), etc. And "a cup of tea" might mean something different to you (e.g. a different range of flavours and such), and your social expectations might be different (e.g. via your different upbringing). But much of that complexity is nothing to do with language.

There is a philosophical question about what "a cup of tea" is, which is sort of ontological, but if the answer turns out to involve inherent vagueness I won't be too bothered or surprised. My philosophical interests are more scientific, e.g. what does "a real number" mean. I think that mathematicians have done a lot of work in that area, throughout the last century, about which I'm no expert. I think that simple infinities (which would have size aleph-0 if I'm wrong) are indefinitely extensible. Consequently there are only as many real numbers as can be completely written down. But that's a pretty outlandish view these days, for a Platonist like myself.

I imagine that mathematicians tend to view such questions formalistically nowadays. In some axiomatic structures there's one answer, in others another, and in some there may be no answer (it must be added as another axiom if you want it). So if you want to ask about the real numbers, you first have to say exactly what you mean by "the real numbers" (Wittgenstein seemed to get his constructivism from the inherent vagueness of language, but I could well be totally wrong about that:) And the main complicating factor that I find in this area is that "the real numbers" does have a standard sense, which is given by the axioms of standard ZFC set theory.

I find that complicating because I don't think those axioms are a true (never mind complete) description of arithmetic (the structural properties of individuals as individuals). So I suspect that the continuum hypothesis hasn't been proved because those axioms require another axiom to yield the answer. A good paper on that is Does Maths Need New Axioms?

Alrenous said...

Right, sorry about that. You answered me and then I was poleaxed by a cold.

English approximates truth. Truth is much more rigid than most English sentences, but the rigidity is unnecessary in daily life.

Indeed the entire point is overlapping thoughts, but what's the point if your thought is inherently false? Basically, it's fine as long as it approximates something that isn't false. The precision comes in when the approximation is no longer living up to snuff. We refine it until it works again and then carry on as before.

Again, all this hinges on the human brain's capacity to process inconsistent thoughts, spitting out something that doesn't usually blow up in the face of reality.

A computer would have to make do with explicitly acknowledging the insufficient information, and indeed many of the stupider things robots do are because they can't do either the human thing or the computer thing.

I may be playing devil's advocate here. I'll find out soon, I'm sure.



I should mention that if language really is inherently vague, I should have run into that by now. But aside from some glaring terminology holes, I don't have any problem with it. If I need more precision I can just use more words until I get enough.

There is a problem with the fact that for your thoughts to overlap mine you have to already have all the necessary primitives. But then that just makes babies more amazing.


Since my training is in physics, there's only one real numbers; the one that leads to the particular fancy maths that physicists use. I often forget that's actually just one of them. :)

But yes I completely missed the axiom angle. It does seem that math's axioms are a bit of a mess right now.

In fact I'd say the same thing about philosophy. I'd also boldly state that this isn't a coincidence, but that's really an entirely different topic.

I should mention explicitly that you've answered my question.

Enigman said...

Hi, I'd assumed it was something like a cold, but was wondering if I'd answered your question. On language, I find it absurdly complicated to think about, so thanks for your thoughts for me to ponder upon some more...

Enigman said...

English approximates truth.
But truth is ordinarily the correspondence between our words and the world. What I say is true insofar as the thought I communicate through my words is realistic. There is then an analogical notion of truth, concerning the fit of our thoughts (conceived of as pictures in our brains) to the world (conceived of as outside our brains), but the literal truth of a sentence (e.g. the cat is on the mat) doesn't involve that so much as the cat being on the mat.

Maybe English approximates that analogical truth. But my thought that the cat is on the mat, and what I mean by the reality of the cat being on the mat are, when my thought is true, well nigh identical. By "the real world" I mean the world as pictured in my head, together with some notions of an underlying reality that are also in my head.

By vagueness I just mean that words are rough and ready tools, all of which might be clarified into something more definite, were circumstances to make that wise, whence all of them are a little vague. You wouldn't notice it until you got into a philosophically paradoxical circumstance. Our words are definite enough for ordinary situations, and when they aren't we make them so.

But there's also the complication that the literal meanings of our words are objective, insofar as they're the meanings of words, and yet also subjective insofar as they're meanings, i.e. our thoughts, in our heads.

Basically, like with real numbers, there're lots of conceptions of, and some philosophical theories of, truth. There probably is a deep connection with maths, as well as the superficial ones (via Frege and Russell, Tarski and Kripke, etc.).

Alrenous said...

Ah, so you're more concerned with the match between your words and your ideas, rather than your words and the world.

Is that a fair oversimplification?


But there's also the complication that the literal meanings of our words are objective, insofar as they're the meanings of words,

I'm not sure I understand this bit. (The correspondence between your ideas and mine is probably low.)

It seems to demand that I point out that encodings are arbitrary, and that the only way to disambiguate is to consult a subjective authority.

But then I have to notice that the encoding is arbitrary but the meaning isn't. If you're doing physics, only one structure fits, and if you choose the wrong meaning you're not examining physics anymore.
If you're doing math, the math you're doing derives from the meaning, not the symbols, and if you try to use subjective authority to disambiguate something you'll either be just wrong or start examining a different math than the one you think you're examining.

So, yeah I think I don't understand. Come again?

Nevertheless I like this train of thought. It seems that the encoding is inherently subjective yet the meaning (once an axiom is chosen) is objective.

But then I realize I'm equivocating on 'meaning' somewhat. There's two; logical meaning and encoded meaning. The first is the logical consequences of the chosen axioms. The second is the fact that to any particular subjective authority, a particular encoding will bring to mind a particular concept. This second clearly doesn't have to be consistent with anything separate; it's not objective. It only can become wrong if I use a word like 'hydrogen' to mean something different than everyone else, and then when they prove that oxygenating hydrogen produces water, I try to disagree and say no, it produces laughing gas.

And even then, I'm not wrong for thinking of what they encode as 'nitrogen' when I read 'hydrogen'; I'm simply wrong in thinking they use an identical encoding.

Though a consequence of this is that their logics aren't consistent under my encoding, which is, I realize, one of the ways you can tell when your encoding isn't matching.

And indeed that's what I used above to find that I probably don't understand your bit above. On the other hand, it got me thinking, so good job there.

(Open letter to self; I'm not always capable of doing such thoughts, of running away with something and I honestly don't see the point of commenting when I can't, for many reasons. This is the source of most delays in responding.)


I agree; there's almost certainly a deep connection with maths.

Enigman said...

Hi, yes, that's a fair oversimplification. I have a thought (about the world (as I see it)) and try to express it in words, and so I come up against the fact that the literal (public) meaning of the words is something that I'm guessing at even more so than I'm guessing at the objective nature of the world. And then I notice that the literal meaning is something that has evolved to be no more definite than it had to be, that it allows lots of subjective variation, and intersubjective vagueness, and that the versatility of natural language even relies on it allowing that...

So I have this definite thought about the world (e.g. that the cat is on the mat), because I learnt English as a baby, and because of some pre-linguistic observation (e.g. that animal of that kind is on that thing (which I associate with this stuff)), and that thought is, I guess, part of the literal meaning of my expression of it ("the cat is on the mat") simply in virtue of my using words that way (as one of the English-speaking people)... (I'm as confused by what I'm writing as you are, if that's any comfort:)

Alrenous said...

Ha! That's good then, because I know exactly what you're getting at.

The difficulty is in talking about encoding when you only have encodings to talk with. Luckily the human brain is good at parsing equivocation, again by comparing the details to the overall structure.

But anyway, weren't we trying to apply this to the idea of counterfactuals? Having worked out a shared encoding, we can now communicate successfully.