On 17 May 2005, at 09:06, Stathis Papaioannou wrote:
I agree with Lee's and Jonathan's comments, except that I think there is something unusual about first person experience/ qualia/ consciousness in that there is an aspect that cannot be communicated unless you experience it (a blind man cannot know what it is like to see, no matter how much he learns about the process of vision). Let me use the analogy of billiard balls and Newtonian mechanics. Everything that billiard balls do by themselves and with each other can be fully explained by the laws of physics. Moreover, it can all be modelled by a computer program. But in addition, there is the state of being-a-billiard-ball, which is something very strange and cannot be communicated to non-billiard balls, because it makes absolutely no difference to what is observed about them. It is not clear if this aspect of billiard ball "experience" is duplicated by the computer program, precisely because it makes no observable difference: you have to be the simulated billiard ball to know.
Before someone says that billiard balls are not complex enough to have an internal life, I would point out that neither is there any way to deduce a priori that humans have conscious experiences. You have to actually be a human to know this.
You don't need to postulate a special mechanism whereby mind interacts with matter. The laws of physics explain the workings of the brain, and conscious experience is just the strange, irreducible effect of this as seen from the inside.
I agree. Let me try to explain briefly why I think that, although the first person cannot be reduced to any pure third person notion, it is nevertheless possible to explain in a third person way why the first person exists and why it cannot be so reduced.
I will consider a machine M. The machine is supposed to be a sort of mathematician, or a theorem-proving machine for arithmetic. I suppose the machine is sound: this means that if the machine proves some proposition p, then p is true. I will also suppose that the machine is programmed so as to assert all the propositions she can prove, in some order. So one day she proves 1+1=2, another day she proves that 17 is prime, and so on. So I will use "M proves" and "M can prove" interchangeably.
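Such a theorem-enumerating machine can be pictured, very crudely, as a program that outputs only true statements in some fixed order. A toy sketch of my own (the "theorems" here are just addition facts, not genuine arithmetic proofs):

```python
# Toy illustration (not the real construction): a "sound machine" that
# enumerates, in some order, statements it can establish. Here the
# "theorems" are simply the true addition facts a+b=c over small numbers,
# so every statement the machine asserts is true (soundness).
def sound_machine(limit):
    """Yield true arithmetical statements in a fixed order."""
    for a in range(limit):
        for b in range(limit):
            yield f"{a}+{b}={a + b}"   # asserted only because it is true

theorems = list(sound_machine(4))
```

The point of the toy is only the shape of the situation: the machine emits "1+1=2" at some stage, never "1+1=3", and her output order is fixed by her program.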
As everyone knows, or should know since Goedel 1931, it is possible to represent *provability by the machine M* in the language of the machine (here the language is first order arithmetic, but the details are irrelevant in this short explanation). I will write Bp for BEW(GN(p)), which is the representation of "M proves p" in the language of the machine. BEW(GN(p)) really means that there is a number which codes a proof of the formula coded by GN(p). GN(p) is the Godel number of p, the traditional encoding of p in arithmetic. BEW is for beweisbar: provable, in German.
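Any injective coding of formulas into numbers can play the role of GN. The real arithmetization uses prime-power tricks; a crude byte-based coding of mine already shows the idea:

```python
# Minimal sketch (my illustration) of a Goedel numbering GN: formulas,
# taken as strings, are coded injectively into natural numbers.
# Real arithmetization codes sequences with prime powers; here we simply
# read the UTF-8 bytes of the formula as one big base-256 number.
def GN(formula: str) -> int:
    return int.from_bytes(formula.encode("utf-8"), "big")

def GN_inverse(n: int) -> str:
    """Recover the formula from its Goedel number (coding is injective)."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
```

What matters for the argument is only that distinct formulas get distinct numbers and that the coding is mechanically invertible.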
It has been proved by Hilbert, Bernays and Loeb that such a machine satisfies the following conditions, for any p representing an arithmetical proposition:
1) If M proves p then M proves Bp (sort of introspective ability: if the machine can prove p, the machine can prove that the machine can prove p)
2) M proves (Bp & B(p->q)) -> Bq (the machine can prove that if she proves p and she proves p->q, then she proves q). This means that the machine can prove that she follows the modus ponens inference rule (which is part of arithmetic). It is a second introspective ability.
3) The machine M proves "1)", i.e. M proves Bp -> BBp: the machine proves that if she proves p, then she can prove that she can prove p.
If the machine is sound, the machine is necessarily consistent. That means the machine will not prove 1 + 1 = 3, or any false arithmetical statement. To make things easier, I suppose there is a constant false in the machine language, written f. I could have used the proposition 1+1=3 instead, but "f" is shorter.
So "M is consistent" is equivalent to saying that (NOT Bf) is true about M, where B is the provability predicate of the machine (I will not repeat this each time).
Goedel's second incompleteness theorem asserts that if M is consistent then M cannot prove that M is consistent. This can be translated in the language of M: (NOT Bf) -> (NOT B (NOT Bf)).
We have two things: (NOT Bf) -> (NOT B (NOT Bf)) is true *about* the machine, but also the machine is able to prove it about itself: (NOT Bf) -> (NOT B (NOT Bf)) can be proved by the machine.
Of course from this you know that (NOT Bf) is true about the machine, but that the machine cannot prove it.
I hope you are able to verify whether a proposition of classical propositional logic is a tautology or not. In particular, NOT A has the same truth value as A -> f. In particular, consistency (NOT Bf) is equivalent to (Bf -> f).
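For readers who want to check such tautologies mechanically, here is a small truth-table checker of my own (Bf is treated as a single propositional atom, which is all that is needed at the propositional level):

```python
# A small truth-table checker for classical propositional logic (my sketch).
# Formulas are nested tuples; the string "f" is the constant false,
# any other string is a propositional atom (e.g. "A", "Bf").
from itertools import product

def ev(formula, val):
    if formula == "f":
        return False
    if isinstance(formula, str):              # propositional atom
        return val[formula]
    op, *args = formula
    if op == "not":
        return not ev(args[0], val)
    if op == "imp":
        return (not ev(args[0], val)) or ev(args[1], val)
    if op == "iff":
        return ev(args[0], val) == ev(args[1], val)

def atoms(formula, acc=None):
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        if formula != "f":
            acc.add(formula)
    else:
        for sub in formula[1:]:
            atoms(sub, acc)
    return acc

def tautology(formula):
    names = sorted(atoms(formula))
    return all(ev(formula, dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

# NOT A has the same truth value as A -> f:
assert tautology(("iff", ("not", "A"), ("imp", "A", "f")))
# hence consistency NOT Bf is equivalent to Bf -> f (Bf read as one atom):
assert tautology(("iff", ("not", "Bf"), ("imp", "Bf", "f")))
```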
So if the machine is sound, and thus consistent, (Bf -> f) is true about the machine, but cannot be proved by the machine.
We see that in general, for any p, Bp -> p is true about the machine (it is the soundness of the machine), but the machine cannot prove it for every p: in particular, she cannot prove Bf -> f, which is equivalent to her own consistency.
This means that the sentence (Bp & p) is equivalent to Bp, for any p, but the machine cannot prove this for every p.
So if we define a new sort of proof of p by Cp = (Bp & p), then although exactly the same arithmetical p will be proved, the very logic of Cp will differ from the logic of Bp. (Bp <-> Cp) is true about the machine but not provable by the machine. Actually NOT B(Bp <-> Cp) and NOT C(Bp <-> Cp) are also true about the machine (and also not provable by the machine).
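The propositional core of this step can be checked by truth table: if soundness Bp -> p holds, then Bp and Cp = (Bp & p) must have the same truth value (treating Bp and p as independent atoms). The machine's predicament is precisely that she cannot prove the soundness premise. A sketch of mine:

```python
# Truth-table check (my illustration): soundness Bp -> p propositionally
# forces the equivalence Bp <-> (Bp & p), with Bp and p treated as
# independent atoms. The machine proves the same arithmetic with B or C;
# what she lacks is a proof of the premise Bp -> p.
from itertools import product

def check():
    for Bp, p in product([False, True], repeat=2):
        soundness = (not Bp) or p            # Bp -> p
        Cp = Bp and p                        # Cp = Bp & p
        if soundness and (Bp != Cp):         # Bp <-> Cp must follow
            return False
    return True
```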
There is thus a big gap between the truth on the machine, and what the machine can prove about itself.
B can be used to model a form of scientific self-referential belief, or third person self-reference. It is a belief notion because the machine cannot in general prove Bp -> p, as would be required for a knowledge notion. But the machine can prove Cp -> p (given that Cp is Bp & p).
Actually it can be shown that Cp obeys the traditional axioms for knowledge. C can be used to model a form of machine first person knowledge. With comp, it is arguably not even a "modelization" (but I will not develop this point here).
Important Remark: B can be translated in the language of the machine. It is a third person self-reference: the machine talks about itself through a third person description of itself (here = Goedel number): BEW(GN(p)). But C cannot be translated in the language of the machine. You will never find an equivalent of BEW for C. The simplest attempt would be Cp = BEW(GN(p)) & TRUE(GN(p)), but by Tarski's theorem TRUE itself cannot be translated in the language of the machine. So the C logic is self-referential only insofar as the machine does not attempt to give a name, or a third person description, to the owner of the knowledge. The definition Cp = Bp & p is what I call the Theaetetus' definition of knowledge (knowledge as true belief, as discussed in Plato's Theaetetus).
It can be shown that Cp gives rise to a logic of subjective antisymmetrical time, equivalent to Brouwer's theory of consciousness. It is the modal logic S4Grz:
t: Cp -> p
k: Cp & C(p->q) -> Cq
4: Cp -> CCp
Grz: C(C(p->Cp)->p)->p (Grz is responsible for the antisymmetry of the modal accessibility relation among worlds, for those who remember what I said on Kripke semantics).
Modus Ponens: if p and p->q are theorems, then q is a theorem.
Necessitation: if p is a theorem, then Cp is a theorem (whatever the machine proves, she knows).
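The axioms t, 4 and Grz can be verified semantically on any finite reflexive, transitive and antisymmetric Kripke frame, i.e. a finite partial order. A small sketch of mine checking them on a two-world frame, for every valuation of p:

```python
# My sketch: semantic check of S4Grz axioms on a tiny Kripke frame whose
# accessibility relation R is reflexive, transitive and antisymmetric
# (a two-point partial order 0 <= 1), over all valuations of the atom p.
from itertools import product

W = [0, 1]
R = {(0, 0), (0, 1), (1, 1)}

def ev(formula, w, val):
    op = formula[0]
    if op == "atom":
        return val[w]
    if op == "imp":
        return (not ev(formula[1], w, val)) or ev(formula[2], w, val)
    if op == "box":                # Cp: true in all accessible worlds
        return all(ev(formula[1], v, val) for v in W if (w, v) in R)

p = ("atom",)
imp = lambda a, b: ("imp", a, b)
box = lambda a: ("box", a)

t = imp(box(p), p)                                   # t:   Cp -> p
four = imp(box(p), box(box(p)))                      # 4:   Cp -> CCp
grz = imp(box(imp(box(imp(p, box(p))), p)), p)       # Grz

def valid(formula):
    """True iff formula holds at every world under every valuation."""
    return all(ev(formula, w, dict(enumerate(bits)))
               for bits in product([False, True], repeat=len(W))
               for w in W)
```

On this frame valid(t), valid(four) and valid(grz) all hold; dropping antisymmetry (e.g. making 0 and 1 see each other) breaks Grz, which is the sense in which Grz forces the antisymmetry mentioned above.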
I hope this gives an idea of why we can "meta-define" the knower (by linking it to the truth, like Theaetetus), and explain why it is equivalent to the sound believer, although neither the believer nor the knower can be aware (believe or know) of that equivalence.
Cp is just one possible Theaetetical variant of the arithmetisable Bp.
I sum up and anticipate:
Bp = the scientific self-referential believer (3-person)
Cp = the knower (the unnameable self-referential 1-person)
Dp = Bp & NOT (B NOT p) = the conjectural believer (Brian, this one is close to the "observer" I promised to explain)
Ep = Bp & NOT (B NOT p) & p = the conjectural knower (and this one too, it is almost the feeler or smeller ... :)
We have that Bp <-> Cp <-> Dp <-> Ep is true about the machine, but none of those equivalences is believable, or knowable, ...
Now, if you restrict the arithmetical p to those p accessible by the Universal Dovetailer (= COMP!), you get more:
you get: p <-> Bp <-> Cp <-> Dp <-> Ep, but again, by incompleteness, none of those equivalences is believable or knowable ... by the machine.
Now, with G and G* (*), all those logics split into a communicable part and a non-communicable part. I could use them to capture the true sentences concerning the conjectural knower and the conjectural believer on the states accessible by the UD; following the UDA, these should give the logic of *measure one* on the consistent extensions of the machine, as conjecturally believed or known by them, and it is here that I get quantum logics (but that was not the main point here).
To sum up: Godel's theorems make it possible to explain why no machine can know (1-person) that she is a "machine" (though she is obviously 3-person describable). A machine can never know she is a machine, but she can bet on it.
(*) G is the logic of Bp and NOT (B (NOT p)) as being provable by the machine.
G* is the logic of Bp and NOT (B (NOT p)) as being true about the machine.
It is the gap between G and G* which makes inescapable all those nuances between provability, knowability, probability 1, etc...
I recall: Smullyan's FOREVER UNDECIDED = excellent introduction to the logic G (the logic of Bp)
Boolos' THE LOGIC OF PROVABILITY = excellent treatise on the logic G and G* (called GL and GLS in that book).