Hi Terren, apologies for commenting on your post with some delay.


On 06 Jul 2011, at 19:54, Terren Suydam wrote:

> Hey Bruno, thanks for your comments... I'm a little clearer now on your stance on consciousness and intelligence, I think. I have a few more questions and concerns. Regarding consciousness, my biggest concern is that you're not really explaining consciousness so much as describing it.

Yes, that is true. I think computationalism can explain consciousness, except for a remaining gap, but it can explain why such a gap is unavoidable. So in a sense I do think that comp explains consciousness as completely as possible. I will try to convey this in my further comments below. It does not really describe it, though, because the explanation rules out any description of it. See below.

> To be sure, the mathematical/logical framework you elucidate that captures aspects of the 1st/3rd person distinctions is remarkable, and as far as I know the first legitimate attempt to do so. But if we're talking TOE, then an explanation of consciousness is required.

Right. But note that the notion of first person experience already involves consciousness, and that we are assuming comp, which from the start assumes that consciousness makes sense. The "explanation" per se comes when we have understood that physics emerges from numbers, and this in the double way imposed by the logic of self-reference. All logics (well, not all, really) are split into two parts: the provable and the non-provable (by the machine under consideration).

> Using the descriptor Bp to signify a machine M's ability to prove p is fine. But it does not explain how it proves p.

It proves p in the formal sense of the logician. "Bp" supposes a translation of every p of the modal language into a formula of arithmetic. Then Bp is the translation of beweisbar('p'), that is, provable(gödel number of p). If the machine, for example, is a theorem prover for Peano Arithmetic, "provable" is a purely arithmetical predicate. It is defined entirely in terms of zero (0), the successor function (s), and addition + multiplication, together with some part of classical logic. It is not at all obvious that this can be done, but it is "well known" by logicians, and indeed it is done by Gödel in his fundamental 1931 incompleteness paper.

> Ditto for the induction axioms;

Those are simply the scheme of axioms:

[P(0) and for all n (P(n) -> P(s(n)))] -> for all n P(n)

You cannot prove that for all n and m, n+m = m+n without it.

Of course, such a machine talks only about numbers, so to define provability *in* the language of the machine, you have to represent the formulas and the finite pieces of proofs by numbers (just as we have to represent them by strings in some language to communicate them);

> Löbian machines are mere descriptions, absent explanations of how a machine could be constructed that would have the ability to perform those operations.

Those are very simple (for a computer scientist). I give this as an exercise to the most patient of my students.

> Taking the biological as an example, it is self-evident that we humans can talk about and evaluate our beliefs. But until we have an explanation for *how* we do that at some level below the psychological, we're still just dealing with descriptions, not explanations. Taking the abstract step towards logical frameworks helps in terms of precision, for sure. But as soon as you invoke descriptors like Bp there's an element of "and then the magic happens."

The machine lives in Platonia, so I give her as much time as she needs.

Let me give a simple example. The machine can prove/believe the arithmetical laws, because those are axioms. They are a sort of initial, instinctive belief.

axiom 1: x + 0 = x
axiom 2: x + s(y) = s(x + y)

Just from that the machine can prove that 1+1 = 2 (that is, that the addition of the successor of 0 with the successor of 0 gives the successor of the successor of 0):

indeed:

s(0) + s(0) = s(s(0) + 0), by axiom 2 (with x replaced by s(0), by the logical substitution rule: the machine can do that);

but s(0) + 0 = s(0), by axiom 1 (again, it is easy to give the machine the ability to match a formula against an axiom);

so s(0) + s(0) = s(s(0)), by replacing s(0) + 0 with s(0) in the preceding line.
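The derivation above is mechanical enough to run. Here is a minimal sketch (mine, not Bruno's) of the two addition axioms as rewrite rules, with numerals represented as nested applications of the successor s:

```python
# Numerals: 0 is the empty tuple, s(x) is the pair ('s', x).
ZERO = ()

def s(x):
    """Successor: s(x)."""
    return ('s', x)

def add(x, y):
    """Apply the two axioms until no rule fires:
       axiom 1: x + 0    = x
       axiom 2: x + s(y) = s(x + y)"""
    if y == ZERO:            # axiom 1
        return x
    return s(add(x, y[1]))   # axiom 2, matching y = s(y')

one = s(ZERO)
two = s(s(ZERO))
# The derivation in the text: s(0) + s(0) = s(s(0)), i.e. 1 + 1 = 2.
assert add(one, one) == two
```

Each recursive call corresponds to one application of axiom 2, and the base case to the single application of axiom 1, exactly as in the three-line proof above.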

Amazingly enough, with just the multiplication axioms:

axiom 3: x * 0 = 0
axiom 4: x * s(y) = (x * y) + x

you can already prove all the sigma_1 sentences, that is, the ones having the shape "there exists n such that P(n)", with P(n) decidable/recursive. This is called sigma_1 completeness, and it is equivalent to Turing universality. That is certainly amazing: just a bit of logic + addition and multiplication already gives Turing universality.
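Axioms 3 and 4 can be sketched the same way, reducing multiplication to repeated addition over the same numeral representation (again my illustration, not Bruno's code):

```python
# Numerals: 0 is the empty tuple, s(x) is the pair ('s', x).
ZERO = ()

def s(x):
    return ('s', x)

def add(x, y):
    # axiom 1: x + 0 = x ; axiom 2: x + s(y) = s(x + y)
    return x if y == ZERO else s(add(x, y[1]))

def mul(x, y):
    # axiom 3: x * 0 = 0 ; axiom 4: x * s(y) = (x * y) + x
    return ZERO if y == ZERO else add(mul(x, y[1]), x)

def num(n):
    """Build the numeral s(s(...s(0)...)) for a Python int n."""
    return ZERO if n == 0 else s(num(n - 1))

# e.g. 2 * 3 reduces, axiom by axiom, to the numeral for 6.
assert mul(num(2), num(3)) == num(6)
```

Of course this only evaluates terms; the claim in the text is the much stronger fact that the *theory* built on these four axioms is already sigma_1 complete.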

This means also that the machine, without induction, is already a universal dovetailer (once asked to dovetail on all that she can prove). But such a machine is not Löbian: for that she still needs the infinity of induction axioms. That infinity is recursively computable, so the result remains a machine!

And that machine is Löbian, which technically means that not only can the machine prove all the true sigma_1 sentences, but she can prove, for each (false or true) sigma_1 sentence p, that p -> Bp. In a sense, a Löbian machine is a universal machine which knows (in that technical sense) that she is universal.

> Believe me, I'm not expecting source code, so much as a clarification that we don't quite have a TOE yet.

We have it. The "ontological TOE" (the ROE) is just elementary arithmetic (without induction). Such a theory already emulates (in "Platonia") all machines, and thus all the Löbian machines, which are considered as the internal observers in arithmetic. Here we have to be careful not to make Searle's error, and to remember that by emulating a machine, you don't become that machine! (In particular your brain emulates you, but your brain is not you; the UD emulates all machines, but is only one particular, non-universal, machine.)

> Moving on, one technical question I have is how you get the basis for the quanta/qualia distinction - namely the property of non-communicability. Unfortunately I probably won't understand the answer as the Solovay logics are beyond me... but I hope to be able to understand how non-communicability manifests as a logical property of a machine.

It is a consequence of what is called "the diagonalization lemma" (Gödel 1931). It asserts that for each arithmetical predicate P (like being prime, being the Gödel number of a theorem of the machine, etc.) you can find a sentence k such that PA (say) will prove k <-> P(k).

So for each predicate you can find a so-called fixed point: the k above.
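The diagonalization lemma has a computational cousin, Kleene's second recursion theorem: every transformation F of programs has a "fixed point" program that behaves like F applied to itself. A minimal sketch via self-application (the names `fix` and `diag` are mine, purely illustrative):

```python
def fix(F):
    """Return a fixed point of the functional F, built by diagonalization:
    diag applied to itself, with the self-application delayed inside a
    lambda so the recursion terminates."""
    def diag(x):
        return F(lambda *args: x(x)(*args))
    return diag(diag)

# Example: factorial obtained as the fixed point of a non-recursive functional.
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
assert fact(5) == 120
```

The analogy is loose but real: in both cases a diagonal construction (applying a thing to a code of itself) manufactures the self-referential object whose existence the lemma asserts.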

Now, take the predicate "provable", which Gödel has shown to be definable in Peano Arithmetic (or Principia Mathematica, whatever), that is, definable in the formal language of the machine under consideration.

Now, if P(n) is definable, then ~P(n) is also definable (if P is definable, the negation of P is also definable).

So by the diagonalization lemma, you can find a sentence k such that PA will prove:

k <-> ~P(k)

From this you can prove that if the machine is ideally correct, she will never prove k. Indeed, if she proves k, she will prove ~P(k), and so will lose self-referential correctness (and thus correctness): she will prove k, and she will prove that k is not provable.

To be sure, Gödel assumed only omega-consistency (weaker than correctness), and Rosser extended the result to all simply consistent machines. But I don't want to go into too much detail, and I do assume the machines are correct, for other reasons.

But you see that k is also true. Indeed, by k <-> ~P(k), k asserts its own non-provability, and k is indeed not provable. So k is an example of a true but non-provable, or non-communicable, sentence.

That is the first incompleteness result. It is not difficult to show a concrete example of such a sentence k: indeed, ~Bf is such an example. Self-consistency is incommunicable by the consistent machine. (It is what I like to call a protagorean virtue.) Here f is the constant false, and t is the constant true. Or you can take f == '0 = s(0)', and t == '0 = 0'.

More difficult to prove is the fact that if the machine believes also in the induction axioms, then the machine can prove that IF she is consistent, THEN she cannot prove that she is consistent:

~Bf -> ~B~Bf, or (if you see that ~Bf = Dt): Dt -> ~BDt; or again, Dt -> DBf.

Löb found the maximal generalization of that sentence: B(Bp -> p) -> Bp. With p = f, it should be easy to see that Löb generalizes Gödel (hint: in classical propositional logic ~p is equivalent with p -> f, so you need just take p = f in Löb's formula).
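Spelling out the hint: substituting p = f in Löb's formula does recover Gödel's second incompleteness theorem. A sketch of the steps, in my reconstruction using the same B/D notation:

```latex
\begin{align*}
  &B(Bf \to f) \to Bf       && \text{L\"ob's formula with } p = f\\
  &B(\lnot Bf) \to Bf       && \text{since } Bf \to f \;\equiv\; \lnot Bf\\
  &B(Dt) \to Bf             && \text{since } \lnot Bf \;\equiv\; Dt\\
  &\lnot Bf \to \lnot B(Dt) && \text{contraposition}\\
  &Dt \to \lnot B(Dt)       && \text{consistency is not provably consistent}
\end{align*}
```

The last line is exactly the formula Dt -> ~BDt stated above.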

> Another concern I have is that there seems to me a lot of imprecision in the language used to correlate the consequences of the Löbian machine with the folk-psychological terms we all use. For instance, I've seen you refer to Bp in separate contexts as M's ability to prove p, and as M "believing" proposition p.

It is "belief" as used in cognitive science and epistemology, not the belief of religion. (Although there are no differences, actually, but that is a very hot debate.) It is weird, because that use of "belief" is very common. It can only shock people who believe religiously (pseudo-religiously) in the propositions of science. But we always start from beliefs and get beliefs.

> That is confusing precisely because proof and belief are actually opposed in certain human-psychological contexts, such as belief in god. This concern extends to the language you invoke in your "discourse with Löbian machines" which I feel takes a lot of liberties with anthropomorphizing, and sneaks in a lot of folk-psychological concepts. Giving you the benefit of the doubt, I understand that evangelizing these ideas means being able to make non-technical analogies in the interest of accessibility. But it is also possible that in one context you mean Bp to mean "prove" and in another you mean Bp to mean "believe" in semantically non-identical ways,

I try not to. You can feel that the theorems will apply to you and to any machine which:

1) is a machine (obvious for the machine, and equivalent to comp for the human),

2) believes in the elementary axioms of PA (so believes that x + 0 = x, etc.),

3) is arithmetically correct (this is the "simplifying" assumption of studying *that* class of machines, motivated by interviewing correct machines to get the correct physical laws).

> and this lets you "cover more ground" in making the leap to the aspects of consciousness that we can analogize from. In other words, imprecise language may allow you to claim a more comprehensive mapping from Löbianity to psychology than is actually possible.

It might be the case, but I don't think so. You might try to find a specific example.

> I see more evidence of imprecision in your willingness to describe your salvia experiences as totally non-personal.

To be sure, I published all my work in 1988, except for the discovery of the arithmetical quantum logic, which I published in the nineties; and I discovered salvia in 2008.

> The salvia experiences are personal experiences.

But they sometimes lead the experiencer to a total amnesia which makes it feel like a non-personal experience.

> Now, I have no experience with salvia myself. However, the fact that such experience is available to you afterwards tells me that some aspect of your self is still present during the experience, regardless of how it feels.

Well, possibly so.

> Contrast this with the experience of a baby, who actually has no psychological self yet, or an extremely rudimentary one, and tell me you are able to remember what it's like to be a baby.

Some experiences are described like that. You feel you are becoming a baby, or becoming what you were before birth, or before the big bang, or beyond. It is just a feeling, and it is reported as such by the experiencer. This is used for inspiration, or only for doubting some prejudices. I was willing to believe that consciousness and time were constructs of the third hypostasis (Bp & p), but the salvia experience makes me feel that consciousness is more primitive than time, indeed.

OK, I take the opportunity of the explanation above to explain what the (Bp & p) stuff is, and to clarify why consciousness, or first person self-apprehension, leads to a notion which is beyond words.

Gödel's incompleteness theorem asserts Dt -> ~BDt (consistent -> not provably consistent). So Dt, that is ~Bf, is not provable. But ~Bf is equivalent with Bf -> f. So, in general, Bp -> p is not provable. So in general Bp does not imply p, as a knowledge predicate or operator should. So it makes sense to define, like Theaetetus, Kp (the knowledge of p) by Bp & p (knowledge = true (justified) belief). Of course we have Kp -> p (trivially, given that Kp is Bp & p, and from a & b you can deduce b). Indeed, Kp defined in this way does obey the usual axioms of (even temporal) knowledge theories.

So you see that incompleteness justifies the working of the classical theory of knowledge for the machines.

Even more interesting is that Bp & p leads to an operator which is not definable in the language of the machine, and this explains a lot of confusion in philosophy and theology, including why consciousness cannot be defined (only lived). The 1-I (captured by Bp & p) has no name from the point of view of the machine.

You might try to define it as (Bp & Tp), with T an arithmetical truth predicate. But such a predicate cannot exist. Indeed, if it existed, then by applying again Gödel's diagonalization lemma to ~T(n) you could find a k such that PA would prove k <-> ~T(k), and from this you can prove that PA is inconsistent. So already Truth is not definable by the machine (although she can define many useful approximations). Similarly, it can be proved that no notion of knowledge by a machine can be defined by the machine. Classical (Theaetetical) knowledge is already like consciousness: we can't define it. But again, we can define the knowledge of machines simpler than us, derive their theology, and lift it onto ourselves, in a betting way, at our own risk and peril. We do that when we say "yes" to the doctor: it *is* a theological act, and people have the necessary right to say "no".
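The liar-style argument against the truth predicate can be written out in a few lines (a standard reconstruction in the notation above, with the corner quotes standing for the Gödel number of a sentence):

```latex
\begin{align*}
  &T(\ulcorner p \urcorner) \leftrightarrow p
      && \text{assumed truth predicate, for every sentence } p\\
  &\mathrm{PA} \vdash k \leftrightarrow \lnot T(\ulcorner k \urcorner)
      && \text{diagonalization lemma applied to } \lnot T(n)\\
  &k \leftrightarrow \lnot k
      && \text{combining the two; a contradiction, so } T \text{ cannot exist}
\end{align*}
```

This is Tarski's undefinability theorem: the same diagonal trick that gives Gödel's unprovable sentence shows that arithmetical truth cannot be expressed inside arithmetic.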

Now, we can study the Bp & p logic at the modal level, and so can the machines too. This is a trick which makes it possible for us to bypass our own, or the machine's, limitations.

The (Bp & p) hypostasis (the first person point of view) has many of the features of the "universal soul" of Plotinus (the Greek mystical inner God). The machine lives it, but cannot give a name to it. It answers Ramana Maharshi's koan "Who am I?". The Löbian machine's answer is "I don't know, but I can explain why I *cannot* know it, in case I (my third person 3-I, or body) am a machine".

To get the logic of measure one in the UD multiplication, Bp & p is not enough; we need a weakening and a strengthening, which are given by Bp & Dt, and Bp & Dt & p.

You might take a look at the Plotinus paper, but to be honest, it requires familiarity with logic.

> My final concern, as I've tried to elaborate on previously, is your willingness to posit consciousness as a property of a (virgin) universal machine. For me this is pretty counter-intuitive

For me too. That is why I have already written 8 diaries from the salvia experience. I see it, but I can't believe it :)

It is very counter-intuitive. And I can't dismiss the experience as a mere hallucination, because it is the very existence of that hallucination which is counter-intuitive.

> (which is saying something because I'm with you on the UDA!).

Wow. I am very glad to hear that.

> It means my computer is conscious in some form, regardless of (or in spite of) the program it is running. And that for me leads to a notion of consciousness that is extremely weak. It is why I compared it to panpsychism previously, because panpsychism similarly attributes consciousness to aspects of reality (assuming MAT) that lead to an extremely weak form of consciousness that deprives it of any explanatory potential. In your case at least it is possible in principle to explain what it is about a universal machine that gives rise to consciousness (and that, without any recourse to Löbianity or anything beyond universality).

When I read salvia reports, I was quite skeptical. I don't like the idea that the non-Löbian machine is already conscious. But then the math is OK. Such machines lack only the ability to reflect on the fact. They believe t, Bt, BBt, BBBt, etc., but they cannot believe Bp -> BBp. So they have a far simpler notion of themselves, and they lack the full self-introspective self-awareness of the machines having the induction axioms. Note also that although non-Löbian universal machines are in principle very simple, they are still far from trivial.

> I realize you're not saying for certain that universal machines are conscious, and that this is somewhat informed by your salvia experiences. But for where my head is right now, consciousness ought to be explainable in terms of some kind of cybernetic organization that goes well beyond "mere" universality.

Good idea to put "mere" in quotes. It is just a flabbergasting fact that addition and multiplication already lead to Turing universality, and that is not trivial at all to prove. But afterwards it shows that universality is cheap. It explains why nature recurrently builds universal structures.

> In my view of things, bacteria and viruses are not conscious because they lack a nervous system that would satisfy the cybernetic organization I have in mind. I am interested in your proof that they are universal, btw.

We agree, I think. All universal machines have a sophisticated cybernetic organization, yet one sometimes hidden in an apparent simplicity. Bacteria have very complex series of regulator genes, which make it possible to program them for addition and multiplication (or simpler, but still universal, tasks). Viruses too, at least in combination with their hosts.

I think also that eukaryotic cells are already the result of a little bacteria colony, so that we are swarms of bacteria, somehow. The cybernetic organization does not need neurons; it can use genes and "meta-genes" (genes regulating the action of other genes). In fact a bacterium like E. coli is an incredibly complex structure, with very subtle self-regulating actions.

> I'm also wondering if you have an English-language explanation of the MGA... I recall seeing one a long time ago.

Try this: http://old.nabble.com/MGA-1-td20566948.html Let me know if you have a problem.

> Apologies for the length of this response!

You are welcome.

Bruno
http://iridia.ulb.ac.be/~marchal/

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To post to this group, send email to everything-list@googlegroups.com. To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.