On 15 Jul 2011, at 19:54, Terren Suydam wrote:

Hi Bruno,

Roughly speaking, my main struggle with your wonderful arguments is
making the leap from the domain of mathematical logic to the one and
only domain we can be sure of as conscious, namely biological human
consciousness, and this without rejecting comp. Unfortunately I am
hindered by my lack of fluency in mathematical logic. See below for
comments.

On Tue, Jul 12, 2011 at 11:17 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:
Hi Terren

Apologies for commenting on your post with some delay.

No worries about the delay. I play email chess and have had games over
a year old, so I am used to being patient :-]

<snip>
To be sure, the
mathematical/logical framework you elucidate that captures aspects of
1st/3rd person distinctions is remarkable, and as far as I know, the
first legitimate attempt to do so. But if we're talking TOE, then an
explanation of consciousness is required.

Right. But note that the notion of first person experience already involves consciousness, and that we are assuming comp, which from the start assumes that consciousness makes sense. The "explanation" per se comes once we understand that physics emerges from the numbers, and this in the double way imposed by the logic of self-reference. All logics (well, not all, really) are split into two parts: the provable and the non-provable (by the
machine under consideration).

I think the explanation of how physics emerges from the "number
theology" as you put it is a great contribution and certainly *part*
of an explanation of consciousness, especially in that it reduces the
mind/body problem to computer science, as you say.

But it is not enough to "merely" deal with the mind/body problem. The
hard problem of how qualia arise needs to be explained.


I think that the original mind-body problem is, or at least includes, the "hard problem". The hard aspect has been intentionally dismissed by the behaviorist and positivist schools (like the Vienna Circle). In the frame of comp, that is what AUDA should explain, and what UDA formulates.




I know you
have identified a logical framework that is capable of distinguishing
qualia and quanta from the point of view of the lobian machine, but
again, that strikes me as a description, not an explanation.

The explanation comes from the fact that such a distinction is made necessary. Machines necessarily encounter the "hard" mind-body problem, by the logic of self-reference. We know, having built a simple correct machine, that Bp and Bp & p are equivalent (they prove the same arithmetical propositions), but the machine cannot know that (by the logic), and the two points of view cannot be reconciled.




Another way to put it perhaps is that such a logical framework may
well be a *necessary* condition of a machine that can experience
qualia, but not a sufficient one.

If it is not sufficient, I am not sure it makes sense to accept the "doctor" proposition.




An example of a hypothesis that
takes this further towards an explanation is that an experiencing
machine needs to be embodied (a closed system) in some context (even
if in platonia) with a boundary that can be perturbed as a result of
that embodiment (i.e. what we think of as a sensory apparatus);

But this is automatically taken into account. You expect that the doctor does not just copy your brain, but that he reconstitutes it relative to your (most probable) environment. A brain has many inputs (from the eyes, and from the brain stem). And the measure problem comes from the fact that the UD also reconstitutes you in many environments.

Note also that the argument (both UDA and AUDA) does not necessitate that consciousness supervenes only on the biological brain; the "generalized brain" might include the environment, even the whole physical reality. That appears at step seven, where you can eliminate the neurophysiological hypothesis (used only in steps 1-6 for pedagogical purposes).




and
that the machine synthesize these perturbations within the context of
a recursively updated model of "the world", grounded in the patterns
generated by those perturbations, and this model is the content of its
experience. Once the machine develops a model of its world sophisticated
enough to include itself, it perhaps achieves Lobianity, although my
grasp of mathematical logic is too limited to say, unfortunately.

Löbianity is very cheap. Peano Arithmetic has an implicit "model" of itself from the start. This is due to the fact that "provable" is an arithmetical predicate. Of course a complex and deep Löbian machine will have a far more sophisticated self-representation, but this will not change the logic of self-reference (as long as the machine is correct about that self-representation).




This hypothesis is what I happen to believe, but I'm not attempting to
argue for it or defend it here (if I were, I'd include much more
detail!)  My point here is only that I think there's an explanatory
gap that is possible to bridge, but that the self-reference logics
that give rise to incommunicable beliefs don't bridge that gap....
more on this later.

The solution of the hard problem is that the machine has the experience of the gap, and can explain why the gap is not bridgeable. The explanation is that there is necessarily, from the machine's point of view, a real, unbridgeable gap.





Using the descriptor Bp to signify a machine M's ability to prove p is
fine. But it does not explain how it proves p.

It proves p in the formal sense of the logician. "Bp" supposes a translation of each p of the modal language into a formula of arithmetic. Then Bp is the translation of beweisbar('p'), that is provable(gödel number of p). If the machine, for example, is a theorem prover for Peano Arithmetic, "provable" is a purely arithmetical predicate. It is defined entirely in terms of zero (0), the successor function (s), and addition and multiplication, together with some part of classical logic. It is not at all obvious that this can be done, but it is "well known" by logicians, and indeed it was done by Gödel
in his fundamental 1931 incompleteness paper.

When you say "if the machine is a theorem prover", are you referring
to a trivial machine? Something you can assign to your students?

Yes. I come back to this below. Now, the notion of triviality is relative, and starting from a simple theory like PA, you need to be Gödel to find it in the theory itself. That is a major discovery in science. But if the students are familiar with the notion of an interpreter, and with a bit of logic programming, it becomes, starting from a relatively high-level programming language, a tedious exercise.



If yes, then I struggle to see how we can relate such a machine to the
consciousness we have access to (our own), see below.

OK. It is not an easy point.


If no, then I
struggle to see how invoking a 'theorem prover' is not a "and then the
magic happens" leap of faith.

<snip>
Löbian machines are mere descriptions, absent
explanations of how a machine could be constructed that would have the
ability to perform those operations.

Those are very simple (for a computer scientist). I give this as an exercise to
the most patient of my students.

Then as above, I struggle to see how we can interpret the biological
machines we are familiar with (namely, us) in terms of Löbian logic.
Is human language an adequate substitute for the precise logical
domain of arithmetic and Gödelian numbering of propositions?  Natural
language is so messy and imprecise, but I may be missing the point.

In natural language, we confuse all the modalities. We easily confuse ~Bx with B~x (cf. the confusion between atheism and agnosticism). Lucas and Penrose, on Gödel, confuse Bp and Bp & p, and that confusion appears also in all easy explanations of the mind-body problem. People often confuse Bp and Dp, or tend to believe that Bp -> p, or that Bp -> Dp, which is indeed the case for most modal logics studied before the discovery of Löb's theorem (B(Bp -> p) -> Bp). I recall that B is put for Gödel's beweisbar, and p is any arithmetical proposition. So modal logic helps a lot, in both philosophy and math (provability/consistency logics).
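
(A toy illustration, in Python, of the first confusion. This is only a sketch, not the arithmetical setting of the interview: it reads B as "true in every accessible world" in a tiny two-world Kripke model, and exhibits a situation where ~Bp holds while B~p fails.)

# World w sees two worlds u and v; p holds at u only.
accessible = {'w': ['u', 'v'], 'u': [], 'v': []}
true_at = {'p': {'u'}}                 # worlds where p is true

def B(world, prop):
    """prop is believed at world if it holds in every accessible world."""
    return all(w in true_at[prop] for w in accessible[world])

def B_not(world, prop):
    """not-prop is believed at world if prop fails in every accessible world."""
    return all(w not in true_at[prop] for w in accessible[world])

print("Bp at w :", B('w', 'p'))        # False, so ~Bp holds at w (agnosticism)
print("B~p at w:", B_not('w', 'p'))    # also False: ~Bp does not give B~p (atheism)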





Taking the biological as an
example, it is self-evident that we humans can talk about and evaluate our beliefs. But until we have an explanation for *how* we do that at
some level below the psychological, we're still just dealing with
descriptions, not explanations. Taking the abstract step towards
logical frameworks helps in terms of precision, for sure. But as soon as you invoke descriptors like Bp there's an element of "and then the
magic happens."

The machine lives in Platonia, so I give her as much time as she needs. Let me give a simple example. The machine can prove/believe the arithmetical laws, because those are axioms. They are a sort of initial, instinctive belief.

axiom 1:   x+0 = x
axiom 2:   x + s(y) = s(x + y)

Just from that the machine can prove that 1+1 = 2 (that is, the addition of the successor of 0 with the successor of 0 gives the successor of the
successor of 0):

indeed:

s(0) + s(0) = s(s(0) + 0) by axiom 2 (with x replaced by s(0) by the
logical substitution rule: the machine can do that)
but s(0) + 0 = s(0), by axiom 1 (again, it is easy to give the machine
the ability to match a formula with an axiom)
so s(0) + s(0) = s(s(0)), by replacing s(0) + 0 with s(0) in the preceding
line.
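
(For the concrete-minded, here is a tiny Python sketch, purely illustrative and not the machine itself, which applies axioms 1 and 2 as rewrite rules and prints the same two proof steps:)

# Peano numerals as nested tuples: 0 is '0', s(t) is ('s', t).
def show(t):
    return '0' if t == '0' else 's(' + show(t[1]) + ')'

def add(x, y):
    """Reduce x + y using only axiom 1 (x+0 = x) and axiom 2 (x+s(y) = s(x+y))."""
    if y == '0':
        print(show(x) + ' + 0 = ' + show(x) + '   by axiom 1')
        return x
    rest = add(x, y[1])
    print(show(x) + ' + s(' + show(y[1]) + ') = s(' + show(x) + ' + ' + show(y[1]) + ')   by axiom 2')
    return ('s', rest)

one = ('s', '0')
print('result: ' + show(add(one, one)))     # s(s(0)), i.e. 1+1 = 2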

Amazingly enough, adding just the multiplication axioms:

axiom 3: x * 0 = 0
axiom 4: x * s(y) = (x * y) + x

you can already prove all the sigma_1 sentences, that is, the ones having the shape "there exists an n such that P(n)", with P(n) decidable/recursive. This is called sigma_1 completeness, and it is equivalent to Turing universality. That is certainly amazing: a bit of logic + addition and multiplication gives
already Turing universality.

This means also that the machine, without induction, is already a universal dovetailer (once asked to dovetail on all that she can prove). But such a machine is not Löbian: it still needs the infinity of induction axioms. That
infinity of axioms is recursively enumerable, so it remains a machine!
And that machine is Löbian, which technically means that not only can the
machine prove all the true sigma_1 sentences, but she can prove, for each (false or true) sigma_1 sentence p, that p -> Bp. In a sense, a Löbian machine is a universal machine which knows (in that technical sense) that
she is universal.
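
(The dovetailing itself is a very simple control structure. Here is a minimal Python sketch, purely illustrative: the "programs" are just Python generators standing in for an enumeration of machines, each given one more step on every round:)

from itertools import islice

def program(i):
    """Stand-in for the i-th program: here it just counts from i forever."""
    n = i
    while True:
        yield ('program', i, 'step value', n)
        n += 1

def dovetail():
    """Run program 0 one step; then programs 0 and 1 one step each; then 0,1,2; ..."""
    active = []
    k = 0
    while True:
        active.append(program(k))
        k += 1
        for p in active:
            yield next(p)

for out in islice(dovetail(), 10):
    print(out)     # every program eventually gets arbitrarily many steps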

But I am not a dovetailer.

You are right. My fault: the "that" in the last paragraph refers to the universal machine + the induction axioms (and I assume the universal machine is presented in a logical system, or I take the least first-order logical specification of the universal machine; a priori a universal machine is not an axiomatic system, but it can easily be transformed into one).

The universal dovetailer itself is not even a universal machine, given that it has no inputs and no outputs. But this depends on the definition chosen for "universal machine".




Does a machine in your framework need to
dovetail on what it can prove for us to explain how it gets access to
its beliefs?

It does not need that. Usually machines and observers are not conceived as dovetailers, except when they do explicit exploration in searching for a proof.



If no, do you think it is important to explain how
biological machines like us do have access to our beliefs?

That is crucial indeed. But this is exactly what Gödel solved. A simple arithmetical prover has access to its beliefs, because the laws of addition and multiplication can define the prover itself. That definition (the "Bp") can be implicit or explicit, and, like a patient in front of a description of his brain, the machine cannot recognize itself in that description, yet the access is there, by virtue of its built-in ability. The machine itself identifies itself only with the Bp & p, and so will never be able to acknowledge the identity between Bp and Bp & p. That identity belongs to G* minus G. The machine will have to bet on it (to say "yes" to the doctor).




 If the
answer to that is no, are you just taking it on faith that assuming
comp, any machine that can access its own beliefs is an implementation
of a Löbian machine?

Not at all. It is a theorem. All self-referentially correct machines are Löbian, once they are universal and can prove the induction axioms. All recursively enumerable extensions of Peano Arithmetic, or of equivalent theories, are Löbian.



Maybe this is easy for you to prove, I may be
missing that as well.

It is not easy, but a minimal amount of familiarity with mathematical logic makes it rather easy. It follows from standard proofs of Gödel's theorem.



Do you have an explanation for how Löbian self-reference occurs in
biological machines? Is natural language required?

I don't think natural language is required. On the contrary, I would say that natural language will usually entail a departure from Löbianity, due to the confusion described above. Humans, and any machines "embodied in a complex reality", will usually have a non-Löbian supplementary layer to handle belief revision. We don't need that to solve the mind-body problem, which is better handled with ideal machines in an empty environment (closed-eyes meditation!). The non-monotonic supplementary layer is of course the crucial ingredient for having a machine capable of going through any form of concrete life struggle. But that's AI, not fundamental cognitive/physical science.





Believe me, I'm not expecting source code, so much as
a clarification that we don't quite have a TOE yet.

We have it. The "ontological TOE" (the ROE) is just elementary arithmetic (without induction). Such a theory already emulates (in "platonia") all machines, and this all the Löbian machines, which are considered as the internal observers in arithmetic. Here we have to be careful of not doing Searle's error, and to remember that by emulating a machine, you don't become that machine! (in particular your brain emulates you, but your brain is not you; the UD emulates all machines, but is only one paricular, non
universal, machines).

I agree in the big picture, but I'm not sure you can say the TOE is
complete without some more explanation.

It is! In the sense of not necessitating any other axioms (than elementary arithmetic, *without* induction). Then you need only *definitions* to proceed. The reality = arithmetic without induction. The observer = arithmetic with induction. The first emulates the second, and the physics (and the other modalities) is extracted from the interview of the second, when emulated by the first.



What does ROE stand for?

Realm of Everything. It is the ontological part of the TOE. It is what we take as existing or true independently of ourselves.






Moving on, one technical question I have is how you get the basis for the quanta/qualia distinction - namely the property of noncommunicability.
Unfortunately I probably won't understand the answer as the Solovay
logics are beyond me... but I hope to be able to understand how
noncommunicability manifests as a logical property of a machine.

It is consequence of what is called "the diagonalization lemma" (Gödel
1931).

It asserts that for each arithmetical predicate P (like being prime, being the Gödel number of a theorem by the machine, etc.) you can find a sentence
k such that PA (say) will prove k <-> P(k).

So for each predicate you can find a so-called fixed point. The k above.

Now, take the predicate "provable", which Gödel has shown to be definable in
Peano Arithmetic (or principia mathematica, whatever), that is, it is
definable in the formal language of the machine under consideration.
Now if P(n) is definable, then ~P(n) is also definable (that is, not P(n): if P is
definable, the negation of P is also definable).

So by the diagonalization lemma, you can find a sentence k such that PA will
prove:

k <-> ~P(k)

From this you can prove that if the machine is ideally correct, she will never prove k. Indeed, if she proved k, she would prove ~P(k), and so would lose self-referential correctness (and thus correctness): she would prove k
and at the same time prove that k is not provable.

To be sure, Gödel assumed only omega-consistency (weaker than correctness), and Rosser extended the result to all simply consistent machines. But I
don't want to go into too much detail, and I do assume the machines are
correct, for other reasons.

But you see that k is also true. Indeed, by k <-> ~P(k), k asserts its own non-provability, and k is indeed not provable. So k is an example of a true
but non-provable, or non-communicable, sentence.

That is the first incompleteness result. It is not difficult to show a
concrete example of such a sentence k. Indeed ~Bf is such an example:
self-consistency is incommunicable by the consistent machine (it is what I
like to call a Protagorean virtue). f is the constant false, and t is the
constant true. Or you can take f == '0 = s(0)', and t == '0 = 0'.
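
(A programming cousin of this fixed point, just an analogy and not the arithmetical construction itself, is the quine, as in Kleene's recursion theorem: for any transformation of programs there is a program that applies it to its own description. A minimal Python sketch:)

# The two code lines below print an exact copy of themselves:
s = 's = %r\nprint(s %% s)'
print(s % s)

# The same trick yields programs "talking about" their own code,
# the computational analogue of a sentence k with k <-> P(k).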

Thanks, I understand how you derive 'incommunicable' now, as the set
of propositions that are true but not provable (as in Gödel's
theorem).

OK.





More difficult to prove is the fact that if the machine also believes in the induction axioms, then the machine can prove that IF she is consistent,
then she cannot prove that she is consistent:

~Bf -> ~B~Bf

or (if you see that ~Bf = Dt):

Dt -> ~BDt; or again Dt -> DBf.

Löb found the maximal generalization of that sentence (B(Bp -> p) -> Bp). With p = f, it should be easy to see that Löb generalizes Gödel (hint: in classical propositional logic ~p is equivalent to p -> f, so you just need
to take p = f in Löb's formula).
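
To spell the hint out (it is nothing more than the substitution p = f, plus the classical equivalence of (Bf -> f) and ~Bf):

Löb:              B(Bf -> f) -> Bf
but (Bf -> f) is classically equivalent to ~Bf, that is, to Dt, so:
                  B(Dt) -> Bf
contraposing:     ~Bf -> ~B(Dt), that is, Dt -> ~BDt

which is exactly the second incompleteness formula above.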

So a machine that holds no contradictory beliefs cannot prove to
another that it never contradicts itself... interesting.

Indeed. It is the key, plus the fact that the machine can prove that very fact, once she takes for granted some description of herself (like the one given by the doctor).





Another concern I have is that there seems to me a lot of imprecision
in the language used to correlate the consequences of the Löbian
machine with the folk-psychological terms we all use. For instance,
I've seen you refer to Bp in separate contexts as M's ability to prove
p, and as M "believing" proposition p.

It is "belief" as used in cognitive science and epistemology. Not the belief of religion. Although there are no differences, actually, but that is a very hot debate. It is weird because that use of belief is very common. It can
only shock people who believe religiously (pseudo-religiously) in the
propositions of science. But we always start from belief and get beliefs.

Here you are using 'belief' in a way that is counter-intuitive in the
ordinary sense of the word.

But this is weird. I really use "belief" as in belief theory. As in "do you believe that it will rain today?", or "do you believe that Obama is the president of the US?". It seems to me that I hear that use all the time when watching a movie. The religious notion of belief is used only in the religious context, but the word "belief" is much wider than that. A confusion comes from the fact that people believe (!) that science = knowledge. But science is only belief. The main difference with knowledge is that for knowledge we have Bp -> p (and B(Bp -> p)). For belief we don't have Bp -> p, or we don't have B(Bp -> p). For machines the situation is clear: G* proves Bp -> p (trivially, in the sense that we work on correct machines, like PA or ZF), but G does not prove it (the machine does not believe it, or does not prove it). The machine cannot know that she is correct. By Löb's theorem, the machine knows that only for the propositions she can actually prove. What is amazing, and is the core of Gödel's discovery, is that proving acts like believing, and not like knowing, for the correct machine. That makes the correct machine maximally humble and modest.




What you are saying suggests that "all
primes are odd" has the same epistemological status as "God does not
exist", or less controversially, "I am consistent". I hope we agree
that these are different kinds of beliefs, the primary distinction
involving provability. This is why invoking Bp in some contexts as
'provable' and in others as 'belief' is confusing.


It is the belief of the perfect (self-referentially correct) machine when talking about a third person presentation of itself. Of course it is the scientific, third person self-reference. The "itself" is the 3-I, or the body, or a description of the body.

Now, the epistemology is not in the proposition. So it makes no sense to argue about the nature of the three propositions, because their epistemological status will depend on the machine that you interview, or on the theory that you are using.

For example, if you take the theory PA + "God exists" (a ridiculous theory, just to make my point), then "all the primes are odd" and "God does not exist" have the same status (refutable). "I am consistent" is true, and neither provable nor refutable, in that theory. The epistemology is in the machine/theory, not in any proposition (I suspect you have some implicit theory in the background; you should not).

If you define, like me (and Plato), God as Truth, then the proposition "God does not exist" is no longer expressible in the language of (any) machine. Weakenings of it will be accessible in the form of a bet or a guess.





That is confusing precisely
because proof and belief are actually opposed in certain
human-psychological contexts, such as belief in god. This concern
extends to the language you invoke in your "discourse with Löbian
machines" which I feel takes a lot of liberties with
anthropomorphizing, and sneaks in a lot of folk-psychological
concepts. Giving you the benefit of the doubt, I understand that
evangelizing these ideas means being able to make non-technical
analogies in the interest of accessibility. But it is also possible
that in one context you mean Bp to mean "prove" and in another you
mean Bp to mean "believe", in semantically non-identical ways,

I try not to. You can feel that the theorems will apply to you and to any
machine which:
1) is a machine (obvious for the machine, and equivalent to comp for the
human),
2) believes in the elementary axioms of PA (so believes that x + 0 = x, etc.),
3) is arithmetically correct (this is the "simplifying" assumption of studying *that* class of machines, which is motivated by interviewing correct
machines to get the correct physical laws).

I think this gets to the core of my issues. I think we can agree that
humans that have never done any arithmetic in their lives are still
conscious (e.g. http://en.wikipedia.org/wiki/Pirah%C3%A3_people). So
(2) and (3) do not apply to humans.

Well, here I disagree. I worked with severely mentally disabled people for two years. They were unable to count, and most of them could not even talk. With the help of computers I was able to convince external observers that they were only handicapped, and that they were able to count, add and multiply, ... and to do induction. I don't think there exist humans for which "2)" and "3)" do not apply, even if the task of motivating them, and helping them to express themselves, can be insuperable. I am not at all convinced that the Piraha people escape "2)" and "3)". It seems clear that they are just not interested, as they are not interested in canoes, nor in anything capable of changing their life, and that is their right. But they are Löbian. Actually I tend to believe that octopuses and spiders, and all vertebrates, are Löbian. Löbianity concerns believability, not actual beliefs, still less the ability to use or express such beliefs. And also, Löbianity is needed only for self-consciousness; universality is enough for consciousness, I begin to think. More on this below.



and this lets
you "cover more ground" in making the leap to the aspects of
consciousness that we can analogize from. In other words, imprecise
language may allow you to claim a more comprehensive mapping from
Löbianity to psychology than is actually possible.

It might be the case, but I don't think so. You might try to find a specific
example.

OK, beyond the Bp confusion (if only in my head), another example is
making the leap from identifying a logical domain of propositions that
are true but not provable to our experience of qualia. While it is
certainly true that qualia can be considered true propositions (from
the machine's 1p) that are not communicable (provable in 3p), it is not
obviously true that all such incommunicable propositions represent
qualia. Yet the AUDA routinely makes these kinds of leaps.

The true and not provable sentences are given by G* minus G, and they do NOT represent the qualia. For the qualia, I am using the classical theory of Theaetetus, and its variants. So I define new logical operators, by Bp & p, Bp & Dt, Bp & Dt & p. The qualia appear with Bp & p (but amazingly enough those qualia are communicable, at least between Löbian entities). The usual qualia (red, yellow, pain, pleasure) appear in the non-communicable part of the logic with the operator defined by Bp & Dt & p. Bp makes it UD-accessible, Dt makes it belong to a "reality" (a model, a maximal extension of a computation), and "p" makes it true. The logic we get is close to the "quantum logic" of field perception (but work remains to assess this, and to evaluate such logics). Note that the motivation for such a classical knowledge theory in AUDA is given in the UDA. Note also that I interview computationalist machines (not just correct ones), and this is formalized by restricting the atomic arithmetical propositions to the sigma_1 sentences (having the shape ExP(x) with P decidable).





As humans, we are epistemologically bound to consider abstract
arguments such as AUDA in the context of our experience. It is too
easy for us, in other words, to make those leaps with you in a
non-critical way, because we are already leaping just to comprehend
the argument. This is why the lack of precision concerns me, because
intuitively I feel that those leaps need more scaffolding, so to
speak.

The scaffolding is given by the classical theory of knowledge, which the self-referentially correct machine is bound to find by itself when introspecting. It leads to 8 (natural) hypostases, although in reality it is 4 + 4*infinity (indeed, weakenings like "BBBp & DDt & p" play a role too, and seem to be necessary for some belief in some notion of space, but again this has been under development, well, sleepy development, for some time).





I see more evidence of imprecision in your willingness to describe
your salvia experiences as totally non-personal.

To be sure, I published all my work in 1988, except for the
discovery of the arithmetical quantum logic, which I published in the
nineties, and I discovered salvia in 2008.
The salvia experiences are personal experiences.
But they sometimes lead the experiencer to a total amnesia which makes it
feel like a non-personal experience.

OK, you are saying you (sometimes) have no memories of what happened?
They are completely inaccessible to you?

Not really. When lucky, I can have a good memory of what happened. When the memory comes back, I do remember that I was lacking my memories, retrospectively.



It sounds more like you are saying you have zero self-awareness.

You can say that. Zero self-awareness, or even zero self-consciousness, and yet: maximal awareness, or maximal consciousness. Memories and the self seem to make you less conscious (paradoxically).


If
that's the case, that does not mean that your (constructed) self is
gone, necessarily, only that you are not aware of it in the ongoing
experience.

OK.



Now, I have no
experience with salvia myself. However, the fact that such an experience is available to you afterwards tells me that some aspect of your self
is still present during the experience, regardless of how it feels.

Well, possibly so.



Contrast this with the experience of a baby, who actually has no
psychological self yet, or an extremely rudimentary one, and tell me
you are able to remember what it's like to be a baby.

Some experiences are described like that. You feel you are becoming a baby, or becoming what you were before birth, or before the big bang, or beyond. It is just a feeling, and it is reported as such by the experiencer. This is used for inspiration, or only for doubting some prejudices. I was willing to believe that consciousness and time were constructs of the
third hypostasis (Bp & p), but the salvia experience makes me feel that
consciousness is more primitive than time, indeed.

So long as one can remain skeptical about the results of such
inspirations, I think such voyages away from our ordinary
consciousness can be extremely valuable. We can never forget how easy
it is to delude ourselves about what we feel, sober or not.

Yes. Those interested in consciousness are lucky that something like salvia exists. It does not look toxic at all (it even seems beneficial), and it can lead to a short but quite interesting change of consciousness, which is repeatable, and with an experience which is shared by many people who are patient enough with the plant. As with a sharable experiment, you learn through it only by refuting or doubting previous prejudices.



Arguments
made from introspection are always suspect.

They are 100% useless in the scientific endeavor. But, like consciousness, they can be the object of the scientific endeavor when we tackle the mind-body problem. Here, a lot of people confuse those things. They understand that first person experiences are not scientific (third person communicable), and so they infer that we cannot talk *about* such experiences in any third person way. Of course that is not a valid deduction; it is a confusion of categories. In science we can talk about anything once we make our theory clear enough. The idea that science cannot *address* some question is obscurantism.





OK. I take the opportunity of the explanation above to explain what the
(Bp & p) stuff is, and to clarify why consciousness, or first person
self-apprehension, leads to a notion which is beyond words.

Gödel's incompleteness theorem asserts Dt -> ~BDt (consistent -> not
provably consistent). So Dt, that is ~Bf, is not provable. But ~Bf is
equivalent to Bf -> f. So, in general, Bp -> p is not provable. So in general Bp does not imply p, as a knowledge predicate or operator should. So it makes sense to define, like Theaetetus, Kp (the knowledge of p) by Bp & p (knowledge = true (justified) belief). Of course we have Kp -> p (trivially, given that Kp is Bp & p, and from a & b you can deduce b). Indeed Kp, defined in this way, does follow the usual axioms of knowledge (even
temporal knowledge) theories.
So you see that incompleteness justifies the working of the classical theory
of knowledge for the machines.

Even more interesting is that Bp & p leads to an operator which is not
definable in the language of the machine, and this explains a lot of
confusion in philosophy and theology, including why consciousness cannot be defined (only lived). The 1-I (captured by the Bp & p) has no name from the
point of view of the machine.
You might try to define it like (Bp & Tp), with Tp put for an arithmetical truth predicate. But such a predicate cannot exist. Indeed, if it existed, then you could find a k, by applying again Gödel's diagonalization lemma to ~T(n), such that PA would prove k <-> ~T(k), and from this you can prove that
PA is inconsistent. So already Truth is not definable by the machine
(although she can define many useful approximations). Similarly, it can be
proved that no notion of knowledge of a machine can be defined by the
machine. Classical (Theaetetical) knowledge is already like consciousness: we can't define it. But again, we can define the knowledge of machines simpler than us, derive their theology, and lift it to ourselves, in a betting way, at our own risk and peril. We do that when we say "yes" to the doctor: it *is*
a theological act, and people have the necessary right to say "no".

Now, we can study the Bp & p logic at the modal level, and so can the machines
too. This is a trick which makes it possible to bypass our own, or the
machine's, limitations.

The (Bp & p) hypostasis (the first person point of view) has many of the features of the "universal soul" of Plotinus (the Greek mystical inner God).
The machine lives it, but cannot give a name to it. It answers Ramana
Maharshi's koan "Who am I?". The Löbian machine's answer is "I don't know, but I can explain why I *cannot* know that, in case I (my third person 3-I, or
body) am a machine".

To get the logic of measure one in the UD multiplication, Bp & p is not enough; we need a weakening and a strengthening, which are given by Bp & Dt, and Bp &
Dt & p.

You might take a look at the Plotinus paper, but to be honest, it requires
familiarity with logic.

I can give it a shot, do you have a link?

It is on the front page of my URL. Click on the little "pdf" near the title of the Plotinus paper, or just click here:

http://iridia.ulb.ac.be/~marchal/publications/CiE2007/SIENA.pdf




My final concern, as I've tried to elaborate on previously, is your
willingness to posit consciousness as a property of a (virgin)
universal machine. For me this is pretty counter-intuitive

For me too. That is why I have already written 8 diaries from the salvia
experience. I see it, but can't believe it :)
It is very counter-intuitive. And I can't dismiss the experience as a mere hallucination, because it is the very existence of that hallucination which
is counter-intuitive.

Why is the existence of the hallucination counter-intuitive?

Because it is a hallucination of a de-hallucination. With most hallucinogens, you feel like you are dreaming or hallucinating, with some range of lucidity. With salvia you lose lucidity completely, and feel the experience as being more real than what you usually feel, and you feel like you are awakening from a hallucination (your life) and being, at last, really awake.

It is a hallucination that your life was a hallucination. With a high dose, you feel like your life is a vague dream and you forget it like we usually forget dreams. With a low dose, you keep the memory, but you get disconnected from it. You feel your life as a dream, but not even a personal dream; you can feel it as not belonging to you: you are someone else, not even related to anything you knew.

To be honest, the experience can have other very astonishing features, and not all of them are easy to reconcile with comp, although that might be possible (but then it is even more astonishing).

Another utterly counterintuitive aspect of the experience is that you can feel conscious, yet you don't feel time going on, and you can even forget what time (and space) are. Before salvia, I was linking consciousness and (subjective) time. I was thinking that all qualia (like seeing red) were embedded in a time-like sensation. Even now, I cannot imagine giving sense to any qualia without some subjectivity of time. With salvia, people can hallucinate that time disappears. You can be eternal for what happens to be, later, a short instant! It gives the mystical immortality apprehension, where immortality is not some hope in some afterlife, but the living of eternity ... in the past. You get the feeling that you know you are immortal, because you have lived it. That's paradoxical and counterintuitive in the extreme. Coming back from there, I am tempted to dismiss this as insanity (type Bf, as it most plausibly is), but if I do the experience again, it is (again) felt as the most obvious fact of life.

The existence of the hallucination is counter-intuitive because it seems to imply that our consciousness is static, and that time is a complex product of the brain activity (or of the existence of some number relations). I thought that consciousness needs the illusion of time, but salvia makes possible a hallucination which is out of time. How could we hallucinate that? I see only one solution: we are conscious even before we build our notion of time. Mathematically, with comp, this invites us to consider that consciousness begins with universality, even the static one "living" in Platonia.




(which is
saying something because I'm with you on the UDA!).

Wow. I am very glad to hear that.

I had already come to an intuitive sense of the UDA before I
encountered your arguments, so I had already experienced that
"metaphysical vertigo" you warn about :-]  Then to see that you had
actually mathematically formalized that intuition, I was pretty blown
away by that.  The AUDA arguments are all new to me and that is what
I'm struggling with.


It means my
computer is conscious in some form, regardless of (or in spite of) the
program it is running. And that for me leads to a notion of
consciousness that is extremely weak. It is why I compared it to
panpsychism previously, because panpsychism similarly attributes
consciousness to aspects of reality (assuming MAT) that lead to an
extremely weak form of consciousness that deprives it of any
explanatory potential. In your case at least it is possible in
principle to explain what it is about a universal machine that gives
rise to consciousness (and that, without any recourse to Löbianity or
anything beyond universality).

When I read salvia reports, I was quite skeptical. I don't like the idea that the non-Löbian machine is already conscious. But then, the math is OK. Such machines lack only the ability to reflect on the fact. They believe t, Bt, BBt, BBBt, etc., but they cannot believe Bp -> BBp. So they have a far simpler notion of themselves, and they lack the full self-introspective self-awareness of the machines having the induction axioms. Note also that although non-Löbian universal machines are in principle very simple, they are
still far from trivial.

When you talk about the consciousness of the universal machine, you
require that it be dovetailing, in order for it to believe t, Bt, etc.,
correct?

Not at all. That is again a consequence of my ambiguous use of "that" above. The universal dovetailer is not a universal machine, and usually universal machines do not dovetail. "Bp is true" means "the machine justifies or believes p", and "Bp is asserted by the machine" means that the machine justifies or believes that "the machine justifies or believes p".




A virgin universal machine represents pure potential, and
attributing consciousness to pure potential is no different from
saying (in MAT) that all matter is conscious.

The assertion that matter is conscious does not make sense, for me. Only a machine, or a person carried by that machine, can be said to be conscious. Some years ago, I would have said that you need Löbianity to have a person, but now I think that the universal machine can be conscious, and so I have to enlarge my notion of person. That is not too hard, because a universal machine is not a completely trivial machine. Sure, just addition and multiplication give rise to universality, but the whole point of Gödel & Co. is that addition and multiplication are only apparently trivial. In fact they are not trivial at all. Number theorists intuit this from their working familiarity with numbers (like the quasi-random primes), but it is hard work for a logician, or his students, to show that addition + multiplication are Turing universal. And yes, it attaches consciousness to a potential. As I said, this is counterintuitive, mainly because that consciousness is necessarily out of time, space, or anything physical.






<snip>
In my view of things,
bacteria and viruses are not conscious because they lack a nervous
system that would satisfy the cybernetic organization I have in mind.
I am interested in your proof they are universal, btw.

We agree, I think. All universal machines have a sophisticated cybernetic organization, though it is sometimes hidden in a subtle apparent simplicity. Bacteria have very complex series of regulator genes, which make it possible to
program them for addition and multiplication (or simpler, but still
universal, tasks). Viruses too, at least in combination with their hosts. I think also that eukaryotic cells are already the result of a little
bacterial colony, so that we are swarms of bacteria, somehow.
The cybernetic organization does not need neurons; it can use genes and
"meta-genes" (genes regulating the action of other genes). In fact a
bacterium like E. coli is an incredibly complex structure, with very subtle
self-regulating actions.

I see, yes, and actually I want to say I remember hearing about
research that involved programming bacteria for some task, but I could
be wrong.

What has become so appealing about cybernetics to me is that it tries
to characterize systems in terms of information flows, which may be
implemented in any kind of substrate (or none at all, as in platonia!)

Well, with UDA you should be able to see that the substrate can't help. Introducing a substrate can only hinder the search for a solution to the MB problem.




I'm also
wondering if you have an English-language explanation of the MGA... I
recall seeing one a long time ago.

Try with this:
http://old.nabble.com/MGA-1-td20566948.html

Let me know if you have a problem.

Thanks, that is a very effective argument. The one thing I didn't
understand very well was Maudlin's argument... is there a meaty
summary of that argument somewhere?  I don't get how the
counterfactuals can be dealt with by such minimal additions to the
machine.


On an unrelated (to this thread) topic, I have a question about 1p
indeterminacy. You say the universe as we experience it is a sum on
the computational histories of an infinity of programs running on the
UD.

Yes. This is the UDA conclusion.



And that what makes the universe consistently communicable from
one person to another is the "gluing properties" of such histories.
Can you explain "gluing properties"? Is there a mathematical
formalization of that concept?

Well, not yet really. I leave this for the next generation :)
As I explained to Stephen, to formalize it in the AUDA, you need to define a tensor product in the matter hypostases, and for this you need some sophisticated semantics for the Z and X logics. Progress has been made, but it leads toward difficult mathematical questions. Actually this is a problem even for quantum mechanicians, and solutions already exist in the frame of some logics (by Girard, but also Kauffman (in knot theory!) and Abramsky, linking knot theory and quantum statistics), but it would still be a kind of treachery to use them directly without extracting them from the self-reference logics (which would threaten the theory of qualia, which needs to extract quanta and qualia simultaneously from self-reference).

So, I use "glueing" in the intuitive sense that you can extract from the UDA. Basically two dreams (computations seen from inside, that is from first person points of view/hypostases) by different subjects will glue if there is a reality (or just locally: a larger computation) generating those two computations, in some "natural way". The usual instinctive root of gluing dreams, is the idea that there is a common geometrical reality. But that simple idea is not available in the UD, or in arithmetic, given that there are infinitely many computations, and no primitive geometry at all. Technically it means that we have to extract a notion of resource (linearity), and of tensor product (interaction). The logic of the material hypostases are very promising for doing that (or at least they show that the impossibility of this is hard to prove, and this shows that the white rabbits might be hunted away in the comp frame). We can come back on this, I have to go now.

Bruno


http://iridia.ulb.ac.be/~marchal/


