On Wed, May 3, 2017 at 4:51 PM, Bruno Marchal <[email protected]> wrote:
>
> On 03 May 2017, at 15:21, Telmo Menezes wrote:
>
>>>>> I think that mechanism gives most of what we can hope for in an
>>>>> explanation
>>>>> of what consciousness is.
>>>>>
>>>>> A number e can refer to itself and develop true beliefs about itself,
>>>>> including some guesses about its relative consistency.
>>>>
>>>>
>>>>
>>>> I can understand self-referentiality, and at the same time that there
>>>> is "something" to it that is profound but not fully graspable -- as
>>>> Hofstadter talks about with his "strange loops".
>>>>
>>>>> Then the theory explains
>>>>> why any Gödel-Löbian machine can access the truth that such beliefs
>>>>> can be
>>>>> correctly related to the truth (though that is not seen by the machine,
>>>>> only by god/truth), but only in a non-communicable way. So the machine
>>>>> knows truths that she is unable to justify, and that can only seem
>>>>> mysterious.
>>>>
>>>>
>>>>
>>>> I am ok with this.
>>>
>>>
>>>
>>> Then it is stranger to me that you are not convinced by the machine's
>>> explanation of consciousness.
>>
>>
>> But you conclude yourself that the machine knows truths that it is
>> unable to justify, and that can only seem mysterious. Isn't the fact
>> that I find consciousness mysterious exactly what you would expect?
>
>
> Yes. But now we are in the Gödelian trap. Interpret (momentarily)
> "consciousness" by "consistency" (<>t), and "justify x" by "prove x" ([]x).
>
> We seem to agree with ~[]<>t  (we cannot
> justify/explain/rationally-believe-in consciousness, so there is a mystery)
>
> Then the machine explanation comes: <>t -> ~[]<>t (the machine proves that
> if she is consistent then she cannot prove it), and similarly, my
> explanation of the "mystery" is that if we are conscious we can understand
> that we cannot justify it.

Ok, I have no problem with any of this.
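For readers less at home with the notation: Bruno's <>t -> ~[]<>t is a theorem of GL, the provability logic, which is sound and complete for finite transitive irreflexive Kripke frames. A brute-force sanity check (my own illustration, not part of the thread) verifying the formula on every such frame with up to four worlds:

```python
from itertools import product

def gl_frames(n):
    """All transitive, irreflexive relations on worlds 0..n-1
    (GL, the Godel-Lob provability logic, is complete for these)."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        transitive = all((a, d) in R
                         for (a, b) in R for (c, d) in R if b == c)
        if transitive:
            yield R

def diamond_t(R, w):
    """<>t holds at w iff w can see some world."""
    return any(u == w for (u, _) in R)

def box_diamond_t(R, w, n):
    """[]<>t holds at w iff every world that w sees can itself see one."""
    return all(diamond_t(R, v) for v in range(n) if (w, v) in R)

def check(max_worlds=4):
    """Verify  <>t -> ~[]<>t  at every world of every GL frame tried."""
    for n in range(1, max_worlds + 1):
        for R in gl_frames(n):
            for w in range(n):
                if diamond_t(R, w) and box_diamond_t(R, w, n):
                    return False
    return True

print(check())  # True: every chain of "seen" worlds bottoms out
```

The reason it can never fail is that transitivity plus irreflexivity on a finite frame forbids infinite chains, so any world with a successor has a dead-end successor where <>t fails.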

> So there is a mystery (a non-justifiable truth), but within the frame of
> the mechanist hypothesis, we can explain why there is necessarily a
> mystery. The mystery remains "lived" from the first person perspective,
> but we can understand, even in the 3p view, that if mechanism is true then
> it is expected that we feel it as mysterious. In the end, it is no more
> mysterious than our belief in numbers.

But our belief in numbers is pretty mysterious, no?

>
>
>
>
>
>>
>>>
>>>>
>>>>> That does not explain the whole of consciousness, but it reduces its
>>>>> mystery to the mystery of our belief in anything Turing universal,
>>>>> like the numbers.
>>>>>
>>>>> But then again, the numbers explain, by themselves, why if you believe
>>>>> in anything less than them, you cannot get them, and so they justify
>>>>> their own mysterious character. We don't know, and it is the fate of
>>>>> any machine not to know that.
>>>>>
>>>>> Don't mind too much. I am not sure if what you miss is a part of
>>>>> mathematical logic, or something about consciousness.
>>>>
>>>>
>>>>
>>>> Again, I am convinced by your explanation of why the mystery exists.
>>>
>>>
>>>
>>> The mystery is our understanding of, or belief in, elementary
>>> arithmetic (in apparently finite time).
>>>
>>>
>>>
>>>
>>>> For me, the hard problem remains: you talk about mathematical
>>>> constructs.
>>>
>>>
>>>
>>> Only half of the time, unless you put mathematical truth in the
>>> mathematical construct, something typically impossible to do, except
>>> for some approximation, for theories much simpler than ourselves. I
>>> guess you know the difference between the true fact that 2+2=4, and the
>>> much weaker fact that some machine or theory believes or proves that
>>> 2+2=4. In fact the term "mathematical construct" is a bit ambiguous.
>>> The semantics is in general not a construct, when we do mathematics,
>>> but partial semantics can be associated with mathematical constructs,
>>> when we do metamathematics (mathematical logic); this is due to the
>>> fact that we approximate meaning by "mathematical constructs" (which
>>> are most often infinite and non-computable mathematical objects).
>>>
>>>
>>>
>>>
>>>> Physicalists talk about emergence from complex
>>>> interactions of matter. I remain baffled and ask you the same question
>>>> that I ask physicalists: what is the first principle from where
>>>> consciousness arises?
>>>
>>>
>>>
>>> Truth. That cannot be a mathematical construct (provably so if
>>> computationalism is true). It is not 3p definable.
>>
>>
>> Aren't you just renaming the mystery?
>
>
> Truth is much more general than consciousness. All logicians and
> mathematicians believe in (arithmetical) truth, but few would even dare to
> use a term like consciousness. Truth has been very well treated by Tarski,
> and I use that (sometimes implicitly, sometimes more explicitly). It is a
> key notion, which is on the side of semantics, or model theory, unlike
> provability and consistency, which admit arithmetical definitions.
> Arithmetical truth can still be treated mathematically, at the meta-level,
> using set theory or second-order logic. Consciousness/knowledge remains
> more problematic, and mixes syntax and semantics, as with the Theaetetus
> definition []p & p. We cannot define this in arithmetic. We would need
> []p & true(p), but if true(p) were definable, we would be able to get a
> sentence k provably equivalent to its own falsity (PA would prove
> k <-> ~true(k), leading to inconsistency).

Ok.
The salient thing for me here is that you tend to place consciousness
and knowledge as very close concepts. While I agree that consciousness
is a form of knowledge, for me the big mystery is closer to asking
"who knows?" or "what knows?".
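The parenthetical diagonal argument above is Tarski's undefinability theorem; spelled out slightly (a standard sketch, nothing beyond the textbook version), it runs:

```latex
\begin{align*}
  &\text{Suppose } \mathrm{True}(x) \text{ defined truth:}\quad
    \vdash p \leftrightarrow \mathrm{True}(\ulcorner p \urcorner)
    \quad\text{for every sentence } p.\\
  &\text{The diagonal lemma yields a sentence } k \text{ with}\quad
    \vdash k \leftrightarrow \lnot\mathrm{True}(\ulcorner k \urcorner).\\
  &\text{Instantiating the first line at } p := k \text{ gives}\quad
    \vdash k \leftrightarrow \lnot k,\ \text{a contradiction.}
\end{align*}
```

So no arithmetical formula can play the role of true(p), and "[]p & p" exists only as a scheme, one sentence p at a time, at the meta-level.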

>>
>>> The whole key is in the theorem that ([]p & p) does not admit a
>>> predicate definable by any machine for which "[]p & p" is (meta)defined.
>>
>>
>> I understood this as a key to understanding why certain things cannot
>> be known, but not to knowing them...
>
>
> Well, "[]p & p" obeys the logic of knowledge. So PA knows all its
> theorems, for example. The difficulty with consciousness is that it is
> close to consistency (<>t) in the 3p view, and close to the triviality
> (<>t v t) in the 1p view. That makes consciousness "obvious" for the
> machine, and unprovable, once the machine approximates it by any third
> person notion.

Ok.
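One small, checkable piece of why "[]p & p" obeys the logic of knowledge while []p alone does not: the T axiom holds for the conjunction for purely propositional reasons. Treating the box as an opaque atom b (a deliberate simplification of mine, ignoring all modal structure), ([]p & p) -> p is a tautology, while []p -> p is not:

```python
from itertools import product

def tautology(f):
    """True iff the Boolean function f(b, p) holds under all valuations,
    where b stands in for the opaque atom "[]p"."""
    return all(f(b, p) for b, p in product([False, True], repeat=2))

# T for the knower, ([]p & p) -> p: already a propositional tautology,
# so the machine gets it for free once the "& p" is conjoined.
knower_T = lambda b, p: not (b and p) or p

# T for the prover, []p -> p: refuted by b=True, p=False -- a loose
# analogue of the fact that the machine cannot prove it for itself.
prover_T = lambda b, p: not b or p

print(tautology(knower_T), tautology(prover_T))  # True False
```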

>
>
>
>>
>>>
>>>>
>>>> I confess I have a hard time formulating the question correctly. I
>>>> feel that what I am trying to ask is so fundamentally simple that it
>>>> becomes hard to write the real question.
>>>
>>>
>>>
>>> That is common when we dig into notions like truth and consciousness.
>>> Those notions are all too obvious from the 1p view, and almost
>>> unintelligible in the 3p view, which explains why materialists want to
>>> eliminate them.
>>
>>
>> Ok.
>> Yes, I have come across this over and over. Materialists want to
>> eliminate the question because they can sense that it is subversive to
>> their belief system.
>
>
> Plausibly so.
>
>
>
>
>
>>
>>>>
>>>>> The core of the explanation is in
>>>>> the G/G* separation, and its inheritance by the intelligible and
>>>>> sensible matter. We might come back to this some day or another. I am
>>>>> of course very interested in trying to see what you miss here. The
>>>>> explanation is like the cow koan: the head of the cow goes through
>>>>> the window, like the legs and the trunk, but not the tail. That will
>>>>> also play a role in the fact that computationalism is a theology: the
>>>>> soul of the machine cannot rationally understand that she will be
>>>>> resurrected. That is the fun of it: the soul of the machine says "no"
>>>>> to the doctor, until some leap of faith in some situation.
>>>>
>>>>
>>>>
>>>> This is harder for me to follow, but I think I follow you on the
>>>> "barriers to knowledge".
>>>>
>>>> I definitely don't understand the cow koan!
>>>
>>>
>>>
>>> The idea is that about truth and consciousness we can explain
>>> everything, except for a tiny detail. But with computationalism, we can
>>> explain why there should remain a tiny detail which has to be NOT
>>> explainable from the machine's pov.
>>
>>
>> Ok, thanks! It's a nice koan :)
>>
>>>
>>>
>>>>
>>>>> Maybe I will just ask you this. 1) Do you agree that consciousness is a
>>>>> form
>>>>> of knowledge?
>>>>
>>>>
>>>>
>>>> I'm not sure. I think that I know that I am conscious, but that
>>>> consciousness itself is unlike anything else that I can talk about.
>>>>
>>>> I am inclined to think that consciousness = existence. Perhaps it's
>>>> such a simple and fundamental thing that it becomes almost impossible
>>>> to talk about it.
>>>
>>>
>>>
>>> Consciousness is the 1p feeling that there is something real. I am not
>>> sure why consciousness would be existence. There are things which exist
>>> and are not conscious.
>>
>>
>> What I mean is more along the lines of: are there things that exist
>> outside of the content of someone's consciousness?
>> Call it collective solipsism, maybe...
>
>
> With mechanism, we need to believe that phi_i(j) is defined or is not
> defined independently of us, for any choice of an acceptable enumeration
> of the partial recursive functions. We need to be platonist/realist on
> the sigma_1 propositions (and the pi_1, their negations). We need to
> believe in the consequences of Robinson Arithmetic (at least).
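The sigma_1 character of "phi_i(j) is defined" can be made concrete: it says "there exists an n such that computation i on input j halts within n steps", where everything after the quantifier is mechanically decidable. A toy step-bounded interpreter (the mini-machine here is my own illustration, not anything from the thread):

```python
def phi(program, x, max_steps):
    """Step-bounded run of a toy one-register machine. Instructions:
         ("inc",)    r += 1
         ("dec",)    r -= 1
         ("jnz", k)  jump to instruction k if r != 0
    Returns ("halted", r) if the run finishes within max_steps steps,
    else ("running",). Each bounded check is decidable; "phi_i(j) is
    defined" is the sigma_1 claim that SOME bound makes it succeed."""
    r, pc = x, 0
    for _ in range(max_steps):
        if pc >= len(program):          # fell off the end: halt
            return ("halted", r)
        op = program[pc]
        if op[0] == "inc":
            r, pc = r + 1, pc + 1
        elif op[0] == "dec":
            r, pc = r - 1, pc + 1
        else:                           # ("jnz", k)
            pc = op[1] if r != 0 else pc + 1
    return ("running",)

countdown = [("dec",), ("jnz", 0)]      # halts: counts x down to 0
forever = [("inc",), ("jnz", 0)]        # never halts: r only grows

print(phi(countdown, 3, 100))   # ('halted', 0)
print(phi(forever, 0, 100))     # ('running',)
```

The realism Bruno asks for is that the full, unbounded question has a definite answer even when no bound we try settles it.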
>
>
>
>>
>> But the reason why I wrote it has to do with the radical simplicity
>> that I attribute to consciousness. It exists, and I don't know what
>> else I can say about it (from my 1p experience).
>
>
> Yes, here you talk like the subject described by S4Grz. It is basically
> <>t v t. It is "there is a reality of (0 = 0)". But the machine cannot
> equate her consciousness with this, unless she buys mechanism (against
> her first person instinct) and tries to make a theory. In that case you
> add something highly non-trivial to your consciousness, which is the bet
> that it is sustained by some relative digital processing, and maintained
> through the doctor's brain transplant.

Is it fair to call comp a theory? Do you think it can be falsified?
I believe we talked about this before; sorry if that is the case.

>>
>>> Consciousness is more what we need to give meaning to words like
>>> "meaning". It is on the semantical side, like truth.
>>>
>>> Do you agree that consciousness is undoubtable and unjustifiable? I
>>> cannot doubt consciousness because doubt requires consciousness, and I
>>> cannot justify consciousness (cf. the conceptual possibility of
>>> philosophical zombies).
>>
>>
>> I agree that it is undoubtable. In fact, I think that it is the only
>> thing I can think of that is undoubtable.
>
>
> That is very nice, as it would make its undoubtability useful for an
> axiomatic definition of consciousness. It would be defined by what is
> true and undoubtable.

Ok, nice.

>> I don't know if I agree that it is unjustifiable. I feel you are using
>> "unjustifiable" as a technical term that perhaps I don't fully
>> understand.
>
>
> I mean that you cannot rationally convince another conscious entity that
> you are conscious. You will need poetry, music, art, etc. There is no 3p
> discourse, nor 3p test, which would prove that some entity is conscious.
> Of course, we can gather evidence for betting that some entity is
> conscious, and we have some instinctive empathy, it seems, for our human
> fellows.

Ok, then I agree.

>
>
>
>
>
>>
>>>
>>>
>>>
>>>
>>>>
>>>>> 2) that knowable obeys the S4 axioms?
>>>>>
>>>>> S4 =
>>>>>
>>>>> [](A->B) -> ([]A -> []B)  K
>>>>> []A->A                            T
>>>>> []A -> [][]A                      4
>>>>>
>>>>> Then incompleteness explains why this works with "[]" payed by
>>>>> provability,
>>>>
>>>>
>>>>
>>>> I don't understand this sentence, what do you mean by "payed by
>>>> provability"?
>>>
>>>
>>>
>>> I meant "played by provability". Sorry for the typo. It means that "[]"
>>> stands for Gödel's provability predicate. It is Gödel's incompleteness
>>> which makes the box behave like a belief, and unlike knowledge.
>>
>>
>> Ok.
>>
>>> Imagine that incompleteness had been false. Then we would have "[]p
>>> <-> []p & p" not only true, but provable by the machine, and the logic
>>> of the 3p self would have been the same as the logic of the 1p self,
>>> making it impossible to associate with a machine different notions for
>>> its 1p and 3p points of view.
>>
>>
>> Ok.
>>
>>> It is because []p -> p is NOT provable by the machine that the logics
>>> of []p and []p & p differ. Without incompleteness the 8 hypostases
>>> would collapse.
>>
>>
>> Ok.
>>
>>>
>>>
>>>
>>>>
>>>>> and gives a temporal, non-nameable subject, which cannot identify
>>>>> itself with
>>>>> any third person notion. Looks like my poor soul to me :)
>>>>
>>>>
>>>>
>>>> I agree that what I call "consciousness" is something that cannot
>>>> identify itself with third person notions. This is what leads me to
>>>> suspect that it is not something that can be studied scientifically.
>>>
>>>
>>>
>>> When doing science, we cannot invoke first person notions (or god, or
>>> truth), but there is no reason why we cannot make a 3p theory *on*
>>> those notions, and do the 3p reasoning and the 3p experimental
>>> verification for the 3p-sharable part of the theory.
>>>
>>> That is exactly the case with computationalism. We cannot define
>>> consciousness, but we know pretty well what it means for each of us,
>>> and we can make hypotheses about it (like "yes doctor"), and study the
>>> consequences.
>>
>>
>> Ok, this is how you convinced me that computationalism and physicalism
>> are incompatible.
>
>
> OK.
>
>
>
>>
>>> Similarly, Peano arithmetic cannot define arithmetical truth, and
>>> cannot define knowledge, but it can simulate truth by the asserted p,
>>> conjoining each arithmetical sentence to its boxed presentation, and so
>>> even PA can see that it obeys S4, which is usually a good axiomatic
>>> system for knowledge. And that explains why, if the machine tries the
>>> Maharshi koan "Who am I", she might get the ineffable point. In fact,
>>> if she succeeds in remaining correct all along the introspection, she
>>> cannot avoid the "ineffable answer".
>>
>>
>> I have to think about this, but I believe I see your point.
>>
>> Brent argues that AI will dissolve the hard question. I think that
>> people know intuitively that it will not. This is what pop-culture
>> works such as "Blade Runner" are about.
>
>
> Like Minsky or McCarthy, I think that not only will the problem not
> dissolve, but the machines will work on it, and have a hard time with it.
> Of course, we can dissolve the problem by burning the machines alive when
> they contradict the government theory, which is the usual human method.

Yes, this is more or less what happens in Blade Runner, except that
they shoot the machines.

>
>
>
>
>>
>>> I think that what you need to keep in mind, and understand, is that
>>> despite
>>> its simple 3p meta-aspect, "[]p & p" refers to something which does not
>>> admit any 3p explicit definition. That is also the reason why I insist so
>>> much that "[]p & p" is a theological notion, and why saying "yes" to a
>>> doctor is a theological act of faith. The machine is simply unable to
>>> prove
>>> for each p that []p and  []p & p are equivalent. Only its own G* knows
>>> that.
>>
>>
>> I feel that, in the end, this is all I was saying. That you don't have
>> a 3p definition of consciousness, although you might be able to show
>> why such a definition is not possible.
>
>
> OK. Then the point is in the consequence of what I call the third theorem of
> Gödel, which is that PA (or Gödel's Principia Mathematica) can prove its own
> second-incompleteness theorem. Gödel announced it in his 1931 paper, but it
> was proved by Hilbert and Bernays later, and embellished and generalized by
> Löb.
>
> It might be useful to have in mind the four theorems of Gödel
>
> 0) 1930: the completeness theorem for predicate calculus. A first order
> theory is consistent iff it has a model (equivalently, a first order
> theory proves a formula A iff A is true in all models of that theory).
>
> 1) 1931: first incompleteness theorem: if T is a "rich enough" theory,
> there will be an undecidable sentence (yet true in the standard model of
> the theory).
>
> 2) 1931: second incompleteness theorem: if T is rich enough and
> consistent, T cannot prove its own consistency.
>
> 3) The Gödel-Hilbert-Bernays-Löb theorem: if T is rich enough, T proves <>t
> -> ~[]<>t  (the formalisation of the second theorem made by or in T).
>
> The "machine theory on machine consciousness" uses them all, and others,
> but the most important one is "3)". It can be used quickly to refute the
> Lucas/Penrose type of argument against Mechanism based on Gödel's
> incompleteness. It shows that the machine PA found Gödel's incompleteness
> before Gödel, so to speak.

The only thing that confuses me here is the direct use of t.

I can follow:
<>p -> ~[]<>p

Intuitively this makes sense if I say, for example, that p means "I am sane".

I understand things like t -> f in the context of reductio ad
absurdum, but the direct use of t in the expression above still
baffles me.
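(For reference, the standard unfolding of the bare constant, as found in any provability-logic text: t is the propositional constant "true" and f the constant "false", so <>t abbreviates ~[]f, and under the arithmetical reading of the box this is exactly consistency:)

```latex
\Diamond t \;\equiv\; \lnot\Box\lnot t \;\equiv\; \lnot\Box f
\;\equiv\; \lnot\mathrm{Prov}_T(\ulcorner 0=1 \urcorner)
\;\equiv\; \mathrm{Con}(T)
```

(On that reading, <>t -> ~[]<>t is just the p := t instance of the schema <>p -> ~[]<>p above: "I am consistent" plays the role of "I am sane".)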

Telmo.

> Bruno
>
>
>>
>> Telmo.
>>
>>> Bruno
>>>
>>>
>>>
>>>
>>>
>>>
>>>>
>>>> Telmo.
>>>>
>>>>>
>>>>> Bruno
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> What I don't like about your position is this: just because science
>>>>>> cannot address (or has not so far been able to address) a mystery
>>>>>> doesn't mean that this mystery becomes irrelevant or that we can
>>>>>> pretend it doesn't exist -- or worse, that we should pretend that we
>>>>>> have a viable theory when we don't. This is essentially what makes me
>>>>>> agnostic instead of an atheist: I recognise that the big mystery is
>>>>>> there. Labelling people who recognise that the mystery is there as
>>>>>> lunatics does not serve intellectual rigor.
>>>>>>
>>>>>>>>
>>>>>>>> Then we can talk about evidence.
>>>>>>>>
>>>>>>>>> Second. An argument from authority is not necessarily a reason to
>>>>>>>>> reject
>>>>>>>>> that argument. Because life is short and we cannot be experts in
>>>>>>>>> absolutely
>>>>>>>>> everything, we frequently have to rely on authorities -- people who
>>>>>>>>> are
>>>>>>>>> recognized experts in the relevant field. I am confident that when
>>>>>>>>> I
>>>>>>>>> drive
>>>>>>>>> across this bridge it will not collapse under the weight of my car
>>>>>>>>> because I
>>>>>>>>> trust the expertise of the engineers who designed and constructed
>>>>>>>>> the
>>>>>>>>> bridge. In other words, I rely on the  relevant authorities for my
>>>>>>>>> conclusion that this bridge is safe. An argument from authority is
>>>>>>>>> unsound
>>>>>>>>> only if the quoted authorities are themselves not reliable -- they
>>>>>>>>> are
>>>>>>>>> not
>>>>>>>>> experts in the relevant field, and/or their supposed qualifications
>>>>>>>>> are
>>>>>>>>> bogus. There are many examples of this -- like relying on President
>>>>>>>>> Trump's
>>>>>>>>> assessment of anthropogenic global warming, etc, etc.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I agree that arguments from authority are necessary to save time,
>>>>>>>> but
>>>>>>>> in the context of a debate about a mystery of nature for which no
>>>>>>>> strong and widely-accepted scientific theories exist, it is
>>>>>>>> nonsensical to invoke authority.
>>>>>>>>
>>>>>>>> Also, this is not a place where people come to have their car
>>>>>>>> repaired, or their doctor appointment. This is a discussion forum
>>>>>>>> about the unsolved deep mysteries of reality.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Which is exactly the point. Because their mechanic can repair their
>>>>>>> car they suppose we have explained cars -- but we have only found
>>>>>>> the Lagrangian that describes them. When we can write the programs
>>>>>>> that produce "conscious" behavior of whatever kind we choose,
>>>>>>> cheerful, autistic, morose, lustful, humorous,..., then most people
>>>>>>> will think we have explained consciousness. Mystics will still
>>>>>>> claim there's a "hard problem".
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> This feels like thought policing. Of course the mystery is still
>>>>>> there, and it's huge! Why am I conscious? I can't think of a more
>>>>>> compelling mystery. Why is it so hard to say: "I don't know"?
>>>>>>
>>>>>> Congrats on your daughter's wedding!
>>>>>>
>>>>>> Telmo.
>>>>>>
>>>>>>> Brent
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> You received this message because you are subscribed to the Google
>>>>>>> Groups
>>>>>>> "Everything List" group.
>>>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>>>> send
>>>>>>> an
>>>>>>> email to [email protected].
>>>>>>> To post to this group, send email to
>>>>>>> [email protected].
>>>>>>> Visit this group at https://groups.google.com/group/everything-list.
>>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> http://iridia.ulb.ac.be/~marchal/
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
