On 5 January 2018 at 21:51, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 1/5/2018 6:48 AM, David Nyman wrote:
>
> On 5 January 2018 at 14:06, Jason Resch <jasonre...@gmail.com> wrote:
>
>>
>>
>> On Friday, January 5, 2018, David Nyman <david.ny...@gmail.com> wrote:
>>
>>>
>>>
>>> On 5 Jan 2018 03:22, "Bruce Kellett" <bhkell...@optusnet.com.au> wrote:
>>>
>>> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
>>>
>>>> On Jan 4, 2018, at 12:50 PM, Bruce Kellett <bhkell...@optusnet.com.au>
>>>>> wrote:
>>>>>
>>>>> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>>>>>
>>>>>> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>>>>>>
>>>>>>> On 29/12/2017 10:14 am, Russell Standish wrote:
>>>>>>>
>>>>>>>> This is computationalism - the idea that our human consciousness
>>>>>>>> _is_
>>>>>>>> a computation (and nothing but a computation).
>>>>>>>>
>>>>>>> What distinguishes a conscious computation within the class of all
>>>>>>> computations? After all, not all computations are conscious.
>>>>>>>
>>>>>> Universality seems enough.
>>>>>>
>>>>> What is a universal computation? From what you say below, universality
>>>>> appears to be a property of a machine, not of a computation.
>>>>>
>>>> OK, universality is an attribute of a machine, relative to some
>>>> universal machinery, like arithmetic or physics.
>>>>
>>>>
>>>>>> But universality alone gives rise only to a highly non-standard,
>>>>>> dissociative form of consciousness. It might correspond to the cosmic
>>>>>> consciousness alluded to by people living in highly altered states of
>>>>>> consciousness.
>>>>>>
>>>>>> You need Löbianity to get *self-consciousness*, or reflexive
>>>>>> consciousness. A machine is Löbian when its universality is knowable by 
>>>>>> it.
>>>>>> Equivalently, when the machine is universal and can prove its own "Löb's
>>>>>> formula". []([]p -> p) -> []p. Note that the second incompleteness 
>>>>>> theorem
>>>>>> is the particular case with p = f (f = "0=1").
>>>>>>
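(For anyone rusty on the notation, here is how I read Bruno's remark; this
is my own gloss, taking f to be the constant false, so correct me if I have
garbled it:

  Löb's formula:            []([]p -> p) -> []p
  Instantiating p = f:      []([]f -> f) -> []f
  Since []f -> f is equivalent to ~[]f, i.e. the machine's consistency (Con):
                            [](Con) -> []f
  Contrapositive:           ~[]f -> ~[](Con)

In words: if a Löbian machine proves its own consistency, it is in fact
inconsistent; equivalently, a consistent machine cannot prove its own
consistency, which is the formalized second incompleteness theorem.)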
>>>>> Löbianity is a property of the machine, not of the computation.
>>>>>
>>>> Yes. The same. I was talking about machines, or about the person
>>>> supported by those machines. No machine (conceived as a code, number, or
>>>> physical object) can ever be conscious or think. It is always a more
>>>> abstract notion, implemented through some machinery, which does the thinking.
>>>>
>>>> Similarly, a computation cannot be conscious, but it can support a
>>>> person, and it is the person who genuinely has the thinking or conscious
>>>> attribute.
>>>>
>>>
>>> The original suggestion by Russell was that "our human consciousness
>>> _is_ a computation (and nothing but a computation)."
>>>
>>> You seem to be steering away from Russell's straightforward position. If
>>> human consciousness is a computation, then the computation is conscious (it
>>> is an identity thesis). You say that the computation cannot be conscious,
>>> but can support a person. It is difficult to see this as anything other
>>> than the introduction of a dualistic element: the computation supports a
>>> conscious person, but is not itself conscious? So wherein does
>>> consciousness exist? You are introducing some unspecified magic into the
>>> equation. And what distinguishes those computations that can support a
>>> person from those that cannot?
>>>
>>>
>>> Let me see if I can attempt some sort of answer, Bruce. The utility of
>>> the notion of the 'universal' mechanism is precisely its ability to emulate
>>> all other finitely computable mechanisms. But not all such mechanisms can
>>> be associated with persons. What distinguishes this particular class, as
>>> exemplified by Bruno's modal logic toy models, is, in the first instance,
>>> self-referentiality. This first-personal characteristic is, if you like,
>>> the fixed point on which every other feature is centred. Then, with respect
>>> to this fixed point of view, a distinction can be made between what is
>>> 'believed' (essentially, provably true as a theorem and hence communicable)
>>> by the machine and what is true *about* the machine (essentially, not
>>> provable as a theorem and hence not communicable, but nonetheless
>>> 'epistemically' true).
>>>
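(Again a gloss of my own, using the names Bruno standardly gives these
logics; treat the details as a sketch rather than gospel. What the machine
can prove, its 'beliefs', is captured by the modal logic G; what is true
*about* the machine is captured by G*. For a consistent, arithmetically
correct machine:

  ~[]f (its own consistency) is true about it:  G* proves ~[]f
  but it is not among its theorems:             G does not prove ~[]f

So the machine's own consistency is a simple example of a truth about the
machine that the machine cannot prove, and hence cannot communicate in the
relevant sense.)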
>>
>>
>> Given the above, what would be the shortest program with these properties?
>>
>> Is the Mars Rover conscious?
>>
>
> ​Don't forget that the acid test is Yes (or No) Doctor.​ So AFAICT the
> only way to assess whether the Mars Rover is conscious would be against the
> same criteria as for any other independent agent. With human persons,
> assuming truthfulness and consistency, if for example you tell me that you
> see an apple, and by this you do not instead mean that you have somehow
> indirectly figured out that your body and an apple stand in some
> third-person relation, I typically accept your statement as the
> behavioural corollary of the epistemic truth to which you refer.
>
>
> To take a more realistic example.
>

​I do so love your appropriation of ​the terms 'real' and 'realistic' to
your own theories.

> Suppose the rover says, "My battery is low."  You might consult telemetry
> and see that the rover's battery is indeed low.  You would conclude that
> the rover has some self-awareness.
>
> Suppose the rover says "I see an interesting rock formation 200 meters
> away on a heading of 105deg."  You as a rover engineer would consult
> telemetry video and see a rock formation at that location, but not being a
> geologist you might just defer to the rover's judgement and superior
> instrumentation as to whether it was interesting.  So you're attributing
> internal judgements to the rover which are not communicable to
> you...although they may be to someone else more familiar with geology and
> how the rover's instruments work.
>
> Suppose the rover turns 360deg in place.  You consult the telemetry and
> see that the drive wheel on one side was not turning because it is not working.
> You query the rover and it replies, "I don't know why I turned around."
> Then you infer that the rover has a subconscious.
>
> None of this requires any knowledge of whether the rover can prove
> Gödel's theorem, can't know what machine it is, or knows Löb's theorem.
>

​Fine. I'm perfectly happy for you to propose a theory of mind that is, in
the final analysis, entirely coterminous with intelligent behaviour. But
it's not a counter-argument to computationalism or its counter-intuitive
implications. That is a pity, because the real motive behind my pitiful
attempts at articulating a frankly incomplete understanding of the
matter is the anticipation of discovering convincing counter-arguments.
I still have hopes.

David

PS By the way, I see that unfortunately in one of my other responses to you
I typed irreducible instead of reducible (bloody tablet). You may well have
already noticed and made the necessary adjustment, but if you haven't, it
should be fairly obvious.

>
>
> Brent
>
>
>
> I suppose, notwithstanding this, if I were a computationalist Doctor, I
> might have some theory about the shortest program with the relevant
> characteristics to be associated with epistemic phenomena of the relevant
> sort. But I'm not and I don't, so please don't ask me to operate.
>
> David
>
> Jason
>>
>>
>>>
>>> Any machine possessing the foregoing features is in principle conscious,
>>> in the sense of having implicit self-referential epistemic access to
>>> non-communicable truths that are nonetheless entailed by its explicit and
>>> communicable 'beliefs'. Of course it's a long step from the toy model to
>>> the human person, but I think one can still discern the thread. The
>>> machine's 'beliefs' can now be represented as communicable and explicit
>>> third-person behaviour with respect to a 'physical' environment in which it
>>> is embedded; however, associated with this behaviour there are true but
>>> non-communicable epistemic phenomena to which the behaviour indirectly
>>> refers (i.e. they are true *about* the machine). An example of this would
>>> be any statement (or judgment, in the usual terminology of the field) you
>>> might make about your own phenomenal experience, as in "I see an apple". In
>>> behavioral terms, this statement or judgment is cashed out purely as
>>> physical action (neurocognitive, neuromuscular, etc.). In epistemic terms,
>>> however, it cashes out as a truth (tautological and hence indubitable)
>>> implied by that same behaviour: in other words, that it is in fact
>>> the case that you really do see an apple.
>>>
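(If I have Bruno's usual presentation right, this is where the Theaetetus
move comes in, so let me make it explicit; again my own gloss. The
communicable belief corresponds to []p, the provable assertion, while the
first-person knowledge corresponds to []p & p, the belief together with its
truth. For a correct machine these are equivalent in fact, but the machine
cannot prove []p -> p in general (by Löb's theorem it could only do so for
p it already proves), so it cannot identify the two from the inside. That
is one way of seeing why the epistemic side is real yet not communicable
as such.)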
>>> It's important to take into account that all the terms employed are
>>> given a precise technical sense in mathematical logic, emulable in
>>> computation. Bruno's computational schema is an attempt (motivated by the
>>> CTM) to derive physics, or rather its consistent appearance, from the more
>>> general ontology of computability. Self-referential machines are central to
>>> this attempt, and the epistemic consequences of their 'beliefs' are what
>>> ultimately distinguish persons from the associated machine behaviour. Since
>>> many machines implement, both behaviourally and 'truthfully' so to speak,
>>> essentially the same person, over some 'fuzzy spectrum' of characteristics
>>> and outcomes, it can consequently be said that persons so defined possess
>>> many bodies.
>>>
>>> Does this help at all?
>>>
>>> David
>>>
>>>
>>>
>>> Bruce
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

Reply via email to