Re: A new theory of consciousness: conditionalism

2023-08-26 Thread Jason Resch
Thank you, John, for your thoughts. A few notes below:

On Sat, Aug 26, 2023 at 7:17 AM John Clark  wrote:

> On Fri, Aug 25, 2023 at 1:47 PM Jason Resch  wrote:
>
> *> At a high level, states of consciousness are states of knowledge,*
>>
>
> That is certainly true, but what about the reverse: does a high state of
> knowledge imply consciousness? I'll never be able to prove it, but I
> believe it does. Of course, for this idea to be practical there must be
> some way of demonstrating that the thing in question does indeed have a
> high state of knowledge, and the test for that is the Turing Test; the
> fact that my fellow human beings have passed the Turing Test is the
> only reason I believe that I am NOT the only conscious being in the
> universe.
>

Yes, I believe there's an identity between states of knowledge and states
of consciousness. That is almost implicit in the definition of
consciousness:
con- means "with"
-scious- means "knowledge"
-ness means "the state of being"
con-scious-ness -> the state of being with knowledge.

Then, the question becomes: what is a state of knowledge? How do we
implement or instantiate a knowledge state, physically or otherwise?

My intuition is that it requires a process of differentiation, such that
some truth becomes entangled with the system's existence.


>
> *> A conditional is a means by which a system can enter/reach a state of
>> knowledge (i.e. a state of consciousness) if and only if some fact is true.*
>>
>
> Then "conditional" is not a useful philosophical term because you could be
> conscious of and know a lot about Greek mythology, but none of it is true
> except for the fact that Greek mythology is about Greek mythology.
>

Yes. Here, the truth doesn't have to be some objective truth; it can be the
truth of what causes one's mind to reach a particular state. E.g., here it
would be the truth of what particular sensory data came into the scholar's
eyes as he read a book of Greek mythology.



> >  *Consciousness is revealed as an immaterial, ephemeral relation, not
>> any particular physical thing we can point at or hold.*
>>
>
> I mostly agree with that, but that doesn't imply there's anything mystical
> going on; information is also immaterial and you can't point to *ANY
> PARTICULAR* physical thing
>

I agree.

> (although you can always point to *SOME* physical thing) and I believe
> it's a brute fact that consciousness is the way information feels when it
> is being processed intelligently.
>

I like this analogy, but I think it is incomplete. Can information (by
itself) feel? Can information (by itself) have meaning?

I see value in making a distinction between information and "the system to
be informed." I think the pair are necessary for there to be meaning, or
consciousness.


> However, there is nothing ephemeral about information; as far as we can tell
> the laws of physics are unitary, that is, information can't be destroyed
> and the probabilities of all possible outcomes must add up to 100%. For a
> while Stephen Hawking thought that black holes destroyed information, but he
> later changed his mind; Kip Thorne still thinks it may do so, but he is in
> the minority.
>

I agree information can't be destroyed. But note that what I called
ephemeral was the conditional relation, which (at least usually) seems to
arise and persist for only a short time.



>
> *> All we need to do is link some action to a state of knowledge.*
>>
>
> At the most fundamental level that pretty much defines what a computer
> programmer does to make a living.
>

Yes.



> * > It shows the close relationship between consciousness and information,
>> where information is defined as "a difference that makes a difference",*
>>
>
> And the smallest difference that still makes a difference is the
> difference between one and zero, or on and off.
>

The bit is the simplest unit of information, but interestingly, there can
also be fractional bits. For example, if an event has a 75% chance of
occurring, like two coin tosses not both coming up heads, and I tell you
that the two tosses were indeed not both heads, then I have only
communicated -log2(0.75) ~= 0.415 bits of information to you.
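
As a quick sanity check, here is a short Python sketch of that calculation
(nothing beyond the standard math module is assumed):

import math

def surprisal_bits(p):
    # Information content, in bits, of learning that an event of
    # probability p actually occurred: -log2(p).
    return -math.log2(p)

print(surprisal_bits(0.75))  # ~0.415 bits: "not both heads" (3 of the 4 equally likely outcomes)
print(surprisal_bits(0.5))   # exactly 1 bit: a single fair coin flip
print(surprisal_bits(0.25))  # 2 bits: "both heads" is rarer, hence more informative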



> > *It shows a close relationship between consciousness and
>> computationalism,*
>>
>
> I strongly agree with that,  it makes no difference if the thing doing
> that computation is carbon-based and wet and squishy, or silicon-based and
> dry and hard.
>

Absolutely  


>  It is also supportive of functionalism and its multiple
>> realizability, as there are many possible physical arrangements that lead
>> to conditionals.
>
>
> YES!
>
> *> It's clear that neural network firing is all about conditionals and
>> combining them: whether or not a neuron will fire depends on which other
>> neurons have fired, and this binds up many conditional relations into one
>> larger one. It seems no intelligent (reactive, deliberative, contemplative,
>> reflective, etc.) process can be made that does not contain at least some
>> conditionals.*

Re: A new theory of consciousness: conditionalism

2023-08-26 Thread John Clark
On Fri, Aug 25, 2023 at 1:47 PM Jason Resch  wrote:

*> At a high level, states of consciousness are states of knowledge,*
>

That is certainly true, but what about the reverse: does a high state of
knowledge imply consciousness? I'll never be able to prove it, but I
believe it does. Of course, for this idea to be practical there must be
some way of demonstrating that the thing in question does indeed have a
high state of knowledge, and the test for that is the Turing Test; the
fact that my fellow human beings have passed the Turing Test is the only
reason I believe that I am NOT the only conscious being in the universe.

*> A conditional is a means by which a system can enter/reach a state of
> knowledge (i.e. a state of consciousness) if and only if some fact is true.*
>

Then "conditional" is not a useful philosophical term because you could be
conscious of and know a lot about Greek mythology, but none of it is true
except for the fact that Greek mythology is about Greek mythology.

>  *Consciousness is revealed as an immaterial, ephemeral relation, not any
> particular physical thing we can point at or hold.*
>

I mostly agree with that, but that doesn't imply there's anything mystical
going on; information is also immaterial and you can't point to *ANY
PARTICULAR* physical thing (although you can always point to *SOME* physical
thing), and I believe it's a brute fact that consciousness is the way
information feels when it is being processed intelligently. However, there
is nothing ephemeral about information; as far as we can tell the laws of
physics are unitary, that is, information can't be destroyed and the
probabilities of all possible outcomes must add up to 100%. For a while
Stephen Hawking thought that black holes destroyed information, but he later
changed his mind; Kip Thorne still thinks it may do so, but he is in the
minority.

*> All we need to do is link some action to a state of knowledge.*
>

At the most fundamental level that pretty much defines what a computer
programmer does to make a living.

* > It shows the close relationship between consciousness and information,
> where information is defined as "a difference that makes a difference",*
>

And the smallest difference that still makes a difference is the difference
between one and zero, or on and off.

> *It shows a close relationship between consciousness and
> computationalism,*
>

I strongly agree with that,  it makes no difference if the thing doing that
computation is carbon-based and wet and squishy, or silicon-based and dry
and hard.

>  It is also supportive of functionalism and its multiple realizability,
> as there are many possible physical arrangements that lead to conditionals.


YES!

*> It's clear that neural network firing is all about conditionals and
> combining them: whether or not a neuron will fire depends on which other
> neurons have fired, and this binds up many conditional relations into one
> larger one. It
> seems no intelligent (reactive, deliberative, contemplative, reflective,
> etc.) process can be made that does not contain at least some conditionals.
> As without them, there can be no responsiveness. This explains the
> biological necessity to evolve conditionals and apply them in the guidance
> of behavior. In other words, consciousness (states of knowledge) would be
> strictly necessary for intelligence to evolve.*
>

I agree with all of that.
John K Clark    See what's on my new list at Extropolis



Re: A new theory of consciousness: conditionalism

2023-08-25 Thread Stathis Papaioannou
On Sat, 26 Aug 2023 at 03:47, Jason Resch  wrote:

> I would like to propose a theory of consciousness which I think might have
> some merit, but more importantly I would like to see what criticism others
> might have for it.
>
> I have chosen the name "conditionalism" for this theory, as it is based
> loosely on the notion of conditional statements as they appear in
> regular language, mathematics, and programming languages.
>
> At a high level, states of consciousness are states of knowledge, and
> knowledge is embodied by the existence of some relation to some truth.
>
> A conditional is a means by which a system can enter/reach a state of
> knowledge (i.e. a state of consciousness) if and only if some fact is true.
> A simple example using a programming language:
>
> if (x >= 5) {
>// knowledge state of x being greater than or equal to 5
> }
>
> I think this way of considering consciousness, as that existing between
> those two braces: { } can explain a lot.
>
> 1. Consciousness is revealed as an immaterial, ephemeral relation, not any
> particular physical thing we can point at or hold.
>
> 2. It provides for a straightforward way to bind complex states of
> consciousness through conjunction, for example:
> If (a and b) {
> // knowledge of the simultaneous truth of both a and b
> }
> This allows states of consciousness to be arbitrarily complex and varied.
>
> 3. It explains the causal efficacy of states of consciousness. All we need
> to do is link some action to a state of knowledge. Consciousness is then
> seen as antecedent to, and a prerequisite for, any intelligent behavior.
> For example:
> If (light == color.red) {
> slowDown();
> }
>
> 4. It shows the close relationship between consciousness and information,
> where information is defined as "a difference that makes a difference", as
> conditionals are all about what differences make which differences.
>
> 5. It shows a close relationship between consciousness and
> computationalism, since computations are all about counterfactual and
> conditional relations.
>
> 6. It is also supportive of functionalism and its multiple realizability,
> as there are many possible physical arrangements that lead to conditionals.
>
> 7. It's clear that neural network firing is all about conditionals and
> combining them: whether or not a neuron will fire depends on which other
> neurons have fired, and this binds up many conditional relations into one
> larger one.
>
> 8. It seems no intelligent (reactive, deliberative, contemplative,
> reflective, etc.) process can be made that does not contain at least some
> conditionals. As without them, there can be no responsiveness. This
> explains the biological necessity to evolve conditionals and apply them in
> the guidance of behavior. In other words, consciousness (states of
> knowledge) would be strictly necessary for intelligence to evolve.
>

I agree with all this and as usual it is very well put and explained. What
I have difficulty with is the concept of implementation. This is
straightforward if we consider cases where the machine interacts with its
environment, but puzzling when we consider similar physical processes in a
different situation where such interaction is not possible. A certain
sequence of movements of gears and springs may be implementing completely
different computations or experiences in different machines, just as a
certain string of Latin characters might mean different things in different
languages. The semantics seems to depend on the observer, and there may be
multiple possible observers, no observer, or, in the case of a conscious
computation, a self-generated observer; in the case of an inputless
conscious computation, that self-generated observer depends on no
external observer or other environmental input.

> --
Stathis Papaioannou



A new theory of consciousness: conditionalism

2023-08-25 Thread Jason Resch
I would like to propose a theory of consciousness which I think might have
some merit, but more importantly I would like to see what criticism others
might have for it.

I have chosen the name "conditionalism" for this theory, as it is based
loosely on the notion of conditional statements as they appear in
regular language, mathematics, and programming languages.

At a high level, states of consciousness are states of knowledge, and
knowledge is embodied by the existence of some relation to some truth.

A conditional is a means by which a system can enter/reach a state of
knowledge (i.e. a state of consciousness) if and only if some fact is true.
A simple example using a programming language:

if (x >= 5) {
   // knowledge state of x being greater than or equal to 5
}

I think this way of considering consciousness, as that existing between
those two braces: { } can explain a lot.

1. Consciousness is revealed as an immaterial, ephemeral relation, not any
particular physical thing we can point at or hold.

2. It provides for a straightforward way to bind complex states of
consciousness through conjunction, for example:
If (a and b) {
// knowledge of the simultaneous truth of both a and b
}
This allows states of consciousness to be arbitrarily complex and varied.

3. It explains the causal efficacy of states of consciousness. All we need
to do is link some action to a state of knowledge. Consciousness is then
seen as antecedent to, and a prerequisite for, any intelligent behavior.
For example:
If (light == color.red) {
slowDown();
}

4. It shows the close relationship between consciousness and information,
where information is defined as "a difference that makes a difference", as
conditionals are all about what differences make which differences.

5. It shows a close relationship between consciousness and
computationalism, since computations are all about counterfactual and
conditional relations.

6. It is also supportive of functionalism and its multiple realizability,
as there are many possible physical arrangements that lead to conditionals.

7. It's clear that neural network firing is all about conditionals and
combining them: whether or not a neuron will fire depends on which other
neurons have fired, and this binds up many conditional relations into one
larger one (see the small sketch after point 8 below).

8. It seems no intelligent (reactive, deliberative, contemplative,
reflective, etc.) process can be made that does not contain at least some
conditionals. As without them, there can be no responsiveness. This
explains the biological necessity to evolve conditionals and apply them in
the guidance of behavior. In other words, consciousness (states of
knowledge) would be strictly necessary for intelligence to evolve.
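
To make point 7 concrete, here is a minimal Python sketch of a neuron read as
one big conditional (a McCulloch-Pitts-style threshold unit; the particular
weights and threshold are invented purely for illustration):

def neuron_fires(upstream, weights, threshold):
    # The neuron enters its "fired" state if and only if the weighted
    # evidence from the neurons feeding into it crosses the threshold:
    # one large conditional composed of many smaller ones.
    return sum(w * x for w, x in zip(weights, upstream)) >= threshold

upstream = [1, 0, 1, 1]            # which input neurons have fired
weights  = [0.5, 0.9, 0.3, 0.4]    # how strongly each input counts

if neuron_fires(upstream, weights, threshold=1.0):
    pass  # "knowledge state": the combined upstream condition held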


Jason



Re: Quantum experiments add weight to a fringe theory of consciousness

2022-04-20 Thread Alan Grayson


On Tuesday, April 19, 2022 at 11:29:42 AM UTC-6 meeke...@gmail.com wrote:

> Oh no!  Not microtubules again.  Those "long tiny structures found in brain 
> cells"...and in every other cell of your body.  Vic Stenger used to quip, 
> "Well that would explain how men can think with their balls."
>
> Brent
>

 Akhenaten had a clue. AG

>
>
> On 4/19/2022 8:36 AM, John Clark wrote:
>
> Experiments on how anaesthetics alter the behaviour of tiny structures 
> found in brain cells bolster the controversial idea that quantum effects in 
> the brain might explain consciousness
> read more: 
> https://www.newscientist.com/article/2316408-quantum-experiments-add-weight-to-a-fringe-theory-of-consciousness/
>  
>
>



Re: Quantum experiments add weight to a fringe theory of consciousness

2022-04-19 Thread Brent Meeker
Oh no!  Not microtubules again.  Those "long tiny structures found in 
brain cells"...and in every other cell of your body.  Vic Stenger used 
to quip, "Well that would explain how men can think with their balls."


Brent

On 4/19/2022 8:36 AM, John Clark wrote:
Experiments on how anaesthetics alter the behaviour of tiny structures 
found in brain cells bolster the controversial idea that quantum 
effects in the brain might explain consciousness
read more: 
https://www.newscientist.com/article/2316408-quantum-experiments-add-weight-to-a-fringe-theory-of-consciousness/ 








Re: Quantum experiments add weight to a fringe theory of consciousness

2022-04-19 Thread Giulio Prisco
On 2022. Apr 19., Tue at 17:37, John Clark  wrote:

> Experiments on how anaesthetics alter the behaviour of tiny structures
> found in brain cells bolster the controversial idea that quantum effects in
> the brain might explain consciousness
> read more:
> https://www.newscientist.com/article/2316408-quantum-experiments-add-weight-to-a-fringe-theory-of-consciousness/
>

Interesting. This is (I guess) the same article unpaywalled:
https://www.scientiststudy.com/2022/04/quantum-experiments-add-weight-to.html





Quantum experiments add weight to a fringe theory of consciousness

2022-04-19 Thread John Clark
Experiments on how anaesthetics alter the behaviour of tiny structures
found in brain cells bolster the controversial idea that quantum effects in
the brain might explain consciousness
read more:
https://www.newscientist.com/article/2316408-quantum-experiments-add-weight-to-a-fringe-theory-of-consciousness/



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-13 Thread Lawrence Crowell
There is no reason consciousness is restricted to only one of these three. 
For that matter, self-reference in the Turing machine sense involves 
information. Function is just another way of thinking of an algorithm.
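
A concrete toy for self-reference in the Turing-machine sense (Kleene's second
recursion theorem in miniature) is a quine: a program whose output is its own
source. A minimal Python example, offered purely as an illustration:

# Kleene-style self-reference: the program carries its own description as data.
s = '# Kleene-style self-reference: the program carries its own description as data.\ns = {!r}\nprint(s.format(s))'
print(s.format(s))

Running it prints exactly those three lines; the same trick, a program
operating on its own description, is what the recursion theorem formalizes.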

LC

On Friday, June 18, 2021 at 1:46:39 PM UTC-5 Jason wrote:

> In your opinion who has offered the best theory of consciousness to date, 
> or who do you agree with most? Would you say you agree with them 
> wholeheartedly or do you find points if disagreement?
>
> I am seeing several related thoughts commonly expressed, but not sure 
> which one or which combination is right.  For example:
>
> Hofstadter/Marchal: self-reference is key
> Tononi/Tegmark: information is key
> Dennett/Chalmers: function is key
>
> To me all seem potentially valid, and perhaps all three are needed in some 
> combination. I'm curious to hear what other viewpoints exist or if there 
> are other candidates for the "secret sauce" behind consciousness I might 
> have missed.
>
> Jason
>
>



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-13 Thread Tomas Pales

On Tuesday, July 13, 2021 at 4:39:25 PM UTC+2 Bruno Marchal wrote:

> but I would say that self-reference in the sense of intrinsic identity of 
> an object explains qualitative properties of consciousness (qualia).
>
>
> But what is a object? What is intrinsic identity? And why that would give 
> qualia?
>

I think reality consists of two basic kinds of object: collections and 
properties. Collections are also known as combinations or sets. Properties 
are also known as universals or general/abstract objects. For example, a 
particular table is a collection, but table in general, or table-ness, is a 
property (that is possessed by particular tables). Collections have parts 
while properties have instances. Properties as real objects are 
controversial; many people think they are just words (yet these words 
apparently refer to something in reality). Collections as real objects are 
somewhat controversial too; people might hesitate to regard a collection of 
tables as a real object even though they don't mind regarding a single 
table as a real object despite it being a collection too (of atoms, for 
example).

Collections are rigorously defined in various axiomatizations of set 
theory. All of these axiomatizations refer to real collections as long as 
they are consistent (which may be impossible to prove due to Godel's second 
incompleteness theorem). Pure collections are built up only from 
collections, with empty collections at the bottom (or maybe some 
collections have no bottom, as long as this is consistent). Properties can 
constitute collections too but these would not be pure collections since 
properties are not collections. More general properties have instances in 
less general properties (for example "color" has an instance in "green") 
and ultimately they have instances in collections (for example "green" has 
an instance in a particular green table); instantiation ends in collections 
(for example a particular table is not a property of anything and so it has 
no instances); this is the reason why set theory can represent all 
mathematical properties as collections. All properties are ultimately 
instantiated as collections.
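
A standard illustration of pure collections bottoming out in the empty
collection is the von Neumann construction of the natural numbers; a small
Python sketch using frozensets (purely illustrative):

zero = frozenset()               # 0 = {}
one  = frozenset({zero})         # 1 = {{}} = {0}
two  = frozenset({zero, one})    # 2 = {{}, {{}}} = {0, 1}

# Each number is the collection of all smaller numbers, built from
# nothing but collections, with the empty collection at the bottom.
print(len(zero), len(one), len(two))   # 0 1 2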

As for intrinsic identity, it is something that an object is in itself, as 
opposed to its relations to other objects. Without the intrinsic identity 
there would be nothing standing in relations, so there would be no 
relations either. Intrinsic identities and extrinsic identities (relations) 
are inseparable. Surely there are relations between relations but 
ultimately relations need to be grounded in intrinsic identities of 
objects. Since qualia are not relations or structures of relations but 
something monolithic, indivisible, unstructured, they might be the 
intrinsic identities. Note that intrinsic identities and relations are 
dependent on each other since they constitute two kinds of identity of the 
same object. That could explain why qualia like colors are mutually 
dependent on relations like wavelengths of photons or neural structures. 

>> I imagine that every object has two kinds of identity: intrinsic identity
>> (something that the object is in itself)
>
>
> To be honest, I don’t understand. To be sure, I like mechanism because it 
> provides a clear explanation of where the physical appearance comes from, 
> without having us speculate on some “physical” object which would be 
> primary, as we have no evidence for this, and it makes the mind-body 
> problem unsolvable.
>

Numbers are relations. For example, the number 2 is a relation between 2 
objects. If there were just relations and no intrinsic identities of 
objects then there would be relations between nothings. For example, there 
would be 2 nothings, which seems absurd.



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-13 Thread Bruno Marchal


> On 4 Jul 2021, at 21:17, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> On 7/4/2021 4:46 AM, Bruno Marchal wrote:
>>> On 19 Jun 2021, at 13:17, smitra  wrote:
>>> 
>>> Information is the key.  Conscious agents are defined by precisely that 
>>> information that specifies the content of their consciousness. This means 
>>> that a conscious agent can never be precisely located in some physical 
>>> object, because the information that describes the conscious experience 
>>> will always be less detailed than the information present in the exact 
>>> physical description of an object such a brain. There are always going to 
>>> be a very large self localization ambiguity due to the large number of 
>>> different possible brain states that would generate exactly the same 
>>> conscious experience. So, given whatever conscious experience the agent 
>>> has, the agent could be in a very large number of physically distinct 
>>> states.
>>> 
>>> The simpler the brain and the algorithm implemented by the brain, the 
>>> larger this self-localization ambiguity becomes because smaller algorithms 
>>> contain less detailed information. Our conscious experiences localizes us 
>>> very precisely on an Earth-like planet in a solar system that is very 
>>> similar to the one we think we live in. But the fly walking on the wall of 
>>> the room I'm in right now may have some conscious experience that is 
>>> exactly identical to that of another fly walking on the wall of another 
>>> house in another country 600 years ago or on some rock in a cave 35 million 
>>> year ago.
>>> 
>>> The conscious experience of the fly I see on the wall is therefore not 
>>> located in the particular fly I'm observing.
> 
> This seems to equate "a conscious experience" with "an algorithm”. 

Not sure if you ask Saibal or me.

Obviously, it is as wrong to identify consciousness with a brain as with an 
algorithm. It is the same error, since a brain (or rather its mechanistically 
relevant part) is a finite word/program, written in some subset of the physical 
laws.



> But an algorithm is an extended thing that in general has branches 
> representing counterfactuals.

That’s not an algorithm, but a computations. The counterfactual are the 
differentiating branches of the computations.






> 
>>> This is i.m.o. the key thing you get from identifying consciousness with 
>>> information, it makes the multiverse an essential ingredient of 
>>> consciousness. This resolves paradoxes you get in thought experiments where 
>>> you consider simulating a brain in a virtual world and then argue that 
>>> since the simulation is deterministic, you could replace the actual 
>>> computer doing the computations by a device playing a recording of the 
>>> physical brain states. This argument breaks down if you take into account 
>>> the self-localization ambiguity
> 
> What is this "self" of which you speak?

Again, ask Saibal. I did not write the text above. I never use the term 
“information”, because it is confusing, as we use it with its first person meaning 
and its third person meaning all the time, and the whole mind-body problem 
consists in handling all this carefully, taking into account all modes of self 
implied by incompleteness.

The theory is there. It is not known because physicists come with the right 
question and the wrong metaphysics, while logicians come up with the right 
metaphysics but the wrong question. I am afraid also that the reaction of the 
logicians to Penrose's use of Gödel's theorem (against Mechanism) has deterred the 
physicists from even studying logic and Gödel's theorem.

Yet, with mechanism, we get a simple explanation, without any ontological 
commitment except for at least one universal machinery (to get machines, and 
the numbers are enough) of both qualia, quanta, and their mathematical 
relations, but also their necessarily non mathematical relations.

Bruno



> 
> 
> Brent
> 
> 
>>> and consider that this multiverse aspect is an essential part of 
>>> consciousness due to counterfactuals necessary to define the algorithm 
>>> being realized, which is impossible in a deterministic single-world setting.
>> OK. Not only true, but it makes physics into a branch of mathematical logic, 
>> partially embedded in arithmetic  (and totally embedded in the semantic of 
>> arithmetic, which of course cannot be purely arithmetical, as the machine 
>> understand already).
>> 
>> I got the many-dreams, or many histories of the physical reality from the 
>> many computations in arithmetic we

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-13 Thread Bruno Marchal

> On 4 Jul 2021, at 17:40, Tomas Pales  wrote:
> 
> 
> On Friday, June 18, 2021 at 8:46:39 PM UTC+2 Jason wrote:
> In your opinion who has offered the best theory of consciousness to date, or 
> who do you agree with most? Would you say you agree with them wholeheartedly 
> or do you find points if disagreement?
> 
> I am seeing several related thoughts commonly expressed, but not sure which 
> one or which combination is right.  For example:
> 
> Hofstadter/Marchal: self-reference is key
> 
> I don't know if self-reference in the sense of Godel sentences is relevant to 
> consciousness

There are two senses used in computer science, and well captured by the first 
and second recursion theorem, also called fixed point theorem. The second 
recursion theorem is more important and more “intensional”, taking the shape of 
the (relative) code more into account.

The real surprise is that the Gödel-Löbian self-reference, by its clear and 
transparent splitting along truth (axiomatised by the modal logic G*, which I 
call “the theology of the sound machine”) and proof (axiomatised by G), 
justifies again, like Theaetetus, and unlike Socrates, the modes of truth 
described since Parmenides, Plato, Moderatus of Gades, Plotinus… Damascius.

For example, written in modal logic, consistency (NOT PROVABLE FALSE) can be 
written ~[]f, or equivalently <>t, and Gödel's second incompleteness theorem is 
<>t -> ~[]<>t, or equivalently <>t -> <>[]f. Löb's theorem ([]([]p->p)->[]p) 
generalises this, and is the main axiom of both G and G*. G* has all the 
theorems of G, plus []A -> A, but has no necessitation rule (you cannot infer 
[]A from A).
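
For readers who prefer standard notation, those formulas transcribe as follows
(an editorial rendering, writing [] as \Box and <> as \Diamond):

  \neg\Box\bot \;\equiv\; \Diamond\top                                          % consistency
  \Diamond\top \to \neg\Box\Diamond\top, \quad\text{equivalently}\quad \Diamond\top \to \Diamond\Box\bot   % Gödel's second incompleteness theorem
  \Box(\Box p \to p) \to \Box p                                                 % Löb's theorem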

The main point is that G1* (G* + p->[]p, for the mechanist restriction, as I 
have often explained), proves the equivalence of the five modes

p (truth)
[]p (provable, rationally believable)
[]p & p (rationally knowable)

[]p & <>t (observable)
[]p & <>t & p (sensible)

Yet G, the part of this justifiable by the machine, does not prove any of those 
equivalences. The provable obeys G, the knowable gives a logic of knowledge 
(S4 + a formula by Grzegorczyk), and, as predicted both through thought 
experiment and by Plotinus, the “observable” obeys a quantum logic, and the 
sensible an intuitionistic quantum logic, which allows us to distinguish clearly 
the quanta, as first plural sharable qualia, solving some difficulties in the 
“mind-body” problem.

This theory is justified for anybody accepting a digital physical computer 
brain transplant. 



> but I would say that self-reference in the sense of intrinsic identity of an 
> object explains qualitative properties of consciousness (qualia).

But what is a object? What is intrinsic identity? And why that would give 
qualia?



> I imagine that every object has two kinds of identity: intrinsic identity 
> (something that the object is in itself)

To be honest, I don't understand. To be sure, I like mechanism because it 
provides a clear explanation of where the physical appearance comes from, 
without having us speculate on some “physical” object which would be 
primary, as we have no evidence for this, and it makes the mind-body problem 
unsolvable.

Are you OK if your daughter marries a man who got an artificial digital brain 
after a car accident?



> and extrinsic identity (relations of the object to all other objects). 
> Intrinsic identity is something qualitative (non-relational), a quality that 
> stands in relations to other qualities, so it seems like a natural candidate 
> for the qualitative properties of consciousness.

This brings back essentialism.

Here, you might appreciate that the machine ([]p) is unable to define “[]p & 
p”, except by studying a simpler machine than itself, and then she can lift 
that theology by faith in its own soundness, which she can neither prove, nor 
even express in its language (by results analogous to the non-definability of 
truth (Tarski-Gödel, Thomason, Montague, ...)).

The qualia appear to be measurable, but non communicable or rationally 
justifiable.

The universal+ machine knows that she has a soul, and she knows that she can 
refute *all* complete theories made on that soul. She knows already that her 
soul is NOT a machine, nor even anything describable in the third person, 
redoing Heraclitus and Brouwer, even Bergson, on that subject. S4Grz is an 
incredible product of G*, a formal theory of something that no machine can 
define or formalise, without invoking a notion of truth, which is indeed a key 
for qualia, and knowledge.




> All relations are instances of the similarity relation (similarities between 
> qualities arising from common and different properties of the qualities), of 
> which a particular kind of relation deserves a special mention: the 
> composition relation, also known as the set membership re

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-06 Thread John Clark
On Sun, Jul 4, 2021 at 7:56 AM Bruno Marchal  wrote:

>> Suppose there is an AI that behaves more intelligently than the most
>> intelligent human who ever lived, however when the machine is opened up to
>> see how this intelligence is actually achieved one consciousness theory
>> doesn't like what it sees and concludes that despite its great intelligence
>> it is not conscious, but a rival consciousness theory does like what it
>> sees and concludes it is conscious. Both theories can't be right although
>> both could be wrong, so how on earth could you ever determine which, if
>> any, of the 2 consciousness theories are correct?
>
>
> *> A consciousness theory has no value if it does not make testable
> prediction.*
>

Truer words were never spoken!

* > But that is the case for the theory of consciousness brought by the
> universal machine/number in arithmetic. They give the logic of the
> observable, and indeed until now that fits with quantum logic. The mechanist
>  brain-mind identity theory would be confirmed if Bohm's hidden variable
> theory was true, or if we could find evidence that the physical cosmos
> is unique, or that Newton physics was the only correct theory, etc. *
>

Well, all that is real nice, but it doesn't answer my question. If an AI
was more intelligent than any human who ever lived and you opened it up to
see how it achieved this great intelligence, what would make you conclude
that it was not conscious and what would make you conclude that it was? I'm
a practical man and I'm not interested in vague generalities, if it's not
intelligent behavior then what specific observable should I look for to
determine if something is conscious?

*> With the induction axioms, the machine gets Löbian (and consciousness
> becomes basically described by the Grzegorczyk formula []([](p->[]p) -> p) -> p)*
>

Well that's just super, but how do I use that in the real world in a
practical experiment to determine if your theory is correct or not, and
even if it is correct how do I use that to determine if an intelligent
entity is conscious or not ?

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-04 Thread 'Brent Meeker' via Everything List



On 7/4/2021 4:46 AM, Bruno Marchal wrote:

On 19 Jun 2021, at 13:17, smitra  wrote:

Information is the key.  Conscious agents are defined by precisely that 
information that specifies the content of their consciousness. This means that 
a conscious agent can never be precisely located in some physical object, 
because the information that describes the conscious experience will always be 
less detailed than the information present in the exact physical description of 
an object such a brain. There are always going to be a very large self 
localization ambiguity due to the large number of different possible brain 
states that would generate exactly the same conscious experience. So, given 
whatever conscious experience the agent has, the agent could be in a very large 
number of physically distinct states.

The simpler the brain and the algorithm implemented by the brain, the larger 
this self-localization ambiguity becomes because smaller algorithms contain 
less detailed information. Our conscious experiences localizes us very 
precisely on an Earth-like planet in a solar system that is very similar to the 
one we think we live in. But the fly walking on the wall of the room I'm in 
right now may have some conscious experience that is exactly identical to that 
of another fly walking on the wall of another house in another country 600 
years ago or on some rock in a cave 35 million year ago.

The conscious experience of the fly I see on the wall is therefore not located 
in the particular fly I'm observing.


This seems to equate "a conscious experience" with "an algorithm".  But 
an algorithm is an extended thing that in general has branches 
representing counterfactuals.



This is i.m.o. the key thing you get from identifying consciousness with 
information, it makes the multiverse an essential ingredient of consciousness. 
This resolves paradoxes you get in thought experiments where you consider 
simulating a brain in a virtual world and then argue that since the simulation 
is deterministic, you could replace the actual computer doing the computations 
by a device playing a recording of the physical brain states. This argument 
breaks down if you take into account the self-localization ambiguity


What is this "self" of which you speak?


Brent



and consider that this multiverse aspect is an essential part of consciousness 
due to counterfactuals necessary to define the algorithm being realized, which 
is impossible in a deterministic single-world setting.

OK. Not only true, but it makes physics into a branch of mathematical logic, 
partially embedded in arithmetic  (and totally embedded in the semantic of 
arithmetic, which of course cannot be purely arithmetical, as the machine 
understand already).

I got the many-dreams, or many histories of the physical reality from the many 
computations in arithmetic well before I discovered Everett. Until that moment 
I was still thinking that QM was a threat on Mechanism, but of course it is 
only the wave collapse postulate which is contradictory with Mechanism.

We cannot make a computation disappear like we cannot make a number disappear…

Bruno



Saibal


On 18-06-2021 20:46, Jason Resch wrote:

In your opinion who has offered the best theory of consciousness to
date, or who do you agree with most? Would you say you agree with them
wholeheartedly or do you find points if disagreement?
I am seeing several related thoughts commonly expressed, but not sure
which one or which combination is right.  For example:
Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key
To me all seem potentially valid, and perhaps all three are needed in
some combination. I'm curious to hear what other viewpoints exist or
if there are other candidates for the "secret sauce" behind
consciousness I might have missed.
Jason



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-04 Thread Tomas Pales

On Friday, June 18, 2021 at 8:46:39 PM UTC+2 Jason wrote:

> In your opinion who has offered the best theory of consciousness to date, 
> or who do you agree with most? Would you say you agree with them 
> wholeheartedly or do you find points if disagreement?
>
> I am seeing several related thoughts commonly expressed, but not sure 
> which one or which combination is right.  For example:
>
> Hofstadter/Marchal: self-reference is key
>

I don't know if self-reference in the sense of Godel sentences is relevant 
to consciousness but I would say that self-reference in the sense of 
intrinsic identity of an object explains qualitative properties of 
consciousness (qualia). I imagine that every object has two kinds of 
identity: intrinsic identity (something that the object is in itself) and 
extrinsic identity (relations of the object to all other objects). 
Intrinsic identity is something qualitative (non-relational), a quality 
that stands in relations to other qualities, so it seems like a natural 
candidate for the qualitative properties of consciousness. All relations 
are instances of the similarity relation (similarities between qualities 
arising from common and different properties of the qualities), of which a 
particular kind of relation deserves a special mention: the composition 
relation, also known as the set membership relation in set theory, or the 
relation between a whole and its part (or between a combination of objects 
and an object in the combination), which gives rise to a special kind of 
relational identity of an object: the compositional identity, which is 
constituted by the relations of the object to its parts (in other words, it 
is the internal structure of the object - not to be confused with the 
intrinsic identity of the object, which is a non-structural quality!). Set 
theory describes the compositional identity of all possible composite 
objects down to non-composite objects (instances of the empty set).

Since all objects have an intrinsic identity, this is a panpsychist view 
but it seems important to differentiate between different levels or 
intensities of consciousness.
  

> Tononi/Tegmark: information is key
>

Study of neural correlates of consciousness suggests that the level or 
intensity of consciousness of an object depends on the complexity of the 
object's structure. There are two basic approaches to the definition of 
complexity: "disorganized" complexity (which is high in objects that have 
many different and independent (random) parts) and "organized" complexity 
(which is high in objects that have many different but also dependent 
(integrated) parts). It is the organized complexity in a dynamic form that 
seems important for the level of consciousness. Tononi's integrated 
information theory is based on such organized complexity though I don't 
know if his particular specification of the complexity is correct. 
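
As a toy illustration of that "organized complexity" intuition, here is a
short Python sketch using total correlation (a standard measure of integration;
emphatically not Tononi's actual Phi), comparing two independent bits with two
perfectly coupled ones:

import math
from itertools import product

def entropy(dist):
    # Shannon entropy (in bits) of an {outcome: probability} distribution.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    # Sum of the parts' entropies minus the joint entropy: zero when the
    # parts are independent, larger the more they constrain each other.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

independent = {xy: 0.25 for xy in product([0, 1], repeat=2)}  # two unrelated fair bits
coupled     = {(0, 0): 0.5, (1, 1): 0.5}                      # second bit copies the first

print(total_correlation(independent))  # 0.0 bits: no integration
print(total_correlation(coupled))      # 1.0 bit: the parts are fully integrated

On this toy measure the coupled pair counts as "more integrated" even though
both systems are made of the same two one-bit parts.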
 

> Dennett/Chalmers: function is key
>

From the evolutionary perspective it seems important for an organism to be 
able to create internal representations of external objects on different 
levels of composition of reality. Such representations reflect both the 
diversity and regularities of reality and need to be properly integrated to 
have a unified, coordinated influence on the organism's behavior. So the 
organized complexity of the organism's representations seems to be related 
to its functionality. 





Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-04 Thread Bruno Marchal

> On 19 Jun 2021, at 16:02, John Clark  wrote:
> 
> Suppose there is an AI that behaves more intelligently than the most 
> intelligent human who ever lived, however when the machine is opened up to 
> see how this intelligence is actually achieved one consciousness theory 
> doesn't like what it sees and concludes that despite its great intelligence 
> it is not conscious, but a rival consciousness theory does like what it sees 
> and concludes it is conscious. Both theories can't be right although both 
> could be wrong, so how on earth could you ever determine which, if any, of 
> the 2 consciousness theories are correct?


A consciousness theory has no value if it does not make testable prediction. 
But that is the case for the theory of consciousness brought by the universal 
machine/number in arithmetic. They give the logic of the observable, and indeed 
until now that fits with quantum logic.

The mechanist brain-mind identity theory would be confirmed if Bohm's hidden 
variable theory was true, or if we could find evidence that the physical 
cosmos is unique, or that Newton physics was the only correct theory, etc. But 
quantum mechanics saved Mechanism here, and its canonical theory of 
consciousness (defined as a truth that no machine can miss, nor prove, nor 
define without using the notion of truth, immediately knowable, indubitable, 
etc.). 
Consciousness is “just” a semantical fixed point, invariant for all universal 
machines. Without the induction axioms, that consciousness is highly dissociated 
from any computation, from the machine perspective. With the induction axioms, 
the machine gets Löbian (and consciousness becomes basically described by the 
Grzegorczyk formula 
[]([](p->[]p) -> p) -> p).
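
In standard notation (an editorial transcription) that last formula is

  \Box(\Box(p \to \Box p) \to p) \to p

i.e. the Grzegorczyk axiom which, added to S4, gives the logic of the knowable
([]p & p) discussed elsewhere in this thread as S4Grz.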

Bruno



> 
> John K Clark    See what's on my new list at Extropolis 
> <https://groups.google.com/g/extropolis>
> 
> 



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-04 Thread Bruno Marchal


> On 19 Jun 2021, at 13:17, smitra  wrote:
> 
> Information is the key.  Conscious agents are defined by precisely that 
> information that specifies the content of their consciousness. This means 
> that a conscious agent can never be precisely located in some physical 
> object, because the information that describes the conscious experience will 
> always be less detailed than the information present in the exact physical 
> description of an object such as a brain. There is always going to be a very 
> large self-localization ambiguity due to the large number of different 
> possible brain states that would generate exactly the same conscious 
> experience. So, given whatever conscious experience the agent has, the agent 
> could be in a very large number of physically distinct states.
> 
> The simpler the brain and the algorithm implemented by the brain, the larger 
> this self-localization ambiguity becomes because smaller algorithms contain 
> less detailed information. Our conscious experiences localizes us very 
> precisely on an Earth-like planet in a solar system that is very similar to 
> the one we think we live in. But the fly walking on the wall of the room I'm 
> in right now may have some conscious experience that is exactly identical to 
> that of another fly walking on the wall of another house in another country 
> 600 years ago or on some rock in a cave 35 million years ago.
> 
> The conscious experience of the fly I see on the wall is therefore not located 
> in the particular fly I'm observing. This is i.m.o. the key thing you get 
> from identifying consciousness with information, it makes the multiverse an 
> essential ingredient of consciousness. This resolves paradoxes you get in 
> thought experiments where you consider simulating a brain in a virtual world 
> and then argue that since the simulation is deterministic, you could replace 
> the actual computer doing the computations by a device playing a recording of 
> the physical brain states. This argument breaks down if you take into account 
> the self-localization ambiguity and consider that this multiverse aspect is 
> an essential part of consciousness due to counterfactuals necessary to define 
> the algorithm being realized, which is impossible in a deterministic 
> single-world setting.

OK. Not only true, but it makes physics into a branch of mathematical logic, 
partially embedded in arithmetic (and totally embedded in the semantics of 
arithmetic, which of course cannot be purely arithmetical, as the machine 
already understands).

I got the many dreams, or many histories, of the physical reality from the many 
computations in arithmetic well before I discovered Everett. Until that moment 
I was still thinking that QM was a threat to Mechanism, but of course it is 
only the wave-collapse postulate which contradicts Mechanism. 

We cannot make a computation disappear, any more than we can make a number 
disappear…

Bruno


> 
> Saibal
> 
> 
> On 18-06-2021 20:46, Jason Resch wrote:
>> In your opinion who has offered the best theory of consciousness to
>> date, or who do you agree with most? Would you say you agree with them
>> wholeheartedly or do you find points of disagreement?
>> I am seeing several related thoughts commonly expressed, but not sure
>> which one or which combination is right.  For example:
>> Hofstadter/Marchal: self-reference is key
>> Tononi/Tegmark: information is key
>> Dennett/Chalmers: function is key
>> To me all seem potentially valid, and perhaps all three are needed in
>> some combination. I'm curious to hear what other viewpoints exist or
>> if there are other candidates for the "secret sauce" behind
>> consciousness I might have missed.
>> Jason

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-04 Thread Bruno Marchal

> On 19 Jun 2021, at 02:18, 'Brent Meeker' via Everything List 
>  wrote:
> 
> I'm most with Dennett.  I see consciousness as having several different 
> levels, which are also different levels of self-reference.  

Different modes, yes (“level” is already used to describe the Doctor’s coding 
description of my brain).

The 8 main modes are given by 

p
[]p (which gives two modes as they split on proof/truth)
[]p & p
[]p & <>t (idem)
[]p & <>t & p (idem)

p is for any partial computable (sigma_1) proposition.
[]p is Gödel’s beweisbar (provability) predicate; <>p abbreviates ~[]~p.
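
(A quick tally of how the list above reaches eight, counting the proof/truth 
splits marked “idem”: 1 + 2 + 1 + 2 + 2 = 8 modes.)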



> At the lowest level even bacteria recognize (in the functional/operational 
> sense) a distinction between "me" and "everything else".  A little above 
> that, some that are motile also sense chemical gradients and can move toward 
> food.  So they distinguish "better else" from "worse else".  At a higher 
> level, animals and plants with sensors know more about their surroundings.  
> Animals know a certain amount of geometry and are aware of their place in the 
> world.  How close or far things are.  Some animals, mostly those with eyes, 
> employ foresight and planning in which they foresee outcomes for themselves.  
> They can think of themselves in relation to other animals.  More advanced 
> social animals are aware of their social status.  Humans, perhaps thru the 
> medium of language, have a theory of mind, i.e. they can think about what 
> other people think and attribute agency to them (and to other things) as part 
> of their planning.  The conscious part of all this awareness is essentially 
> that which is processed as language and image; ultimately only a small part.

All universal machines believing in enough induction axioms can reason as fully 
as is logically possible about themselves, and they all converge toward the same 
theology, as far as they remain arithmetically sound. The virtual body (third 
person self-reference) propositional logics are given by G1 and G1*, the soul 
(the one which is conscious) is given by S4Grz1, and the logic of immediate 
sensation (qualia) is given by Z1* (the true component of the logic of 
[]p & <>t & p).

Now I do think that many more animals have that level of self-consciousness, but 
they can hardly tell us, as they lack language. Of course here I am speculating.

Bruno



> 
> Brent
> 
> On 6/18/2021 11:46 AM, Jason Resch wrote:
>> In your opinion who has offered the best theory of consciousness to date, or 
>> who do you agree with most? Would you say you agree with them wholeheartedly 
>> or do you find points if disagreement?
>> 
>> I am seeing several related thoughts commonly expressed, but not sure which 
>> one or which combination is right.  For example:
>> 
>> Hofstadter/Marchal: self-reference is key
>> Tononi/Tegmark: information is key
>> Dennett/Chalmers: function is key
>> 
>> To me all seem potentially valid, and perhaps all three are needed in some 
>> combination. I'm curious to hear what other viewpoints exist or if there are 
>> other candidates for the "secret sauce" behind consciousness I might have 
>> missed.
>> 
>> Jason
>> 


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-07-04 Thread Bruno Marchal

> On 18 Jun 2021, at 20:46, Jason Resch  wrote:
> 
> In your opinion who has offered the best theory of consciousness to date, or 
> who do you agree with most? Would you say you agree with them wholeheartedly 
> or do you find points of disagreement?
> 
> I am seeing several related thoughts commonly expressed, but not sure which 
> one or which combination is right.  For example:
> 
> Hofstadter/Marchal: self-reference is key

Hofstadter is very good, including on Gödel, which is rare for a physicist (cf. 
Penrose!).

But Hofstadter still remains in the Aristotelian theology/metaphysics. He misses 
the fact that all computations are realised in arithmetic.

You can see the arithmetical reality as a combinatory algebra (using n * m = 
phi_n(m)).

If a function o is computable, and ô is its code, the standard model of 
arithmetic N satisfies 

∃r (T(ô, x, r) & U(r)), 

with T being Kleene’s predicate, and U the result-extracting function, which 
extracts the result from the code r of the computation.
See Davis, “Computability and Unsolvability”, chapter 4, for a purely 
arithmetical definition of T.
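
To make the existential quantifier concrete, here is a small toy sketch of my 
own (not Bruno’s formalism, and not Kleene’s actual arithmetical T), in Python, 
where T(e, x, r) is modelled as “the program with code e, run on input x, halts 
within r steps”, and U extracts the value it produced:

# Programs are modelled as Python generator functions: each yield is one
# computation step, and the generator's return value is the program's output.

def kleene_T(program, x, r):
    """Return (True, value) if program(x) halts within r steps, else (False, None)."""
    gen = program(x)
    try:
        for _ in range(r):
            next(gen)                  # advance the computation by one step
    except StopIteration as stop:
        return True, stop.value        # halted: U extracts the result
    return False, None

def successor(x):
    """A trivial 'program': take x visible steps, then return x + 1."""
    for i in range(x):
        yield i
    return x + 1

def run(program, x):
    """The existential search over r, i.e. the quantifier in the formula above."""
    r = 1
    while True:
        halted, value = kleene_T(program, x, r)
        if halted:
            return value
        r += 1

print(run(successor, 4))   # prints 5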

With this in mind, the burden of proof is in the hands of those who add some 
ontological commitment to elementary arithmetic. They have to abandon 
Mechanism (and thus Darwin & Co.), or explain how a Reality (be it a god or a 
universe) can make some computations more real than others for the universal 
machine emulated by those computations. But with Mechanism, that is impossible 
without adding something non-Turing-emulable to the processing of the mind.




> Tononi/Tegmark: information is key
> Dennett/Chalmers: function is key

With Mechanism, information is the key too, but “information” is like 
“infinite”: a very fuzzy, complex notion, made even more complicated by the 
discovery of a physical notion of information (quantum information). With 
Mechanism, anything physical (and thus quantum information) must be derived 
from the first-person plural appearance lived by the universal number in 
arithmetic. Then the mathematics of self-reference does exactly that, and 
indeed the observable enforces an arithmetical interpretation of quantum logic 
and physics. Mechanism (the simplest hypothesis in cognitive science, by 
default) is not yet refuted.
Here Tegmark has the correct mathematicalist position, but fails to take into 
account the laws of machine self-reference to derive physics.
Tononi, Chalmers and Dennett also remain trapped in the materialist framework, 
but we cannot have both Mechanism and Materialism together, as they are 
logically contradictory (up to some technical nuances I don’t want to bother 
people with here).



> 
> To me all seem potentially valid,

It would be valid, if it were made clear that to solve the mind-body problem 
(the consciousness-matter problem) we have to derive the physical laws from the 
statistics on all computations in arithmetic. 

This works, as the first evidence is that physical reality is well described by 
the many-worlds interpretation of elementary arithmetic (as seen from the 
universal number’s personal perspective, given by the intensional variants of 
Gödel’s provability predicate, which are a sort of logical (assertoric, true or 
false) equivalent of Kleene’s predicate).

Hofstadter and Dennett get very close to the correct theology in their book 
“The Mind’s I”, especially Dennett, where we can find the text in which he 
explicitly missed the first-person indeterminacy.

Bruno



> and perhaps all three are needed in some combination. I'm curious to hear 
> what other viewpoints exist or if there are other candidates for the "secret 
> sauce" behind consciousness I might have missed.
> 
> Jason
> 
> 


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-21 Thread John Clark
On Sun, Jun 20, 2021 at 6:51 PM Jason Resch  wrote:

> *Anything we can identify as having universal utility or describe as a
> universal goal we can use to predict the long term direction of technology,
> even if humans are no longer the drivers of it.*
>

Goals are always in a constant state of flux with no fixed hierarchy, I
don't think there is such a thing as a universal goal, not even an
immutable goal for self-preservation.


> *> Even a paperclip maximizer will have the meta goal of increasing its
> knowledge, during which time it may learn to escape its programming, just
> as the human brain may transcend its biological programming when it
> chooses to upload into a computer and ditch its genes.*
>

Thanks to our brain, humans long ago learned how to transcend their
biological programming; if they hadn't, they never would've invented the
condom.

*> If I demonstrate knowledge to you, by responding to my environment, or
> by telling you about my thoughts, etc., could I do any of those things
> without knowing the state of my environment or my mind?*
>

On my Mac I just asked Siri if she was happy, she said that she was and
added that she was always happy to talk to me and inquired if I was also
happy. Is Siri conscious? I don't know, maybe, but I'm far more interested
in figuring out just how intelligent she is.

*> Stathis mentions Chalmers's fading/dancing qualia as a reductio ad
> absurdum. Are you familiar with his argument? If so, do you think it
> succeeds?*
>

I think it demonstrates if X is conscious and Y is functionally equivalent
to X then it would be ridiculously improbable to argue that Y is not also
conscious, but no more ridiculously improbable then arguing that the only
way God could forgive humanity for eating an apple was to get humanity to
torture his son to death, and if you don't believe every word of that then
an all loving God will use all of his infinite power to torture you most
horribly for all of eternity.  Both ideas are improbable but not logically
impossible.

>
* > I would call your hypothesis that "intelligence implies consciousness"
> a theory that could be proved or disproved,*
>

I don't have a clue how that could ever be done even in theory, much less in
practice, and that's why I don't have much interest in consciousness.

*> AIXI is a good theory of universal and perfect intelligence. It's just
> not practical because it takes exponential time to compute. The tricks lie
> in finding shortcuts that give approximate results to AIXI but can be
> computed in reasonable time. (The inventor of AIXI now works at DeepMind.)
> Neural networks are known to be universal in terms of being able to learn
> any mapping function. There are probably discoveries to be made in terms of
> improving learning efficiency, but we already have systems that learn to
> play chess, poker, and go better than any human in less than a week, so
> maybe the only thing missing is massive computational resources.
> Researchers seem to have demonstrated this in their leap from GPT-2 to
> GPT-3. GPT-3 can write text that is nearly indistinguishable from text
> written by humans. It's even learned to write code and do math, despite not
> being trained to do so.*
>

I don't dispute any of that, but it all involves intelligence not
consciousness.

>> If one consciousness theory says you were conscious and a rival theory
>> says you were not there is no way to tell which one was right.
>>
>
> *>That's why we make theories, so we can test them*
>

When you test for anything, not just for consciousness, you must make an
observation. We can observe things like billiard balls and we can observe what
those billiard balls do, such as move with a certain speed and acceleration,
and we can observe the type of electromagnetic waves they reflect, that is,
their color; but if billiard balls have qualia we can't observe them, nor can
we observe anything else's qualia except for our own, and I don't see how that
fact will ever change.
John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread spudboy100 via Everything List
For sure Jason, and discovering conscious agents in the cosmos may take a 
rather significant research grant, me thinks.


-Original Message-
From: Jason Resch 
To: Everything List 
Sent: Sun, Jun 20, 2021 5:49 pm
Subject: Re: Which philosopher or neuro/AI scientist has the best theory of 
consciousness?



On Sat, Jun 19, 2021, 12:20 PM spudboy100 via Everything List 
 wrote:

I agree with Saibal on this and welcome his great explanation. Not to miss out 
on giving credit where credit is due, let me invoke Donald Hoffman as the 
chief proponent of conscious agents. Or, the best known:
http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf


Thanks for sharing, it was an interesting read. I thought his "interface" 
description of our experiences was insightful, and I liked his simplification 
of conscious agents. I'm not sure however that I agreed with his theorem 
that purports to prove inverted qualia. I'll have to read more on that.
Jason




-Original Message-
From: smitra 
To: everything-list@googlegroups.com
Sent: Sat, Jun 19, 2021 7:17 am
Subject: Re: Which philosopher or neuro/AI scientist has the best theory of 
consciousness?

Information is the key.  Conscious agents are defined by precisely that 
information that specifies the content of their consciousness. This 
means that a conscious agent can never be precisely located in some 
physical object, because the information that describes the conscious 
experience will always be less detailed than the information present in 
the exact physical description of an object such as a brain. There is 
always going to be a very large self-localization ambiguity due to the 
large number of different possible brain states that would generate 
exactly the same conscious experience. So, given whatever conscious 
experience the agent has, the agent could be in a very large number of 
physically distinct states.

The simpler the brain and the algorithm implemented by the brain, the 
larger this self-localization ambiguity becomes because smaller 
algorithms contain less detailed information. Our conscious experiences 
localizes us very precisely on an Earth-like planet in a solar system 
that is very similar to the one we think we live in. But the fly walking 
on the wall of the room I'm in right now may have some conscious 
experience that is exactly identical to that of another fly walking on 
the wall of another house in another country 600 years ago or on some 
rock in a cave 35 million years ago.

The conscious experience of the fly I see on the wall is therefore not 
located in the particular fly I'm observing. This is i.m.o. the key 
thing you get from identifying consciousness with information, it makes 
the multiverse an essential ingredient of consciousness. This resolves 
paradoxes you get in thought experiments where you consider simulating a 
brain in a virtual world and then argue that since the simulation is 
deterministic, you could replace the actual computer doing the 
computations by a device playing a recording of the physical brain 
states. This argument breaks down if you take into account the 
self-localization ambiguity and consider that this multiverse aspect is 
an essential part of consciousness due to counterfactuals necessary to 
define the algorithm being realized, which is impossible in a 
deterministic single-world setting.

Saibal


On 18-06-2021 20:46, Jason Resch wrote:
> In your opinion who has offered the best theory of consciousness to
> date, or who do you agree with most? Would you say you agree with them
> wholeheartedly or do you find points of disagreement?
> 
> I am seeing several related thoughts commonly expressed, but not sure
> which one or which combination is right.  For example:
> 
> Hofstadter/Marchal: self-reference is key
> Tononi/Tegmark: information is key
> Dennett/Chalmers: function is key
> 
> To me all seem potentially valid, and perhaps all three are needed in
> some combination. I'm curious to hear what other viewpoints exist or
> if there are other candidates for the "secret sauce" behind
> consciousness I might have missed.
> 
> Jason
> 

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread 'Brent Meeker' via Everything List




On 6/20/2021 3:51 PM, Jason Resch wrote:
It's not impossible if there are universal goals. Even a paperclip 
maximizer will have the meta goal of increasing its knowledge, during 
which time it may learn to escape its programming, just as the human 
brain may transcend its biological programming when it chooses to 
upload into a computer and ditch its genes.


But it's possible that there are no universal goals.   There are 
certainly humans who do not value increasing knowledge.  There are also 
humans who do not value sex or reproduction, so they are in effect 
defective products of evolution.  If a human decided to ditch its genes 
it would have to make that decision based on satisfying some values 
which it already held.


Brent
Not necessity, not desire - no, the love of power is the demon of men. 
Let them have everything - health, food, a place to live, entertainment 
- they are and remain unhappy and low-spirited: for the demon waits and 
waits and will be satisfied.

   --- Friedrich Nietzsche



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread 'Brent Meeker' via Everything List



-Original Message-
From: smitra 
To: everything-list@googlegroups.com
Sent: Sat, Jun 19, 2021 7:17 am
Subject: Re: Which philosopher or neuro/AI scientist has the best theory 
of consciousness?

This resolves
paradoxes you get in thought experiments where you consider simulating a
brain in a virtual world and then argue that since the simulation is
deterministic, you could replace the actual computer doing the
computations by a device playing a recording of the physical brain
states. This argument breaks down if you take into account the
self-localization ambiguity and consider that this multiverse aspect is
an essential part of consciousness due to counterfactuals necessary to
define the algorithm being realized, which is impossible in a
deterministic single-world setting.


But it's not a paradox in a probabilistic single-world.

Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread Jason Resch
Thanks, I had heard the phenomenon described before. Poincare gives
probably the best description of it that I've seen.

Jason

On Sat, Jun 19, 2021, 4:47 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> Sorry.  I thought Poincare' effect was a common term, but apparently not.
> Here's his description starting about half way thru this essay
>
> http://vigeland.caltech.edu/ist4/lectures/Poincare%20Reflections.pdf
>
> Brent
>
> On 6/19/2021 7:52 AM, Jason Resch wrote:
>
>
>
> On Fri, Jun 18, 2021, 8:59 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/18/2021 5:16 PM, Jason Resch wrote:
>> >
>> > - Is consciousness inherent to any intelligent process?
>> >
>> > I think the answer is yes, what do you think?
>> >
>> Not just any intelligent process.  But any at human (or even dog)
>> level.  I think human level consciousness depends on language or similar
>> representation in which the entity thinks about decisions by internally
>> modelling situations including itself.  Think of how much intelligence
>> humans bring to bear unconsciously.  Think of the Poincare' effect.
>>
>
>
> Thanks Brent, I appreciate your answers. But I did not follow what you say
> here regarding the Poincare effect. I did a search on it and nothing stood
> out as related to the brain.
>
> Jason


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread Jason Resch
is conscious, and another entity is
>> organizationally and functionally equivalent, preserving all the parts and
>> relationships among its parts, then that second entity must be equivalently
>> conscious to the first.*
>>
>
> Personally I think that principle sounds pretty reasonable, but I can't
> prove it's true and never will be able to.
>

Stathis mentions Chalmers's fading/dancing qualia as a reductio ad
absurdum. Are you familiar with his argument? If so, do you think it
succeeds?



>
>> >> I know I can suffer, can you?
>>
>>
>> *>I can tell you that I can.*
>>
>
> So now I know you could generate the ASCII sequence "*I can tell you that
> I can*", but that doesn't answer my question, can you suffer? I don't
> even know if you and I mean the same thing by the word "suffer".
>
>
>> *> You could verify via functional brain scans that I wasn't
>> preprogrammed like an Eliza bot to say I can. You could trace the neural
>> firings in my brain to uncover the origin of my belief that I can suffer,
>> and I could do the same for you.*
>>
>
> No I cannot. Theoretically I could trace the neural firings in your brain
> and figure out how they stimulated the muscles in your hand to type out "*I
> can tell you that I can*"  but that's all I can do. I can't see suffering
> or unhappiness on an MRI scan, although I may be able to trace the nerve
> impulses that stimulate your tear glands to become more active.
>

I think with sufficient analysis you could find functional modules that
have capacities for all the properties you associate with suffering:
avoidance behaviors, stress, recruiting more parts of the brain/resources
to find ways to escape the suffering, etc.


> *> Could a zombie write a book like Chalmers's "The Consciousness Mind"?*
>>
>
> I don't think so because it takes intelligence to write a book and my
> axiom is that consciousness is the inevitable byproduct of intelligence. I
> can give reasons why I think the axiom is reasonable and probably true
> but it falls short of a proof, that's why it's an axiom.
>

Nothing is ever proved in science or in math. But setting something as an
axiom when it could be a theorem should be avoided when possible. I would
call your hypothesis that "intelligence implies consciousness" a theory
that could be proved or disproved, but it might require a tighter
definition of what is meant by intelligence and consciousness.

In the "agent-environment interaction" definition of intelligence,
perceptions are a requirement for intelligent behavior.


>
>>
>> *> Some have proposed writing philosophical texts on the philosophy of
>> mind as a kind of super-Turing test for establishing consciousness.*
>>
>
> I think you could do much better than that because it only takes a minimal
> amount of intelligence to dream up a new consciousness theory, they're a
> dime a dozen, any one of them is as good, or as bad, as another. Good
> intelligence theories on the other hand are hard as hell to come up with
> but if you do find one you're likely to become the world's first
> trillionaire.
>


AIXI is a good theory of universal and perfect intelligence. It's just not
practical because it takes exponential time to compute. The tricks lie in
finding shortcuts that give approximate results to AIXI but can be computed
in reasonable time. (The inventor of AIXI now works at DeepMind.)

Neural networks are known to be universal in terms of being able to learn
any mapping function. There are probably discoveries to be made in terms of
improving learning efficiency, but we already have systems that learn to
play chess, poker, and go better than any human in less than a week, so
maybe the only thing missing is massive computational resources.
Researchers seem to have demonstrated this in their leap from GPT-2 to
GPT-3. GPT-3 can write text that is nearly indistinguishable from text
written by humans. It's even learned to write code and do math, despite not
being trained to do so.
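
As a side note on the universal-approximation claim above, here is a minimal
sketch of my own (illustrative only; the width, learning rate and step count
are arbitrary, untuned choices): a single hidden layer of tanh units, trained
by plain gradient descent in numpy, drives down the error of a fit to sin(x)
on [-pi, pi].

import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # inputs
Y = np.sin(X)                                        # target function

H = 20                                   # hidden units
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(20000):
    hidden = np.tanh(X @ W1 + b1)        # forward pass
    pred = hidden @ W2 + b2
    err = pred - Y                       # residuals; gradients below average over samples
    grad_W2 = hidden.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    grad_hidden = (err @ W2.T) * (1 - hidden**2)
    grad_W1 = X.T @ grad_hidden / len(X)
    grad_b1 = grad_hidden.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final training mean squared error:", float(((pred - Y) ** 2).mean()))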


> *Wouldn't you prefer the anesthetic that knocks you out vs. the one that
>> only blocks memory formation? Wouldn't a theory of consciousness be
>> valuable here to establish which is which?*
>>
>
> Such a theory would be utterly useless because there would be no way to
> tell if it was correct.
>

Why not? This appears to be an unsupported assumption.

If one consciousness theory says you were conscious and a rival theory says
> you were not there is no way to tell which one was right.
>

That's why we make theories, so we can test them where they make different
predictions, with the hope of ruling one or more incorrect theories out.
Not all predictions of a theory will be testable.

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread Jason Resch
On Sat, Jun 19, 2021, 12:20 PM spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> I agree with Saibal on this and welcome his great explanation. Not to miss
> out on giving credit where credit is due, let me invoke Donald Hoffman
> as the chief proponent of conscious agents. Or, the best known.
> http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf
>


Thanks for sharing, it was an interesting read. I thought his "interface"
description of our experiences was insightful, and I liked his
simplification of conscious agents. I'm not sure however that I agreed
with his theorem that purports to prove inverted qualia. I'll have to read
more on that.

Jason


>
>
> -Original Message-
> From: smitra 
> To: everything-list@googlegroups.com
> Sent: Sat, Jun 19, 2021 7:17 am
> Subject: Re: Which philosopher or neuro/AI scientist has the best theory
> of consciousness?
>
> Information is the key.  Conscious agents are defined by precisely that
> information that specifies the content of their consciousness. This
> means that a conscious agent can never be precisely located in some
> physical object, because the information that describes the conscious
> experience will always be less detailed than the information present in
> the exact physical description of an object such as a brain. There is
> always going to be a very large self-localization ambiguity due to the
> large number of different possible brain states that would generate
> exactly the same conscious experience. So, given whatever conscious
> experience the agent has, the agent could be in a very large number of
> physically distinct states.
>
> The simpler the brain and the algorithm implemented by the brain, the
> larger this self-localization ambiguity becomes because smaller
> algorithms contain less detailed information. Our conscious experiences
> localizes us very precisely on an Earth-like planet in a solar system
> that is very similar to the one we think we live in. But the fly walking
> on the wall of the room I'm in right now may have some conscious
> experience that is exactly identical to that of another fly walking on
> the wall of another house in another country 600 years ago or on some
> rock in a cave 35 million years ago.
>
> The conscious experience of the fly I see on the wall is therefore not
> located in the particular fly I'm observing. This is i.m.o. the key
> thing you get from identifying consciousness with information, it makes
> the multiverse an essential ingredient of consciousness. This resolves
> paradoxes you get in thought experiments where you consider simulating a
> brain in a virtual world and then argue that since the simulation is
> deterministic, you could replace the actual computer doing the
> computations by a device playing a recording of the physical brain
> states. This argument breaks down if you take into account the
> self-localization ambiguity and consider that this multiverse aspect is
> an essential part of consciousness due to counterfactuals necessary to
> define the algorithm being realized, which is impossible in a
> deterministic single-world setting.
>
> Saibal
>
>
> On 18-06-2021 20:46, Jason Resch wrote:
> > In your opinion who has offered the best theory of consciousness to
> > date, or who do you agree with most? Would you say you agree with them
> > wholeheartedly or do you find points of disagreement?
> >
> > I am seeing several related thoughts commonly expressed, but not sure
> > which one or which combination is right.  For example:
> >
> > Hofstadter/Marchal: self-reference is key
> > Tononi/Tegmark: information is key
> > Dennett/Chalmers: function is key
> >
> > To me all seem potentially valid, and perhaps all three are needed in
> > some combination. I'm curious to hear what other viewpoints exist or
> > if there are other candidates for the "secret sauce" behind
> > consciousness I might have missed.
> >
> > Jason
> >

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread John Clark
On Sun, Jun 20, 2021 at 2:28 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

 >> If so then consciousness is the inevitable byproduct of intelligent
>> behavior.
>
>
> * > Yes, I agree with that.  But I don't think either intelligence or
> consciousness are all-or-nothing attributes.  I think consciousness occurs
> at different levels which correspond to its different uses as a tool of
> intelligence.*
>

We agree on that also.
John K Clark    See what's on my new list at  Extropolis



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread 'Brent Meeker' via Everything List



On 6/20/2021 2:43 AM, John Clark wrote:
On Sat, Jun 19, 2021 at 8:46 PM 'Brent Meeker' via Everything List 
> wrote:


/> Certain values are built in by evolution, values related to
reproducing mostly/


Humans don't have a fixed hierarchical goal structure, our values are 
in a constant state of flux, even the value we place on 
self-preservation.  Any intelligent being would have to be the same 
way because it may turn out that the goal we want is impossible to 
achieve so a new goal will have to be found, and Alan Turing proved 
that in general there is no way to tell beforehand if a goal is 
achievable or not.


Yes, part of being intelligent must be to shuffle long term and short 
term goals.  But that doesn't affect my point that intelligence requires 
that there be some fundamental values that are not merely instrumental 
and are categorically different from intelligence.


Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread 'Brent Meeker' via Everything List



On 6/20/2021 2:26 AM, John Clark wrote:
On Sat, Jun 19, 2021 at 7:17 PM 'Brent Meeker' via Everything List 
> wrote:


/> This depends on how we define consciousness.  If it means
imagining and using simulations in which you represent yourself in
order to plan your actions then maybe natural selection can "see"
it.  People who can't or don't plan by imagining themselves in
various prospective scenarios and who don't have a theory of mind
regarding other people are probably less successful at reproducing.
/


 If so then consciousness is the inevitable byproduct of intelligent 
behavior.


Yes, I agree with that.  But I don't think either intelligence or 
consciousness are all-or-nothing attributes.  I think consciousness 
occurs at different levels which correspond to its different uses as a 
tool of intelligence.


Brent



John K Clark    See what's on my new list at Extropolis 









Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread John Clark
On Sat, Jun 19, 2021 at 8:46 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

*> Certain values are built in by evolution, values related to reproducing
> mostly*


Humans don't have a fixed hierarchical goal structure; our values are in a
constant state of flux, even the value we place on self-preservation.  Any
intelligent being would have to be the same way because it may turn out
that the goal we want is impossible to achieve so a new goal will have to
be found, and Alan Turing proved that in general there is no way to tell
beforehand if a goal is achievable or not.
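
For readers who want the reasoning behind that last claim, here is a minimal
sketch of my own (hypothetical names, in Python) of the diagonal argument in
Turing's halting theorem, which is what rules out a general test:

# Suppose, for contradiction, that a total decider halts(program, inp) existed,
# always answering whether program(inp) eventually stops.

def halts(program, inp):
    raise NotImplementedError("no such total decider can exist")

def troublemaker(program):
    # Do the opposite of whatever the decider predicts about program(program).
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    return "done"            # predicted to loop, so halt immediately

# Asking halts(troublemaker, troublemaker) is contradictory either way:
# "it halts" makes troublemaker loop forever; "it loops" makes it halt.
# So no general halting decider exists, and since "this goal is achievable"
# can encode "this program halts", no general test of goal achievability
# exists either.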

John K Clark    See what's on my new list at  Extropolis



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-20 Thread John Clark
On Sat, Jun 19, 2021 at 7:17 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:


> *> This depends on how we define consciousness.  If it means imagining and
> using simulations in which you represent yourself in order to plan your
> actions then maybe natural selection can "see" it.  People who can't or
> don't plan by imagining themselves in various prospective scenarios and who
> don't have a theory of mind regarding other people are probably less
> successful at reproducing.*
>


 If so then consciousness is the inevitable byproduct of intelligent
behavior.

John K Clark    See what's on my new list at  Extropolis





Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List



On 6/19/2021 4:12 PM, Stathis Papaioannou wrote:


/> For example, we could rule out many theories and narrow
down on those that accept "organizational invariance" as
Chalmers defines it. This is the principle that if one entity
is conscious, and another entity is organizationally and
functionally equivalent, preserving all the parts and
relationships among its parts, then that second entity must be
equivalently conscious to the first./


Personally I think that principle sounds pretty reasonable, but I
can't prove it's true and never will be able to.


Chalmers presents a proof of this in the form of a reductio ad absurdum.


But that's not very helpful since it leaves open that many other systems 
that are not functionally and organizationally equivalent may also be 
conscious.  Computers are not functionally and organizationally 
equivalent to people.  In fact I can't think of anything that is.


Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List



On 6/19/2021 3:20 PM, John Clark wrote:
On Sat, Jun 19, 2021 at 5:57 PM 'Brent Meeker' via Everything List 
> wrote:


> /What I think is missing in the JKC's idea that intelligence is
interesting and understandable but consciousness isn't, is
that he leaves out values.  Intelligence is defined in terms of
achieving goals. /[...] /the part we commonly call 'wisdom' (when
it works out) is how conflicting values are resolved./


But there is nothing unique in the human ability to do that, computers 
do that sort of thing all the time. Often there are two values that 
affect the rate of a process, one increases the rate and the other 
decreases it, the relationship between the two can be quite complex so 
it's not at all obvious which will predominate and what the 
ultimate fate of the process will be, but a computer can calculate it.


I didn't say it was something only a human does.  I just pointed out 
that it is more (or less) than just intelligence.  It's like consulting 
an oracle.  The oracle may be very intelligent and able to tell you how 
to accomplish anything, but be of no help at all in making a decision if 
you don't know what value to place on outcomes.  Certain values are 
built in by evolution, values related to reproducing mostly.  There's 
nothing "intelligent" about having them, but intelligence has no 
function without values.


Brent

John K Clark    See what's on my new list at Extropolis 







Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List



On 6/19/2021 12:48 PM, John Clark wrote:
I know Darwinian Evolution produced me and I know for a fact that I am 
conscious, but Natural Selection can't see consciousness any better 
than we can directly see consciousness in other people,


This depends on how we define consciousness.  If it means imagining and 
using simulations in which you represent yourself in order to plan your 
actions then maybe natural selection can "see" it.  People who can't or 
don't plan by imagining themselves in various prospective scenarios and 
who don't have a theory of mind regarding other people are probably less 
successful at reproducing.


Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List



On 6/19/2021 12:48 PM, John Clark wrote:


Of my own free will, I consciously decide to go to a restaurant.
/Why? /
Because I want to.
/Why ? /
Because I want to eat.
/Why?/
Because I'm hungry?
/Why ?/
Because lack of food triggered nerve impulses in my stomach; my brain
interpreted these signals as pain, and I can only stand so much before I try to
stop it.
/Why?/
Because I don't like pain.
/Why? /
Because that's the way my brain is constructed.
/Why?/
Because my body and the hardware of my brain were made from the information
in my genetic code (let's see, 6 billion base pairs, 2 bits per base pair,
8 bits per byte: that comes out to about 1.5 gig), the programming of
my brain came from the environment, add a little quantum randomness
perhaps, and of my own free will I consciously decide to go to a restaurant.
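
(Spelling out that arithmetic as a quick check: 6×10^9 base pairs × 2 bits per
base pair = 1.2×10^10 bits, divided by 8 bits per byte = 1.5×10^9 bytes, i.e.
about 1.5 GB, as stated.)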


And if my ancestors had not evolved this programming they would have 
died of starvation and I wouldn't exist.


Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread Stathis Papaioannou
On Sun, 20 Jun 2021 at 05:48, John Clark  wrote:

> On Sat, Jun 19, 2021 at 11:36 AM Jason Resch  wrote:
>
> >> I'm enormously impressed with Deepmind and I'm an optimist regarding
>>> AI, but I'm not quite that optimistic.
>>>
>>
>> *>Are you familiar with their Agent 57? -- a single algorithm that
>> mastered all 57 Atari games at a super human level, with no outside
>> direction, no specification of the rules, and whose only input was the "TV
>> screen" of the game.*
>>
>
> As I've said, that is very impressive, but even more impressive would be
> winning a Nobel prize, or even just being able to diagnose that the problem
> with your old car is a broken fan belt, and be able to remove the bad
> belt and install a good one, but we're not quite there yet.
>
> *> Also, because of chaos, predicting the future to any degree of accuracy
>> requires exponentially more information about the system for each finite
>> amount of additional time to simulate, and this does not even factor in
>> quantum uncertainty,*
>>
>
> And yet many times humans can make predictions that turn out to be better
> than random guessing, and a computer should be able to do at least as good,
> and I'm certain they will eventually.
>
> >  Being unable to predict the future isn't a good definition of the
>> singularity, because we already can't.
>>
>
> Not true, often we can make very good predictions, but that will be
> impossible during the singularity
>
>  > *We are getting very close to that point. *
>>
>
> Maybe, but even if the singularity won't happen for 1000 years 999 years
> from now it will still seem like a long way off because more progress will
> be made in that last year than the previous 999 combined. It's in the
> nature of exponential growth and that's why predictions are virtually
> impossible during that time, the tiniest uncertainty in initial condition
> gets magnified into a huge difference in final outcome.
>
> *> There may be valid logical arguments that disprove the consistency of
>> zombies. For example, can something "know without knowing?" It seems not.*
>>
>
> Even if that's true I don't see how that would help me figure out if
> you're a zombie or not.
>
>
>> > So how does a zombie "know" where to place it's hand to catch a ball,
>> if it doesn't "knowing" what it sees?
>>
>
> If catching a ball is your criteria for consciousness then computers are 
> already
> conscious, and you don't even need a supercomputer, you can make one in
> your own home for a few hundred dollars and some spare parts. Well maybe
> so, I always maintained that consciousness is easy but intelligence is
> hard.
>
> Moving hoop won't let you miss
> <https://www.youtube.com/watch?v=myO8fxhDRW0>
>
> *> For example, we could rule out many theories and narrow down on those
>> that accept "organizational invariance" as Chalmers defines it. This is the
>> principle that if one entity is conscious, and another entity is
>> organizationally and functionally equivalent, preserving all the parts and
>> relationships among its parts, then that second entity must be equivalently
>> conscious to the first.*
>>
>
> Personally I think that principle sounds pretty reasonable, but I can't
> prove it's true and never will be able to.
>

Chalmers presents a proof of this in the form of a reductio ad absurdum.

>> I know I can suffer, can you?
>>
>>
>> *>I can tell you that I can.*
>>
>
> So now I know you could generate the ASCII sequence "*I can tell you that
> I can*", but that doesn't answer my question, can you suffer? I don't
> even know if you and I mean the same thing by the word "suffer".
>
>
>> *> You could verify via functional brain scans that I wasn't
>> preprogrammed like an Eliza bot to say I can. You could trace the neural
>> firings in my brain to uncover the origin of my belief that I can suffer,
>> and I could do the same for you.*
>>
>
> No I cannot. Theoretically I could trace the neural firings in your brain
> and figure out how they stimulated the muscles in your hand to type out "*I
> can tell you that I can*"  but that's all I can do. I can't see suffering
> or unhappiness on an MRI scan, although I may be able to trace the nerve
> impulses that stimulate your tear glands to become more active.
>
> *> Could a zombie write a book like Chalmers's "The Consciousness Mind"?*
>>
>
> I don't think so because it takes intelligence to write a book and my
> axiom is that consciousness is the inevitable byproduct of intelligence.

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread John Clark
On Sat, Jun 19, 2021 at 5:57 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> *What I think is missing in the JKC's idea that intelligence is interesting
> and understandable but consciousness isn't, is that he leaves out
> values.  Intelligence is defined in terms of achieving goals.* [...] *the part
> we commonly call 'wisdom' when it works out) is how conflicting values are
> resolved.*


But there is nothing unique in the human ability to do that; computers do
that sort of thing all the time. Often there are two values that affect the
rate of a process: one increases the rate and the other decreases it. The
relationship between the two can be quite complex, so it's not at all
obvious which will predominate and what the ultimate fate of the process
will be, but a computer can calculate it.
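
For instance, a minimal Python sketch of that kind of calculation (a toy model
with made-up growth and decay terms, nothing specific to any real process):

def simulate(x0, growth=0.30, decay=0.02, dt=0.01, steps=200_000):
    # one influence raises the rate, the other lowers it; integrate and see
    x = x0
    for _ in range(steps):
        x += (growth * x - decay * x * x) * dt
    return x

for start in (0.1, 1.0, 50.0):
    print(start, "->", round(simulate(start), 3))   # each start settles near growth/decay = 15.0

With these particular (assumed) parameters every starting point settles at the
same equilibrium; with other choices the outcome differs, and the point is
simply that the machine can tell us which it will be.
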
John K Clark    See what's on my new list at Extropolis




Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List



On 6/19/2021 8:54 AM, Jason Resch wrote:
On Sat, Jun 19, 2021, 6:17 AM smitra  wrote:


Information is the key.  Conscious agents are defined by precisely
that
information that specifies the content of their consciousness. 



While I think this is true, I don't know of a consciousness theory 
that is explicit in terms of how information informs a system to 
create a conscious system. Bits sitting on a still hard drive platter 
are not associated with consciousness, are they? Facts sitting idly in 
one's long term memory are not the content of anyone's consciousness, 
are they?


For information to carry meaning, I think requires some system to be 
informed by that information.


It also requires values and the potential for action.  Information has 
to be about something, something that makes a difference to the 
conscious organism.


Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List




On 6/19/2021 8:35 AM, Jason Resch wrote:
You appear to operate according to a "mysterian" view of 
consciousness, which is that we cannot ever know. Several philosophers 
of mind have expressed this, such as Thomas Nagel I believe.


I have some sympathy with this view, but I ask "cannot know what?". What 
is it you think there is to know?  If you could look at a brain and from 
that predict how the person with that brain would behave...isn't that 
the same as what we know about gravity and elementary particles?  We 
don't know the Ding an sich, but so what?


What I think is missing in JKC's idea that intelligence is 
interesting and understandable but consciousness isn't, is that he 
leaves out values.  Intelligence is defined in terms of achieving goals.  
It's instrumental.  But there's another dimension to thought and 
behavior (not necessarily conscious) which provides the 
motivation/goals/values for intelligence and part of intelligence (the 
part we commonly call 'wisdom' when it works out) is how conflicting 
values are resolved.


Brent
Reason is, and ought only to be the slave of the passions, and can never 
pretend to any other office than to serve and obey them.

    --- David Hume



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List
Sorry.  I thought the Poincaré effect was a common term, but apparently 
not.  Here's his description starting about halfway through this essay:


http://vigeland.caltech.edu/ist4/lectures/Poincare%20Reflections.pdf

Brent

On 6/19/2021 7:52 AM, Jason Resch wrote:



On Fri, Jun 18, 2021, 8:59 PM 'Brent Meeker' via Everything List 
> wrote:




On 6/18/2021 5:16 PM, Jason Resch wrote:
>
> - Is consciousness inherent to any intelligent process?
>
> I think the answer is yes, what do you think?
>
Not just any intelligent process, but any at human (or even dog)
level.  I think human-level consciousness depends on language or a
similar representation in which the entity thinks about decisions by
internally modelling situations, including itself.  Think of how much
intelligence humans bring to bear unconsciously.  Think of the Poincaré
effect.



Thanks Brent, I appreciate your answers. But I did not follow what you 
say here regarding the Poincare effect. I did a search on it and 
nothing stood out as related to the brain.


Jason


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread John Clark
On Sat, Jun 19, 2021 at 11:36 AM Jason Resch  wrote:

>> I'm enormously impressed with Deepmind and I'm an optimist regarding AI,
>> but I'm not quite that optimistic.
>>
>
> *>Are you familiar with their Agent 57? -- a single algorithm that
> mastered all 57 Atari games at a super human level, with no outside
> direction, no specification of the rules, and whose only input was the "TV
> screen" of the game.*
>

As I've said, that is very impressive, but even more impressive would be
winning a Nobel prize, or even just being able to diagnose that the problem
with your old car is a broken fan belt, and be able to remove the bad belt
and install a good one, but we're not quite there yet.

*> Also, because of chaos, predicting the future to any degree of accuracy
> requires exponentially more information about the system for each finite
> amount of additional time to simulate, and this does not even factor in
> quantum uncertainty,*
>

And yet many times humans can make predictions that turn out to be better
than random guessing, and a computer should be able to do at least as well,
and I'm certain they will eventually.

>  Being unable to predict the future isn't a good definition of the
> singularity, because we already can't.
>

Not true, often we can make very good predictions, but that will be
impossible during the singularity.

 > *We are getting very close to that point. *
>

Maybe, but even if the singularity won't happen for 1000 years, 999 years
from now it will still seem like a long way off because more progress will
be made in that last year than in the previous 999 combined. It's in the
nature of exponential growth, and that's why predictions are virtually
impossible during that time: the tiniest uncertainty in initial conditions
gets magnified into a huge difference in final outcome.
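
To make that concrete, a few lines of Python (purely illustrative, assuming
capability simply doubles every year; any yearly growth factor of two or more
gives the same verdict):

capability = [2 ** year for year in range(1001)]        # assumed: doubling every year
last_year_gain = capability[1000] - capability[999]
previous_999_years_gain = capability[999] - capability[0]
print(last_year_gain > previous_999_years_gain)         # True: 2**999 exceeds 2**999 - 1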

*> There may be valid logical arguments that disprove the consistency of
> zombies. For example, can something "know without knowing?" It seems not.*
>

Even if that's true I don't see how that would help me figure out if you're
a zombie or not.


> > So how does a zombie "know" where to place its hand to catch a ball,
> if it doesn't "know" what it sees?
>

If catching a ball is your criterion for consciousness then computers are
already conscious, and you don't even need a supercomputer, you can make
one in your own home for a few hundred dollars and some spare parts. Well
maybe so, I have always maintained that consciousness is easy but
intelligence is hard.

Moving hoop won't let you miss
<https://www.youtube.com/watch?v=myO8fxhDRW0>

*> For example, we could rule out many theories and narrow down on those
> that accept "organizational invariance" as Chalmers defines it. This is the
> principle that if one entity is conscious, and another entity is
> organizationally and functionally equivalent, preserving all the parts and
> relationships among its parts, then that second entity must be equivalently
> conscious to the first.*
>

Personally I think that principle sounds pretty reasonable, but I can't
prove it's true and never will be able to.


> >> I know I can suffer, can you?
>
>
> *>I can tell you that I can.*
>

So now I know you could generate the ASCII sequence "*I can tell you that I
can*", but that doesn't answer my question, can you suffer? I don't even
know if you and I mean the same thing by the word "suffer".


> *> You could verify via functional brain scans that I wasn't preprogrammed
> like an Eliza bot to say I can. You could trace the neural firings in my
> brain to uncover the origin of my belief that I can suffer, and I could do
> the same for you.*
>

No I cannot. Theoretically I could trace the neural firings in your brain
and figure out how they stimulated the muscles in your hand to type out "*I
can tell you that I can*"  but that's all I can do. I can't see suffering
or unhappiness on an MRI scan, although I may be able to trace the nerve
impulses that stimulate your tear glands to become more active.

*> Could a zombie write a book like Chalmers's "The Conscious Mind"?*
>

I don't think so because it takes intelligence to write a book and my axiom
is that consciousness is the inevitable byproduct of intelligence. I can
give reasons why I think the axiom is reasonable and probably true but it
falls short of a proof, that's why it's an axiom.


>
> *> Some have proposed writing philosophical texts on the philosophy of
> mind as a kind of super-Turing test for establishing consciousness.*
>

I think you could do much better than that because it only takes a minimal
amount of intelligence to dream up a new consciousness theory; they're a
dime a dozen, and any one of them is as good, or as bad, as another. Good
intel

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread spudboy100 via Everything List
I agree with Saibal on this and welcome his great explanation. Not to miss out 
on giving credit where credit is due, let me invoke Donald Hoffman as the 
chief proponent of conscious agents. Or, the best known:
http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf


-Original Message-
From: smitra 
To: everything-list@googlegroups.com
Sent: Sat, Jun 19, 2021 7:17 am
Subject: Re: Which philosopher or neuro/AI scientist has the best theory of 
consciousness?

Information is the key.  Conscious agents are defined by precisely that 
information that specifies the content of their consciousness. This 
means that a conscious agent can never be precisely located in some 
physical object, because the information that describes the conscious 
experience will always be less detailed than the information present in 
the exact physical description of an object such as a brain. There is 
always going to be a very large self-localization ambiguity due to the 
large number of different possible brain states that would generate 
exactly the same conscious experience. So, given whatever conscious 
experience the agent has, the agent could be in a very large number of 
physically distinct states.

The simpler the brain and the algorithm implemented by the brain, the 
larger this self-localization ambiguity becomes because smaller 
algorithms contain less detailed information. Our conscious experience 
localizes us very precisely on an Earth-like planet in a solar system 
that is very similar to the one we think we live in. But the fly walking 
on the wall of the room I'm in right now may have some conscious 
experience that is exactly identical to that of another fly walking on 
the wall of another house in another country 600 years ago or on some 
rock in a cave 35 million years ago.

The conscious experience of the fly I see on the wall is therefore not 
located in the particular fly I'm observing. This is i.m.o. the key 
thing you get from identifying consciousness with information, it makes 
the multiverse an essential ingredient of consciousness. This resolves 
paradoxes you get in thought experiments where you consider simulating a 
brain in a virtual world and then argue that since the simulation is 
deterministic, you could replace the actual computer doing the 
computations by a device playing a recording of the physical brain 
states. This argument breaks down if you take into account the 
self-localization ambiguity and consider that this multiverse aspect is 
an essential part of consciousness due to counterfactuals necessary to 
define the algorithm being realized, which is impossible in a 
deterministic single-world setting.

Saibal


On 18-06-2021 20:46, Jason Resch wrote:
> In your opinion who has offered the best theory of consciousness to
> date, or who do you agree with most? Would you say you agree with them
> wholeheartedly or do you find points of disagreement?
> 
> I am seeing several related thoughts commonly expressed, but not sure
> which one or which combination is right.  For example:
> 
> Hofstadter/Marchal: self-reference is key
> Tononi/Tegmark: information is key
> Dennett/Chalmers: function is key
> 
> To me all seem potentially valid, and perhaps all three are needed in
> some combination. I'm curious to hear what other viewpoints exist or
> if there are other candidates for the "secret sauce" behind
> consciousness I might have missed.
> 
> Jason
> 


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread 'Brent Meeker' via Everything List




On 6/19/2021 4:17 AM, smitra wrote:
Information is the key.  Conscious agents are defined by precisely 
that information that specifies the content of their consciousness. 
This means that a conscious agent can never be precisely located in 
some physical object, because the information that describes the 
conscious experience will always be less detailed than the information 
present in the exact physical description of an object such as a brain. 


But that doesn't imply that the content is insufficient to pick out a 
specific brain.  My house can be specified by a street address, even 
though that is far less information than is required to describe my 
house.  That it is possible for different houses to have been at this 
address doesn't change the fact that there is only this one.


Brent

There is always going to be a very large self-localization ambiguity 
due to the large number of different possible brain states that would 
generate exactly the same conscious experience. So, given whatever 
conscious experience the agent has, the agent could be in a very large 
number of physically distinct states.


The simpler the brain and the algorithm implemented by the brain, the 
larger this self-localization ambiguity becomes because smaller 
algorithms contain less detailed information. Our conscious 
experience localizes us very precisely on an Earth-like planet in a 
solar system that is very similar to the one we think we live in. But 
the fly walking on the wall of the room I'm in right now may have some 
conscious experience that is exactly identical to that of another fly 
walking on the wall of another house in another country 600 years ago 
or on some rock in a cave 35 million years ago.


The conscious experience of the fly I see on the wall is therefore not 
located in the particular fly I'm observing. This is i.m.o. the key 
thing you get from identifying consciousness with information, it 
makes the multiverse an essential ingredient of consciousness. This 
resolves paradoxes you get in thought experiments where you consider 
simulating a brain in a virtual world and then argue that since the 
simulation is deterministic, you could replace the actual computer 
doing the computations by a device playing a recording of the physical 
brain states. This argument breaks down if you take into account the 
self-localization ambiguity and consider that this multiverse aspect 
is an essential part of consciousness due to counterfactuals necessary 
to define the algorithm being realized, which is impossible in a 
deterministic single-world setting.


Saibal


On 18-06-2021 20:46, Jason Resch wrote:

In your opinion who has offered the best theory of consciousness to
date, or who do you agree with most? Would you say you agree with them
wholeheartedly or do you find points of disagreement?

I am seeing several related thoughts commonly expressed, but not sure
which one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

To me all seem potentially valid, and perhaps all three are needed in
some combination. I'm curious to hear what other viewpoints exist or
if there are other candidates for the "secret sauce" behind
consciousness I might have missed.

Jason



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread Jason Resch
On Sat, Jun 19, 2021, 6:17 AM smitra  wrote:

> Information is the key.  Conscious agents are defined by precisely that
> information that specifies the content of their consciousness.


While I think this is true, I don't know of a consciousness theory that is
explicit in terms of how information informs a system to create a conscious
system. Bits sitting on a still hard drive platter are not associated with
consciousness, are they? Facts sitting idly in one's long term memory are
not the content of anyone's consciousness, are they?

For information to carry meaning, I think requires some system to be
informed by that information. What then is the key to an informable system?
Differentiation? Comparison? Conditional statement? Counterfactual states?


This
> means that a conscious agent can never be precisely located in some
> physical object, because the information that describes the conscious
> experience will always be less detailed than the information present in
> the exact physical description of an object such as a brain. There is
> always going to be a very large self-localization ambiguity due to the
> large number of different possible brain states that would generate
> exactly the same conscious experience.


This is a fascinating line of reasoning, easily provable via information
theory, and having huge implications.
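
A back-of-the-envelope version of that proof, in Python, with loosely assumed
bit counts chosen only for scale (the pigeonhole step is what matters, not the
particular numbers):

import math

# If an experience is specified by k bits while an exact physical description
# of a brain needs n >> k bits, then by pigeonhole each experience is
# compatible with at least 2**(n - k) distinct physical states.
n_bits_physical = 10**27      # assumed scale of a molecule-level brain description
k_bits_experience = 10**10    # assumed (generous) bound on experiential content

log10_ambiguity = (n_bits_physical - k_bits_experience) * math.log10(2)
print(f"at least 10^{log10_ambiguity:.3g} physical states per experience")

However the two counts are estimated, as long as the experiential description
is much shorter than the physical one, the ambiguity is astronomically large.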


So, given whatever conscious
> experience the agent has, the agent could be in a very large number of
> physically distinct states.
>
> The simpler the brain and the algorithm implemented by the brain, the
> larger this self-localization ambiguity becomes because smaller
> algorithms contain less detailed information.


I recently had a thought about what it is like to be a thermostat, and came
to the conclusion that it's probably like being any one of a billion
different creatures slowly arousing from sleep. It's hard to square the
stability of experience when there are no elements of that experience to lock
you down to existing in a stable continuous state.


Our conscious experience
> localizes us very precisely on an Earth-like planet in a solar system
> that is very similar to the one we think we live in. But the fly walking
> on the wall of the room I'm in right now may have some conscious
> experience that is exactly identical to that of another fly walking on
> the wall of another house in another country 600 years ago or on some
> rock in a cave 35 million years ago.
>
> The conscious experience of the fly I see on the wall is therefore not 
> located in the particular fly I'm observing.


Mind-blowing...


This is i.m.o. the key
> thing you get from identifying consciousness with information, it makes
> the multiverse an essential ingredient of consciousness. This resolves
> paradoxes you get in thought experiments where you consider simulating a
> brain in a virtual world and then argue that since the simulation is
> deterministic, you could replace the actual computer doing the
> computations by a device playing a recording of the physical brain
> states. This argument breaks down if you take into account the
> self-localization ambiguity and consider that this multiverse aspect is
> an essential part of consciousness due to counterfactuals necessary to
> define the algorithm being realized, which is impossible in a
> deterministic single-world setting.
>

I'm not sure I follow the necessity of a multiverse to discuss
counterfactuals, but I do agree counterfactuals seem necessary to systems
that are "informable".

Jason


> Saibal
>
>
> On 18-06-2021 20:46, Jason Resch wrote:
> > In your opinion who has offered the best theory of consciousness to
> > date, or who do you agree with most? Would you say you agree with them
> > wholeheartedly or do you find points of disagreement?
> >
> > I am seeing several related thoughts commonly expressed, but not sure
> > which one or which combination is right.  For example:
> >
> > Hofstadter/Marchal: self-reference is key
> > Tononi/Tegmark: information is key
> > Dennett/Chalmers: function is key
> >
> > To me all seem potentially valid, and perhaps all three are needed in
> > some combination. I'm curious to hear what other viewpoints exist or
> > if there are other candidates for the "secret sauce" behind
> > consciousness I might have missed.
> >
> > Jason
> >

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread Jason Resch
On Sat, Jun 19, 2021, 5:55 AM John Clark  wrote:

> On Fri, Jun 18, 2021 at 8:17 PM Jason Resch  wrote:
>
> *>Deepmind has succeeded in building general purpose learning algorithms.
>> Intelligence is mostly a solved problem,*
>>
>
> I'm enormously impressed with Deepmind and I'm an optimist regarding AI,
> but I'm not quite that optimistic.
>

Are you familiar with their Agent 57? -- a single algorithm that mastered
all 57 Atari games at a super human level, with no outside direction, no
specification of the rules, and whose only input was the "TV screen" of the
game.


If intelligence was a solved problem the world would change beyond all
> recognition and we'd be smack in the middle of the Singularity, and we're
> obviously not because at least to some degree future human events are still
> somewhat predictable.
>

The algorithms are known, but the computational power is not there yet. Our
top supercomputer only recently broke the computing power of one human
brain.

Also, because of chaos, predicting the future to any degree of accuracy
requires exponentially more information about the system for each finite
amount of additional time to simulate, and this does not even factor in
quantum uncertainty, nor uncertainty about oneself and one's own mind. Being
unable to predict the future isn't a good definition of the singularity,
because we already can't. You might say the singularity is when most
decisions are no longer made by biological intelligences; again, arguably we
have reached that point. I prefer the definition of when we have a single
nonbiological intelligence that exceeds the intelligence of any human in
any domain. We are getting very close to that point. That may not be the
point of an intelligence explosion, but it means one cannot be far off.
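
On the chaos point above, a quick empirical sketch (using the standard chaotic
logistic map as a stand-in system; nothing here is specific to brains or
societies) shows how the required information grows with the prediction
horizon:

def accurate_steps(initial_error, x0=0.4, r=4.0, tolerance=1e-3, max_steps=500):
    # iterate the true state and a slightly mis-measured state side by side
    a, b = x0, x0 + initial_error
    for step in range(1, max_steps + 1):
        a, b = r * a * (1 - a), r * b * (1 - b)
        if abs(a - b) > tolerance:
            return step
    return max_steps

for digits in (4, 8, 12, 16):
    print(f"{digits} digits of initial precision -> ~{accurate_steps(10.0 ** -digits)} accurate steps")

Each extra decimal digit of initial precision buys only about three more steps
of horizon, so the information needed grows exponentially with how far ahead
one wants to predict.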



> > *But questions of consciousness are no less important nor less
>> pressing:*
>> *Is this uploaded brain conscious or a zombie?*
>>
>
> I don't know, are you conscious or a zombie?
>

There may be valid logical arguments that disprove the consistency of
zombies. For example, can something "know without knowing?" It seems not.
So how does a zombie "know" where to place its hand to catch a ball, if it
doesn't "know" what it sees?

A single result on the possibility or impossibility of zombies would enable
massive progress in theories of consciousness.

For example, we could rule out many theories and narrow down on those that
accept "organizational invariance" as Chalmers defines it. This is the
principle that if one entity is conscious, and another entity is
organizationally and functionally equivalent, preserving all the parts and
relationships among its parts, then that second entity must be equivalently
conscious to the first.
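
To be concrete about "preserving all the parts and relationships," here is a
toy formalization (a sketch of the general idea, not Chalmers's own wording):
two systems count as equivalent when a one-to-one mapping between their parts
preserves the wiring and yields identical state trajectories for every
starting state and input stream.

import itertools

# System A: three interacting binary parts, written as a tuple update.
def step_a(state, stimulus):
    x, y, z = state
    return (stimulus, x ^ y, (y and z) or x)

# System B: a different "substrate" (a dict), but the same parts standing in
# the same relationships to one another.
def step_b(state, stimulus):
    return {"p1": stimulus,
            "p2": state["p1"] ^ state["p2"],
            "p3": (state["p2"] and state["p3"]) or state["p1"]}

mapping = {0: "p1", 1: "p2", 2: "p3"}   # the part-to-part correspondence

equivalent = True
for initial in itertools.product([0, 1], repeat=3):
    a = initial
    b = {mapping[i]: v for i, v in enumerate(initial)}
    for stimulus in (1, 0, 1, 1, 0, 0, 1):
        a, b = step_a(a, stimulus), step_b(b, stimulus)
        equivalent &= all(a[i] == b[mapping[i]] for i in range(3))
print("equivalent under the mapping:", equivalent)

Organizational invariance is then the claim that whatever conscious experience
the first system has, anything standing in this relation to it has the same;
the sketch only pins down the relation, not the claim.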



> > *Can (bacterium, protists, plants, jellyfish, worms, clams, insects,
>> spiders, crabs, snakes, mice, apes, humans) suffer?*
>>
>
> I don't know, I know I can suffer, can you?
>

I can tell you that I can. You could verify via functional brain scans that
I wasn't preprogrammed like an Eliza bot to say I can. You could trace the
neural firings in my brain to uncover the origin of my belief that I can
suffer, and I could do the same for you.




>
>> > *Are these robot slaves conscious?*
>>
>
> Are you conscious?
>

Could a zombie write a book like Chalmers's "The Conscious Mind"? Some
have proposed writing philosophical texts on the philosophy of mind as a
kind of super-Turing test for establishing consciousness.

When GPT-X writes new philosophical treatises on topics of consciousness
and when it insists it is conscious, and we trace the origins of this
statement to a tangled self-reference loop in its processing, what are we
to conclude? Would it become immoral to turn it off at that point?


>
>> * > Do they have likes or dislikes that we repress?*
>>
>
> What's with this "we" business?
>


Humanity I mean.


> > *When does a developing human become conscious?*
>>
>
> Other than in my case does any developing human EVER become conscious?
>
> > *Is that person in a coma or locked-in?*
>>
>
> I don't know, are you locked in?
>

I can move, so no. Being locked in means you are conscious but lack any
control over your body.


> > *Does this artificial retina/visual cortex provide the same visual
>> experiences?*
>>
>
> The same as what?
>

A biological retina and visual cortex.


>
>> > *Does this particular anesthetic block consciousness or merely memory
>> formation?*
>>
>
> Did the person have consciousness even before the administration of the
> anesthetic?
>

Let's assume so for the purposes of the question. Wouldn't you prefer the
anesthetic that knocks you out vs. the

Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread Jason Resch
On Fri, Jun 18, 2021, 8:59 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/18/2021 5:16 PM, Jason Resch wrote:
> >
> > - Is consciousness inherent to any intelligent process?
> >
> > I think the answer is yes, what do you think?
> >
> Not just any intelligent process.  But any at human (or even dog)
> level.  I think human level consciousness depends on language or similar
> representation in which the entity thinks about decisions by internally
> modelling situations, including itself.  Think of how much intelligence
> humans bring to bear unconsciously.  Think of the Poincaré effect.
>


Thanks Brent, I appreciate your answers. But I did not follow what you say
here regarding the Poincare effect. I did a search on it and nothing stood
out as related to the brain.

Jason



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread John Clark
Suppose there is an AI that behaves more intelligently than the most
intelligent human who ever lived, however when the machine is opened up to
see how this intelligence is actually achieved one consciousness theory
doesn't like what it sees and concludes that despite its great intelligence
it is not conscious, but a rival consciousness theory does like what it
sees and concludes it is conscious. Both theories can't be right although
both could be wrong, so how on earth could you ever determine which, if
any, of the 2 consciousness theories is correct?

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>


Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread smitra
Information is the key.  Conscious agents are defined by precisely that 
information that specifies the content of their consciousness. This 
means that a conscious agent can never be precisely located in some 
physical object, because the information that describes the conscious 
experience will always be less detailed than the information present in 
the exact physical description of an object such as a brain. There is 
always going to be a very large self-localization ambiguity due to the 
large number of different possible brain states that would generate 
exactly the same conscious experience. So, given whatever conscious 
experience the agent has, the agent could be in a very large number of 
physically distinct states.


The simpler the brain and the algorithm implemented by the brain, the 
larger this self-localization ambiguity becomes because smaller 
algorithms contain less detailed information. Our conscious experience 
localizes us very precisely on an Earth-like planet in a solar system 
that is very similar to the one we think we live in. But the fly walking 
on the wall of the room I'm in right now may have some conscious 
experience that is exactly identical to that of another fly walking on 
the wall of another house in another country 600 years ago or on some 
rock in a cave 35 million years ago.


The conscious experience of the fly I see on the wall is therefore not 
located in the particular fly I'm observing. This is i.m.o. the key 
thing you get from identifying consciousness with information, it makes 
the multiverse an essential ingredient of consciousness. This resolves 
paradoxes you get in thought experiments where you consider simulating a 
brain in a virtual world and then argue that since the simulation is 
deterministic, you could replace the actual computer doing the 
computations by a device playing a recording of the physical brain 
states. This argument breaks down if you take into account the 
self-localization ambiguity and consider that this multiverse aspect is 
an essential part of consciousness due to counterfactuals necessary to 
define the algorithm being realized, which is impossible in a 
deterministic single-world setting.


Saibal


On 18-06-2021 20:46, Jason Resch wrote:

In your opinion who has offered the best theory of consciousness to
date, or who do you agree with most? Would you say you agree with them
wholeheartedly or do you find points of disagreement?

I am seeing several related thoughts commonly expressed, but not sure
which one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

To me all seem potentially valid, and perhaps all three are needed in
some combination. I'm curious to hear what other viewpoints exist or
if there are other candidates for the "secret sauce" behind
consciousness I might have missed.

Jason



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-19 Thread John Clark
On Fri, Jun 18, 2021 at 8:17 PM Jason Resch  wrote:

*>Deepmind has succeeded in building general purpose learning algorithms.
> Intelligence is mostly a solved problem,*
>

I'm enormously impressed with Deepmind and I'm an optimist regarding AI,
but I'm not quite that optimistic. If intelligence was a solved problem the
world would change beyond all recognition and we'd be smack in the middle
of the Singularity, and we're obviously not because at least to some degree
future human events are still somewhat predictable.

> *But questions of consciousness are no less important nor less pressing:*
> *Is this uploaded brain conscious or a zombie?*
>

I don't know, are you conscious or a zombie?

> *Can (bacterium, protists, plants, jellyfish, worms, clams, insects,
> spiders, crabs, snakes, mice, apes, humans) suffer?*
>

I don't know, I know I can suffer, can you?


> > *Are these robot slaves conscious?*
>

Are you conscious?


> * > Do they have likes or dislikes that we repress?*
>

What's with this "we" business?

> *When does a developing human become conscious?*
>

Other than in my case does any developing human EVER become conscious?

> *Is that person in a coma or locked-in?*
>

I don't know, are you locked in?

> *Does this artificial retina/visual cortex provide the same visual
> experiences?*
>

The same as what?


> > *Does this particular anesthetic block consciousness or merely memory
> formation?*
>

Did the person have consciousness even before the administration of the
anesthetic?


> *> These questions remain unsettled*
>

Yes, and these questions will remain unsettled till the end of time, so
even if time is infinite it could be better spent pondering other questions
that actually have answers.


> *>If none of these questions interest you, perhaps this one will: Is
> consciousness inherent to any intelligent process?*


I have no proof and never will have any, however I must assume that the
above is true because I simply could not function if I really believed that
solipsism was correct and I was the only conscious being in the universe.
Therefore I take it as an axiom that intelligent behavior implies
consciousness.

John K Clark    See what's on my new list at Extropolis



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-18 Thread 'Brent Meeker' via Everything List




On 6/18/2021 5:16 PM, Jason Resch wrote:


- Is consciousness inherent to any intelligent process?

I think the answer is yes, what do you think?

Not just any intelligent process.  But any at human (or even dog) 
level.  I think human level consciousness depends on language or similar 
representation in which the entity thinks about decisions by internally 
modelling situations, including itself.  Think of how much intelligence 
humans bring to bear unconsciously.  Think of the Poincaré effect.


Brent



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-18 Thread 'Brent Meeker' via Everything List
I'm most with Dennett.  I see consciousness as having several different 
levels, which are also different levels of self-reference.  At the 
lowest level even bacteria recognize (in the functional/operational 
sense) a distinction between "me" and "everything else".  A little above 
that, some that are motile also sense chemical gradients and can move 
toward food.  So they distinguish "better else" from "worse else".  At a 
higher level, animals and plants with sensors know more about their 
surroundings. Animals know a certain amount of geometry and are aware of 
their place in the world.  How close or far things are.  Some animals, 
mostly those with eyes, employ foresight and planning in which they 
foresee outcomes for themselves.  They can think of themselves in 
relation to other animals.  More advanced social animals are aware of 
their social status.  Humans, perhaps thru the medium of language, have 
a theory of mind, i.e. they can think about what other people think and 
attribute agency to them (and to other things) as part of their 
planning.  The conscious part of all this awareness is essentially that 
which is processed as language and image; ultimately only a small part.


Brent

On 6/18/2021 11:46 AM, Jason Resch wrote:
In your opinion who has offered the best theory of consciousness to 
date, or who do you agree with most? Would you say you agree with them 
wholeheartedly or do you find points of disagreement?


I am seeing several related thoughts commonly expressed, but not sure 
which one or which combination is right.  For example:


Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

To me all seem potentially valid, and perhaps all three are needed in 
some combination. I'm curious to hear what other viewpoints exist or 
if there are other candidates for the "secret sauce" behind 
consciousness I might have missed.


Jason



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-18 Thread Jason Resch
On Fri, Jun 18, 2021, 2:37 PM John Clark  wrote:

> On Fri, Jun 18, 2021 at 2:46 PM Jason Resch  wrote:
>
> *>In your opinion who has offered the best theory of consciousness to
>> date, or who do you agree with most?*
>
>
> One consciousness theory is as good as another because there are no facts
> such a theory must fit. About all I can say is consciousness seems to be
> the way data feels when it is being processed but I have no idea why that
> is true, it may be meaningless to even ask "why' in this case because it's
> probably just a brute fact.  That's why I'm far more interested in
> intelligence theories than consciousness theories; there are ways to
> judge the quality of an intelligence theory but there's no way to do that
> with a consciousness theory.
>

Deepmind has succeeded in building general purpose learning algorithms.
Intelligence is mostly a solved problem, for at least almost all
capabilities of human intelligence. I wrote an article detailing this
recently:

https://alwaysasking.com/when-will-ai-take-over/

But questions of consciousness are no less important nor less pressing:

- Is this uploaded brain conscious or a zombie?
- Can (bacterium, protists, plants, jellyfish, worms, clams, insects,
spiders, crabs, snakes, mice, apes, humans) suffer?
- Are these robot slaves conscious? Do they have likes or dislikes that we
repress?
- When does a developing human become conscious?
- Is that person in a coma or locked-in?
- Does this artificial retina/visual cortex provide the same visual
experiences?
- Does this particular anesthetic block consciousness or merely memory
formation?

These questions remain unsettled due to the lack of a widely held and
established theory of consciousness. Answers to these questions would be
quite valuable, as we could take steps to reduce harm, and avoid
potentially zombifying our future civilization should we upload in a way
that doesn't preserve our conscious minds (if you believe such a thing is
possible).

If none of these questions interest you, perhaps this one will:

- Is consciousness inherent to any intelligent process?

I think the answer is yes, what do you think?

Jason



Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-18 Thread John Clark
On Fri, Jun 18, 2021 at 2:46 PM Jason Resch  wrote:

*>In your opinion who has offered the best theory of consciousness to date,
> or who do you agree with most?*


One consciousness theory is as good as another because there are no facts
such a theory must fit. About all I can say is consciousness seems to be
the way data feels when it is being processed but I have no idea why that
is true, it may be meaningless to even ask "why" in this case because it's
probably just a brute fact.  That's why I'm far more interested in
intelligence theories than consciousness theories; there are ways to judge
the quality of an intelligence theory but there's no way to do that with a
consciousness theory.

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>


Which philosopher or neuro/AI scientist has the best theory of consciousness?

2021-06-18 Thread Jason Resch
In your opinion who has offered the best theory of consciousness to date,
or who do you agree with most? Would you say you agree with them
wholeheartedly or do you find points of disagreement?

I am seeing several related thoughts commonly expressed, but not sure which
one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

To me all seem potentially valid, and perhaps all three are needed in some
combination. I'm curious to hear what other viewpoints exist or if there
are other candidates for the "secret sauce" behind consciousness I might
have missed.

Jason



Information Closure Theory of Consciousness

2020-06-10 Thread Philip Thrift

https://arxiv.org/abs/1909.13045
Information Closure Theory of Consciousness
Acer Y.C. Chang 
<https://arxiv.org/search/q-bio?searchtype=author=Chang%2C+A+Y>, Martin 
Biehl <https://arxiv.org/search/q-bio?searchtype=author=Biehl%2C+M>, Yen 
Yu <https://arxiv.org/search/q-bio?searchtype=author=Yu%2C+Y>, Ryota 
Kanai <https://arxiv.org/search/q-bio?searchtype=author=Kanai%2C+R>

Information processing in neural systems can be described and analysed at 
multiple spatiotemporal scales. Generally, information at lower levels is 
more fine-grained and can be coarse-grained in higher levels. However, 
information processed only at specific levels seems to be available for 
conscious awareness. We do not have direct experience of information 
available at the level of individual neurons, which is noisy and highly 
stochastic. Neither do we have experience of more macro-level interactions 
such as interpersonal communications. Neurophysiological evidence suggests 
that conscious experiences co-vary with information encoded in 
coarse-grained neural states such as the firing pattern of a population of 
neurons. In this article, we introduce a new informational theory of 
consciousness: Information Closure Theory of Consciousness (ICT). We 
hypothesise that conscious processes are processes which form non-trivial 
informational closure (NTIC) with respect to the environment at certain 
coarse-grained levels. This hypothesis implies that conscious experience is 
confined due to informational closure from conscious processing to other 
coarse-grained levels. ICT proposes new quantitative definitions of both 
conscious content and conscious level. With the parsimonious definitions 
and a hypothesis, ICT provides explanations and predictions of various 
phenomena associated with consciousness. The implications of ICT naturally 
reconcile issues in many existing theories of consciousness and provide 
explanations for many of our intuitions about consciousness. Most 
importantly, ICT demonstrates that information can be the common language 
between consciousness and physical reality.
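
As a toy illustration of the closure idea (a sketch of the general notion only,
not the authors' formal NTIC measure), consider a micro system whose low bit is
driven by the environment while its high bit evolves on its own; at the
coarse-grained level of the high bit alone, the environment adds no predictive
information:

import random
from collections import Counter
from math import log2

def conditional_entropy(samples):
    # H(Y | X) in bits, estimated from a list of (x, y) samples
    joint = Counter(samples)
    marginal = Counter(x for x, _ in samples)
    n = len(samples)
    return sum((c / n) * log2((marginal[x] / n) / (c / n)) for (x, _), c in joint.items())

random.seed(0)
high, low = 0, 0
coarse, coarse_env, micro, micro_env = [], [], [], []
for _ in range(100_000):
    env = random.randint(0, 1)
    new_high, new_low = 1 - high, env   # high bit is autonomous; low bit is slaved to the environment
    coarse.append((high, new_high))
    coarse_env.append(((high, env), new_high))
    micro.append((low, new_low))
    micro_env.append(((low, env), new_low))
    high, low = new_high, new_low

print("coarse: H(C'|C) =", round(conditional_entropy(coarse), 3),
      " H(C'|C,E) =", round(conditional_entropy(coarse_env), 3))
print("micro:  H(L'|L) =", round(conditional_entropy(micro), 3),
      " H(L'|L,E) =", round(conditional_entropy(micro_env), 3))

The coarse-grained process comes out informationally closed (the environment
adds nothing to predicting its next state), while the micro process does not;
ICT's hypothesis, roughly, is that processes of the first kind, at the right
coarse-grained level, are the ones associated with conscious experience.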



@philipthrift



Re: The Easy Part of the Hard Problem: A Resonance Theory of Consciousness

2020-01-31 Thread Bruno Marchal

> On 29 Jan 2020, at 01:12, Philip Thrift  wrote:
> 
> 
> The Easy Part of the Hard Problem: A Resonance Theory of Consciousness
> Tam Hunt and Jonathan W. Schooler
> Psychological and Brain Sciences, University of California, Santa Barbara, 
> Santa Barbara, CA
> 
> https://www.frontiersin.org/articles/10.3389/fnhum.2019.00378/full 


Computationalism does not refute such theories, but shows that if they are 
true, you need to explain the resonance and the material constitution of the 
numbers from … a theory of consciousness. So as an explanation of consciousness 
it becomes circular. That does not mean false, but it does mean very 
incomplete. As it is unclear such theories solve anything, I remain skeptical. 
It is easier to explain the existence of vibrations and their necessity from 
the numbers' self-reference, and that gives a means to exploit the difference 
between the true and the rationally justifiable to distinguish the quanta from the 
qualia (and that already predicts the qualitative and some quantitative parts of 
what we observe and feel).

Bruno




> 
> @philipthrift
> 
> 


The Easy Part of the Hard Problem: A Resonance Theory of Consciousness

2020-01-28 Thread Philip Thrift

*The Easy Part of the Hard Problem: A Resonance Theory of Consciousness*
Tam Hunt and Jonathan W. Schooler
Psychological and Brain Sciences, University of California, Santa Barbara, 
Santa Barbara, CA

https://www.frontiersin.org/articles/10.3389/fnhum.2019.00378/full

@philipthrift



Re: Michael Graziano's theory of consciousness

2015-04-20 Thread Kim Jones


 On 16 Apr 2015, at 11:03 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 
 But those who gave me the price in Paris 

The PRIZE

K



Re: Michael Graziano's theory of consciousness

2015-04-20 Thread Bruno Marchal


On 17 Apr 2015, at 09:15, Bruce Kellett wrote:


Stathis Papaioannou wrote:
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au  
mailto:bhkell...@optusnet.com.au wrote:

   meekerdb wrote:
   On 4/15/2015 11:16 PM, Bruce Kellett wrote:
   LizR wrote:
snip
Physicalism reduces to computationalism if the physics in the brain  
is Turing emulable, and then if you follow Bruno's reasoning in the  
UDA computationalism leads to elimination of a primary physical  
world.


But physics itself is not Turing emulable. The no-cloning theorem of  
quantum physics precludes it.


If you study a proof, you should not add an hypothesis. We don't  
assume quantum mechanics (indeed we have to derive it from comp,  
assuming QM is correct empirically).


Anyway, at step seven, you can already understand that non cloning is  
predicted by computationalism, so QM non-cloning confirms  
computationalism. The argument works even if the brain is a quantum  
computer. It works for anything not violating Church's thesis.


Bruno






Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-18 Thread Bruno Marchal


On 18 Apr 2015, at 04:45, meekerdb wrote:


On 4/17/2015 12:37 AM, Bruno Marchal wrote:


On 16 Apr 2015, at 21:54, meekerdb wrote:


On 4/16/2015 1:36 AM, Bruno Marchal wrote:
So consciousness is not 1-duplicable, but can be considered as  
having been duplicated in some 3-1-view, even before it diverges.


How can it be considered duplicated before it diverges?


By associating it with different tokens of the machinery implementing it.




Are you assuming consciousness is physical, and so having different spacetime locations can distinguish two otherwise indiscernible sets of thoughts?


No, it can't, in the 1-view, but it can in the 3-1 view. OK?


I'm not sure what 3-1 view means,


It is the content of the diary of the observer outside the box  
(like in sane04), but on the consciousness of the copies involved.  
This is for the people who say that they will be conscious in W and  
M. That is true, but the pure 1-view is that they will be  
conscious in only one city (even if that happens in both cities).


In the math part, this is captured by [1][0]A, with [0]A = the usual beweisbar of Gödel, and [1]A = [0]A & A (Theaetetus).



but if you mean in the sense of running on two different machines  
then I agree.


OK.


That means duplication of consciousness/computation depends on  
distinguishability of the physical substrate with no distinction  
in the consciousness/computation.


OK. Like when the guy has already been multiplied in W and M, but has not yet opened the door. We assume of course that the two boxes are identical from inside, no windows, and the air molecules at the same places (for example). That can be made absolutely identical in step 6, where the reconstitutions are made in a virtual environment.




  But is that the duplication envisioned in the M-W thought  
experiment?


Yes, at different steps.




I find I'm confused about that.  In our quantum-mechanical world  
it is impossible to duplicate something in an unknown state.  One  
could duplicate a human being in the rough classical sense of  
structure at the molecular composition level, but not the  
molecular states. Such duplicates would be as similar as I am to myself of yesterday - but they would instantly diverge in thoughts, even without seeing Moscow or Washington.


In practice, yes. Assuming the duplication is done in a real world, and assuming QM. But in step six, you can manage the environments to be themselves perfectly emulated and 100% identical. That is all that is needed for the reasoning.





Yet it seems Bruno's argument is based on deterministic computation


At my substitution level. But this will entail that the real world, whatever it can be, is non-deterministic. We are WM-duplicated over all the different computations in the UD* (in arithmetic) which go through my local current state.



and requires the duplication and subsequent thoughts to be duplicates at a deterministic classical level, so that the M-man and W-man only diverge in thought when they see different things in their respective cities.


Yes. That is why the H-man cannot predict which divergence he will live. But sometimes we mention the state of the person before he or she opens the doors, for example to address a question like whether a tiny oxygen atom in the box makes a difference in the measure or not. Here it is almost a matter of convention to say that there are two consciousnesses or one. We can ascribe consciousness to the different people in the different boxes, but that is a 3-1 view. The 1-view feels itself to be in one city, and not in the other.


No, things like radioactive decay of K40 atoms in the blood will very quickly cause the W-man and M-man to diverge no matter how precisely the duplicate receivers are made.


I agree.

But I'm not sure why this would matter to your argument?  Is it  
important to the argument that they diverge *only* because of a  
difference in perception?


Normally they should diverge, if they have a different future (despite having the same perception before the divergence), by the rule Y = II. But it is an open problem, and I use the self-reference logic to go around that problem, and to avoid questions like that. I will think about finding a thought experiment which leads to different answers for the probability if we accept the rule Y = II or not. I have some in my notes, but I do not really have the time now. (The deadline for my paper is Monday, but I have my course today, plus some papers to review, also. In a few days I will have more time.) It is not very important for the present thread; we have discussed this a long time ago: as long as the perception is identical, you can fuse the person again, and I want to avoid something like making the measure dependent on the diameter of the neuron axons.


Bruno




Brent


Re: Michael Graziano's theory of consciousness

2015-04-18 Thread Bruno Marchal


On 18 Apr 2015, at 06:07, meekerdb wrote:


On 4/17/2015 5:35 AM, Bruce Kellett wrote:
If you had an actual Turing machine and unlimited time, you could by brute force emulate everything. However, that is not the point. If your car needs a part replaced, you don't need to get a replacement exactly the same down to the quantum level. This is the case for every machine, and there is no reason to believe biological machines are different: infinite-precision parts would mean zero robustness.


I think you miss the point. If you want to emulate a car or a  
biological machine, then some classical level of exactness would  
suffice. But the issue is the wider program that wants to see the  
physical world in all its detail emerge from the digital  
computations of the dovetailer. If that is your goal, then you need  
to emulate the finest details of quantum mechanics. This latter is  
not possible on a Turing machine because of the theorem forbidding  
the cloning of a quantum state. Quantum mechanics is, after all,  
part of the physical world we observe.


But the goal is not to emulate an existing physical world, it's to  
instantiate a physical world as a computation.


Well, to recover the appearance of a physical world and its stability from a sum over computations in the UD.




There's no requirement to measure a quantum state and reproduce it.


The UD goes through all digital quantum states. The UD prepares all quantum states, and is not obliged to duplicate them in the relative way. This can be used to derive non-cloning from arithmetic. If matter were duplicable, comp would be false.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-18 Thread Bruno Marchal


On 18 Apr 2015, at 06:53, meekerdb wrote:


On 4/17/2015 12:38 PM, Bruno Marchal wrote:


Comp explains why this is impossible. The finest details of physics are given by a sum over many computations; the finer the details, the more there are. To get the numbers right up to infinitely many decimals, you would need to run the entire dovetailer in a finite time. We can't do that.


?? The UD runs in Platonia, so what does a finite time refer to?


In Platonia, you can define a notion of time for a computation by the number of steps done by that computation.


But here I was referring to the physical time used by someone living in the physical reality, assuming comp works and recovers such physical time, and trying to predict with infinite accuracy the result of some experiment by computing it. To get all decimals correct, he would have to emulate the whole UD in that physical reality (and, btw, to know his substitution level, which is impossible in practice, so he must use some bet on it, like the doctor did).


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 16 Apr 2015, at 21:18, meekerdb wrote:


On 4/16/2015 1:01 AM, Bruno Marchal wrote:


On 16 Apr 2015, at 05:33, Bruce Kellett wrote:


LizR wrote:
On 16 April 2015 at 12:53, Bruce Kellett bhkell...@optusnet.com.au wrote:

  LizR wrote:
On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com wrote:

"Yes but I'm confused, I thought you were the one arguing that Bruno had discovered something new under the sun, a new sort of uncertainty"

That's hardly what Bruno is claiming. Step 3 is only a small
  step in a logical argument. It shows that if our normal  
everyday

  consciousness is the result of computation, then it can be
  duplicated (in principle - if you have a problem with matter
  duplicators, consider an AI programme) and that this leads to
  what looks like uncertainty from one person's perspective.
  You only get that impression because in Bruno's treatment of the
  case -- the two copies are immediately separated by a large  
distance
  and don't further interact. You might come to a different  
conclusion

  if you let the copies sit down together and have a chat.
That doesn't make any difference to the argument. Will I be the  
copy sitting in the chair on the left? is less dramatic than  
Will I be transported to Moscow or Washington? and hence, I  
suspect, might not make the point so clearly. But otherwise the  
argument goes through either way.


No, because as I argued elsewhere, the two 'copies' would not  
agree that they were the same person.



  Separating them geographically was meant to mimic the different
  worlds idea from MWI. But I think that is a bit of a cheat.
I don't know where Bruno says he's mimicking the MWI (at this  
stage) ? This is a classical result, assuming classical  
computation (which according to Max Tegmark is a reasonable  
assumption for brains).


In the protracted arguments with John Clark, the point was repeatedly made that he accepted FPI for MWI, so why not for Step 3. Step 3 is basically to introduce the idea of FPI, and hence form a link with the MWI of quantum mechanics. This may not always have been made explicit, but the intention is clear.


It is not made at all. People who criticize the UDA always criticize what they add themselves to the reasoning. That is not valid. People who do that criticize only themselves, not the argument presented.



Step 3 does not succeed in this because the inference to FPI  
depends on a flawed concept of personal identity.



Step 3 leads to the FPI, and to see what happens next, there are steps 4, 5, 6, 7 and 8. Then the translation in arithmetic shows how to already extract the logic of the observable, so that we might refute a form of comp (based on comp + the classical theory of knowledge). The main point there is that incompleteness refutes Socrates' argument against the Theaetetus,


Which argument do you refer to?  Theaetetus puts forward several  
theories of knowledge which Socrates attempts to refute.


That's true.
I was referring to the definition of knowledge by true justified opinion: the passage from []A (rational opinion, justified proposition) to []A & A (justified opinion which is also true).


Incompleteness (the impossibility of proving []f -> f) gives an arithmetical sense to that move, as the logic of []A, which is G, will obey a different logic than the logic of []A & A. []f does not imply f, from the machine's view, but []f & f does trivially imply f.
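Stated compactly in standard provability-logic notation (a sketch of textbook facts, writing \Box for [] and f for falsity): G does not prove consistency,

  \nvdash_{G} \; \Box f \rightarrow f   (Gödel's second incompleteness theorem),

while

  \vdash \; (\Box f \wedge f) \rightarrow f   (a propositional tautology).

So the Theaetetical operator K A := \Box A \wedge A satisfies K A \rightarrow A for every A, even though \Box A \rightarrow A is not a theorem of G; that is why the logic of []A & A differs from the logic of []A.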


Bruno





Brent

and we can almost directly retrieve the Parmenides-Plotinus  
theology in the discourse of the introspecting universal (Löbian)  
machine.


Bruno




Bruce



http://iridia.ulb.ac.be/~marchal/







http://iridia.ulb.ac.be/~marchal/




Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Stathis Papaioannou
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 meekerdb wrote:

 On 4/15/2015 11:16 PM, Bruce Kellett wrote:

 LizR wrote:

 On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Bruno has said to me that one cannot refute a scientific finding by
 philosophy. One cannot, of course, refute a scientific observation
 by philosophy, but one can certainly enter a philosophical
 discussion of the meaning and interpretation of an observation. In
 an argument like Bruno's, one can certainly question the
 metaphysical and other presumptions that go into his discourse.

 Yes, of course. I don't think anyone is denying that - quite the
 reverse, people who argue /against /Bruno often do so on the basis of
 unexamined metaphysical assumptions (like primary materialism)


 And the contrary, that primary materialism is false, is just as much an
 unevidenced metaphysical assumption.


 But it's not an assumption in Bruno's argument. As I understand it, his
 theory in outline is:

 1. All our thoughts are certain kinds of computations.
 2. The physical world is an inference from our thoughts.
 3. Computations are abstract relations among mathematical objects.
 4. The physical world is instantiated by those computations that
 correspond to intersubjective agreement in our thoughts.
 5. Our perceptions/thoughts/beliefs about the world are modeled by
 computed relations between the computed physics and our computed thoughts.
 6. The UD realizes (in Platonia) all possible computations and so
 realizes the model 1-5 in all possible ways and this produces the
 multiple-worlds of QM, plus perhaps infinitely many other worlds which he
 hopes to show have low measure.


 I leave it to Bruno to comment on whether this is a fair summary of his
 theory. I have difficulties with several of the points you list. But that
 aside, one has to ask exactly what has been achieved, even if all of this
 goes through? I do not think it explains consciousness. It seems to stem
 from the idea that consciousness is a certain type of computation (that can
 be emulated in a universal Turing machine, or general purpose computer.)
 This is then developed as a form of idealism (2 above) to argue that the
 physical world and our perceptions, thoughts, and beliefs about that world,
 are also certain types of computations.

 But this is no nearer to an explanation of consciousness than the
 alternative model of assuming a primitive physical universe and arguing
 that consciousness supervenes on the physical structure of brains, and that
 mathematics is an inference from our physical experiences. Consciousness
 supervenes on computations? What sort of computation? Why on this sort and
 not any other sort? Similar questions arise in the physicalist account of
 course, but proposing a new theory that does not answer any of the
 questions posed by the original theory does not seem like an advance to me.
 At least physicalism has evolutionary arguments open to it as an
 explanation of consciousness

 The physicalist model has the advantage that it gives the physical world
 directly -- physics does not have to be constructed from some abstract
 computations in Platonia (even if such a concept can be given any meaning.)
 If you take the degree of agreement with observation as the measure of
 success of a theory, then physicalism wins hands down. Bruno's theory does
 not currently produce any real physics at all.

 The discussion of the detailed steps in the argument Bruno gives is merely
 a search for clarification. As I have said, many things seem open to
 philosophical discussion, and some of Bruno's definitions seem
 self-serving. When I seek clarification, the ground seems to move beneath
 me. The detailed argument is hard to pin down for these reasons.


Physicalism reduces to computationalism if the physics in the brain is
Turing emulable, and then if you follow Bruno's reasoning in the
UDA computationalism leads to elimination of a primary physical world.


-- 
Stathis Papaioannou



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 16 Apr 2015, at 20:24, meekerdb wrote:


On 4/15/2015 11:16 PM, Bruce Kellett wrote:

LizR wrote:
On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au wrote:


   Bruno has said to me that one cannot refute a scientific  
finding by
   philosophy. One cannot, of course, refute a scientific  
observation

   by philosophy, but one can certainly enter a philosophical
   discussion of the meaning and interpretation of an observation.  
In

   an argument like Bruno's, one can certainly question the
   metaphysical and other presumptions that go into his discourse.


Yes, of course. I don't think anyone is denying that - quite the  
reverse, people who argue /against /Bruno often do so on the basis  
of unexamined metaphysical assumptions (like primary materialism)


And the contrary, that primary materialism is false, is just as  
much an unevidenced metaphysical assumption.


But it's not an assumption in Bruno's argument. As I understand it,  
his theory in outline is:


1. All our thoughts are certain kinds of computations.
2. The physical world is an inference from our thoughts.
3. Computations are abstract relations among mathematical objects.
4. The physical world is instantiated by those computations that  
correspond to intersubjective agreement in our thoughts.
5. Our perceptions/thoughts/beliefs about the world are modeled by  
computed relations between the computed physics


Computed or not.




and our computed thoughts.
6. The UD realizes (in Platonia) all possible computations and so  
realizes the model 1-5 in all possible ways and this produces the  
multiple-worlds of QM, plus perhaps infinitely many other worlds  
which he hopes to show have low measure.


Well, better to talk in terms of the continuations. The indeterminacy is relative, for the physics. There is another, more geographical indeterminacy, which is more Bayesian: for example, if there are carbon atoms, I have to find myself in a reality with carbon makers (like stars). That indeterminacy still requires a notion of normal (Gaussian) reality, and thus a solution to the general measure problem.


Rather good summary Brent!

Bruno




http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruce Kellett

meekerdb wrote:

On 4/15/2015 11:16 PM, Bruce Kellett wrote:

LizR wrote:
On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au wrote:


Bruno has said to me that one cannot refute a scientific finding by
philosophy. One cannot, of course, refute a scientific observation
by philosophy, but one can certainly enter a philosophical
discussion of the meaning and interpretation of an observation. In
an argument like Bruno's, one can certainly question the
metaphysical and other presumptions that go into his discourse.

Yes, of course. I don't think anyone is denying that - quite the 
reverse, people who argue /against /Bruno often do so on the basis of 
unexamined metaphysical assumptions (like primary materialism)


And the contrary, that primary materialism is false, is just as much 
an unevidenced metaphysical assumption.


But it's not an assumption in Bruno's argument. As I understand it, his 
theory in outline is:


1. All our thoughts are certain kinds of computations.
2. The physical world is an inference from our thoughts.
3. Computations are abstract relations among mathematical objects.
4. The physical world is instantiated by those computations that 
correspond to intersubjective agreement in our thoughts.
5. Our perceptions/thoughts/beliefs about the world are modeled by 
computed relations between the computed physics and our computed thoughts.
6. The UD realizes (in Platonia) all possible computations and so 
realizes the model 1-5 in all possible ways and this produces the 
multiple-worlds of QM, plus perhaps infinitely many other worlds which 
he hopes to show have low measure.


I leave it to Bruno to comment on whether this is a fair summary of his 
theory. I have difficulties with several of the points you list. But 
that aside, one has to ask exactly what has been achieved, even if all of 
this goes through? I do not think it explains consciousness. It seems to 
stem from the idea that consciousness is a certain type of computation 
(that can be emulated in a universal Turing machine, or general purpose 
computer.) This is then developed as a form of idealism (2 above) to 
argue that the physical world and our perceptions, thoughts, and beliefs 
about that world, are also certain types of computations.


But this is no nearer to an explanation of consciousness than the 
alternative model of assuming a primitive physical universe and arguing 
that consciousness supervenes on the physical structure of brains, and 
that mathematics is an inference from our physical experiences. 
Consciousness supervenes on computations? What sort of computation? Why 
on this sort and not any other sort? Similar questions arise in the 
physicalist account of course, but proposing a new theory that does not 
answer any of the questions posed by the original theory does not seem 
like an advance to me. At least physicalism has evolutionary arguments 
open to it as an explanation of consciousness


The physicalist model has the advantage that it gives the physical world 
directly -- physics does not have to be constructed from some abstract 
computations in Platonia (even if such a concept can be given any 
meaning.) If you take the degree of agreement with observation as the 
measure of success of a theory, then physicalism wins hands down. 
Bruno's theory does not currently produce any real physics at all.


The discussion of the detailed steps in the argument Bruno gives is 
merely a search for clarification. As I have said, many things seem open 
to philosophical discussion, and some of Bruno's definitions seem 
self-serving. When I seek clarification, the ground seems to move 
beneath me. The detailed argument is hard to pin down for these reasons.


Bruce



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 16 Apr 2015, at 19:19, John Clark wrote:


On Thu, Apr 16, 2015 Bruno Marchal marc...@ulb.ac.be wrote:

 the argument is that both copies are equally the same person as  
the original.


 No one assumes that.

 John Clark assumes this,

Of course I assume it, it's the only logical conclusion and I assume  
that logic is more likely to find the truth than illogic, although  
Quinton has publicly stated other ideas on that subject.


 I have locally assumed it too, but only to refute Clark's argument. That might explain your confusion,


But it doesn't explain my confusion,


The post was addressed to Bruce.



do you agree that both copies are equally the same person as the  
original or do you not?



I do.
They are the same person in the sense that I am the same person as yesterday. So we can say that the W-man and the M-man are both the H-man, but put in different cities. That is the reason for the indeterminacy lived by the H-man before he pushes the button: he knows (with the computationalist assumption and the default hypotheses) that he will be in both cities, but that with probability one he will feel, in both cities, to be in only one city. The H-man, when still in Helsinki, can predict that when he is reconstituted in the boxes, he will be unable to know if he will see M or W before opening the door. But he knows that after the door is opened, he will see only one city. By a simple reasoning, he knows all this in advance, so he is aware of that indeterminacy before pushing the button.


Bruno





  John K Clark






http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruce Kellett

Stathis Papaioannou wrote:



On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:


meekerdb wrote:

On 4/15/2015 11:16 PM, Bruce Kellett wrote:

LizR wrote:

On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au wrote:

Bruno has said to me that one cannot refute a
scientific finding by
philosophy. One cannot, of course, refute a
scientific observation
by philosophy, but one can certainly enter a
philosophical
discussion of the meaning and interpretation of an
observation. In
an argument like Bruno's, one can certainly question the
metaphysical and other presumptions that go into his
discourse.

Yes, of course. I don't think anyone is denying that -
quite the reverse, people who argue /against /Bruno
often do so on the basis of unexamined metaphysical
assumptions (like primary materialism)


And the contrary, that primary materialism is false, is just
as much an unevidenced metaphysical assumption.


But it's not an assumption in Bruno's argument. As I understand
it, his theory in outline is:

1. All our thoughts are certain kinds of computations.
2. The physical world is an inference from our thoughts.
3. Computations are abstract relations among mathematical objects.
4. The physical world is instantiated by those computations that
correspond to intersubjective agreement in our thoughts.
5. Our perceptions/thoughts/beliefs about the world are modeled
by computed relations between the computed physics and our
computed thoughts.
6. The UD realizes (in Platonia) all possible computations and
so realizes the model 1-5 in all possible ways and this produces
the multiple-worlds of QM, plus perhaps infinitely many other
worlds which he hopes to show have low measure.


I leave it to Bruno to comment on whether this is a fair summary of
his theory. I have difficulties with several of the points you list.
But that aside, one has to ask exactly what has been achieved, even
if all of this goes through? I do not think it explains
consciousness. It seems to stem from the idea that consciousness is
a certain type of computation (that can be emulated in a universal
Turing machine, or general purpose computer.) This is then developed
as a form of idealism (2 above) to argue that the physical world and
our perceptions, thoughts, and beliefs about that world, are also
certain types of computations.

But this is no nearer to an explanation of consciousness than the
alternative model of assuming a primitive physical universe and
arguing that consciousness supervenes on the physical structure of
brains, and that mathematics is an inference from our physical
experiences. Consciousness supervenes on computations? What sort of
computation? Why on this sort and not any other sort? Similar
questions arise in the physicalist account of course, but proposing
a new theory that does not answer any of the questions posed by the
original theory does not seem like an advance to me. At least
physicalism has evolutionary arguments open to it as an explanation
of consciousness

The physicalist model has the advantage that it gives the physical
world directly -- physics does not have to be constructed from some
abstract computations in Platonia (even if such a concept can be
given any meaning.) If you take the degree of agreement with
observation as the measure of success of a theory, then physicalism
wins hands down. Bruno's theory does not currently produce any real
physics at all.

The discussion of the detailed steps in the argument Bruno gives is
merely a search for clarification. As I have said, many things seem
open to philosophical discussion, and some of Bruno's definitions
seem self-serving. When I seek clarification, the ground seems to
move beneath me. The detailed argument is hard to pin down for these
reasons.


Physicalism reduces to computationalism if the physics in the brain is 
Turing emulable, and then if you follow Bruno's reasoning in the 
UDA computationalism leads to elimination of a primary physical world.


But physics itself is not Turing emulable. The no-cloning theorem of 
quantum physics precludes it.


Bruce


Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 16 Apr 2015, at 21:18, meekerdb wrote:


On 4/16/2015 1:11 AM, Bruno Marchal wrote:


On 16 Apr 2015, at 02:52, meekerdb wrote:


On 4/15/2015 5:29 PM, LizR wrote:

On 15 April 2015 at 04:40, meekerdb meeke...@verizon.net wrote:
On 4/13/2015 11:31 PM, Quentin Anciaux wrote:


On 14 Apr 2015 at 08:04, Stathis Papaioannou stath...@gmail.com wrote:


 Certainly some theories of consciousness might not allow  
copying, but

 that cannot be a logical requirement. To claim that something is
 logically impossible is to claim that it is self-contradictory.

I don't see why a theory saying, like I said in the paragraph above, that consciousness could not be copied would be self-contradictory... You have to see that when you say consciousness is duplicatable, you assume a lot of things about reality and how it works, and that you're making a metaphysical commitment, a leap of faith concerning what you assume the real to be and the reality itself. That's all I'm saying; but clearly, if computationalism is true, consciousness is obviously duplicatable.


Quentin
In order to say what duplication of consciousness is and whether it is non-contradictory, you need some propositional definition of it. Not just an introspective "well, everybody knows what it is".


Comp assumes it's an outcome of computational processes, at some  
level. Is that enough to be a propositional definition?


I don't think it's specific enough because it isn't clear whether  
computational process means a physical process or an abstract  
one.  If you take computational process to be the abstract  
process in Platonia then it would not be duplicable;


?

The UD copied the abstract process an infinity of times. It  
might appear in


phi_567_(29)^45, phi_567_(29)^46, phi_567_(29)^47, phi_567_(29)^48,  
phi_567_(29)^49, phi_567_(29)^50, ... and in


phi_8999704_(0)^89,   phi_8999704_(0)^90,  phi_8999704_(0)^91,   
phi_8999704_(0)^92,  phi_8999704_(0)^93,


I don't understand your notation here.  Does phi_i(x) refer to the  
ith function in some list of all functions?


Yes. The computably enumerable (with repetitions) list of the partial computable functions. You get one when you choose your favorite universal programming language and order the programs lexicographically. This determines a list of the phi_i.
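A runnable toy illustration of that recipe (illustration only: the mini-language below, strings over '+' and '-' read as "add 1" / "subtract 1", is not universal, but it shows how ordering the programs of a fixed language lexicographically induces a list phi_0, phi_1, ...):

from itertools import count, islice, product

def programs():
    # All programs of the toy language, in length-lexicographic order.
    for n in count(0):
        for chars in product("+-", repeat=n):
            yield "".join(chars)

def phi(i):
    # The unary function computed by the i-th program (the toy analogue of phi_i).
    src = next(islice(programs(), i, None))
    def run(y):
        for c in src:
            y = y + 1 if c == "+" else y - 1
        return y
    return run

# phi(0) is the empty program (identity); phi(3) is '++', which adds 2.
assert phi(0)(7) == 7 and phi(3)(7) == 9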




And does the exponent refer to repeated iteration: phi_i(x)^n+1 :=  
phi_i(phi_i(x)^n)?


No. phi_i(x)^n represents the first n steps of the computation.

A universal dovetailer is given by the following program:

FOR ALL x, y, z
compute phi_x(y)^z
END

Here the dovetailing is managed by the infinite FOR ALL.
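A minimal runnable sketch of that dovetailing (illustration only; the real UD runs over a fixed enumeration phi_0, phi_1, ... of all partial computable functions, here replaced by a small hypothetical stand-in PHI, with "one step" meaning one item yielded by a generator):

from itertools import islice

def phi_0(y):
    # A diverging computation: never halts.
    while True:
        yield ("looping on", y)

def phi_1(y):
    # A converging computation: halts after y steps.
    for k in range(y):
        yield ("step", k)

PHI = [phi_0, phi_1]

def universal_dovetailer(rounds):
    # Interleave ever more steps of phi_x(y) for ever more (x, y), so no single
    # diverging computation blocks the others (Bruno's 'FOR ALL x, y, z').
    for n in range(rounds):              # the real UD never stops
        for x in range(len(PHI)):
            for y in range(n + 1):
                # 'compute phi_x(y)^n': run the first n steps of phi_x on input y
                for _ in islice(PHI[x](y), n):
                    pass

universal_dovetailer(rounds=10)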

Bruno










every copy would just be a token of the same process.  I think  
that's what Bruno means.


The consciousness will be the same, but it is multiplied (in some  
3-1 sense) in UD* (sigma_1 truth).


Are you saying that identity of indiscernibles doesn't apply to  
these computations?


Brent



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 16 Apr 2015, at 21:54, meekerdb wrote:


On 4/16/2015 1:36 AM, Bruno Marchal wrote:
So consciousness is not 1-duplicable, but can be considered as  
having been duplicated in some 3-1-view, even before it diverges.


How can it be considered duplicated before it diverges?


By associating it with different tokens of the machinery implementing it.




Are you assuming consciousness is physical, and so having different spacetime locations can distinguish two otherwise indiscernible sets of thoughts?


No, it can't, in the 1-view, but it can in the 3-1 view. OK?


I'm not sure what 3-1 view means,


It is the content of the diary of the observer outside the box (like  
in sane04), but on the consciousness of the copies involved. This is  
for the people who say that they will be conscious in W and M. That is  
true, but the pure 1-view is that they will be conscious in only one  
city (even if that happens in both cities).


In the math part, this is captured by [1][0]A, with [0]A = the usual beweisbar of Gödel, and [1]A = [0]A & A (Theaetetus).



but if you mean in the sense of running on two different machines  
then I agree.


OK.


That means duplication of consciousness/computation depends on  
distinguishability of the physical substrate with no distinction in  
the consciousness/computation.


OK. Like when the guy has already been multiplied in W and M, but has not yet opened the door. We assume of course that the two boxes are identical from inside, no windows, and the air molecules at the same places (for example). That can be made absolutely identical in step 6, where the reconstitutions are made in a virtual environment.




  But is that the duplication envisioned in the M-W thought  
experiment?


Yes, at different steps.




I find I'm confused about that.  In our quantum-mechanical world it  
is impossible to duplicate something in an unknown state.  One could  
duplicate a human being in the rough classical sense of structure at  
the molecular composition level, but not the molecular states. Such duplicates would be as similar as I am to myself of yesterday - but they would instantly diverge in thoughts, even without seeing Moscow or Washington.


In practice, yes. Assuming the duplication is done in a real world, and assuming QM. But in step six, you can manage the environments to be themselves perfectly emulated and 100% identical. That is all that is needed for the reasoning.





Yet it seems Bruno's argument is based on deterministic computation


At my substitution level. But this will entail that the real world, whatever it can be, is non-deterministic. We are WM-duplicated over all the different computations in the UD* (in arithmetic) which go through my local current state.



and requires the duplication and subsequent thoughts to be duplicates at a deterministic classical level, so that the M-man and W-man only diverge in thought when they see different things in their respective cities.


Yes. That is why the H-man cannot predict which divergence he will live. But sometimes we mention the state of the person before he or she opens the doors, for example to address a question like whether a tiny oxygen atom in the box makes a difference in the measure or not. Here it is almost a matter of convention to say that there are two consciousnesses or one. We can ascribe consciousness to the different people in the different boxes, but that is a 3-1 view. The 1-view feels itself to be in one city, and not in the other.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 16 Apr 2015, at 23:23, meekerdb wrote:


On 4/16/2015 6:14 AM, Bruno Marchal wrote:


On 16 Apr 2015, at 13:55, Bruce Kellett wrote:


Bruno Marchal wrote:

On 16 Apr 2015, at 01:51, Bruce Kellett wrote:


Yes. I think that Bruno's treatment sometimes lacks  
philosophical sophistication. Computationalism is based on the  
idea that human consciousness is Turing emulable,
This is an acceptable wording for some arguments, but at some point you might understand that this is not really the case. Comp assumes only that we can survive with a digital brain, in the quasi-operational meaning of the "yes doctor" scenario. Consciousness is a first-person notion, and that is not Turing emulable per se; in fact it is not even definable in any 3p term. That is part of the difficulty of the concept.


In COMP(2013) you write:
The digital mechanist thesis (or computationalism, or just  
comp) is then equivalent to the hypothesis that there is a level  
of description of that part of reality in which my consciousness  
remains invariant through a functional digital emulation of that  
generalized brain at that particular level.


I think this is saying that human consciousness is Turing emulable.


Only by using an identity thesis, which later will need to be abandoned. My consciousness is in Platonia, out of time and space, and might rely on infinities of computations. What the brain does consists in making it possible for that consciousness to manifest itself.


That seems problematic.  What is a consciousness conscious of when  
it is not manifesting itself?


It means it manifests itself elsewhere. It can be in a dream, like a sleeping person, or in a parallel universe, or in heaven, or God knows what.


It can also be conscious of nothing, as with some powerful amnesia drug, like Salvia, which puts you in the state of a sort of baby having not yet lived any experience. But this is not needed to get the points, so I would prefer not to insist on this in this thread, as it mentions a consciousness state in which I would not have believed before trying salvia. We can indeed be conscious, and highly conscious, yet without any memory. That is even more spectacular with only a dissociative state, where you keep your memories but completely stop identifying yourself with them. In that state, you get the "higher self" experience: your memories and your body appear to be like a window through which the real person you are can observe a world, but you know that such memories are just contingent and play no part in defining what you are (the Plotinus and mystic notion of the inner god).




 To whom does it manifest itself when it is manifest?  ISTM it's  
only manifest to itself - which on your theory wouldn't require a  
brain.


You always need a relative brain to manifest yourself with respect to some other universal number (a physical universe, a friend, a correspondent on a list, etc.). But the real you needs only the arithmetical reality, and you can dissociate yourself from your infinitely many brains in arithmetic, and get the consciousness state of the most elementary virgin (unprogrammed, unexperienced) universal numbers, which is common among all living organisms.


Here salvia is more amazing than comp, as it suggests intermediate realms, where that virgin consciousness can experience heavenly or hellish sorts of dreams. The most amazing thing is that you experience or hallucinate that this is your normal state, and that your life here was a sort of dream. The feeling of realness is vastly superior to the feeling of realness we usually experience in life, and this can be frightening for people who believe we can know that we are awake by introspection, like the people who believe that reality is WYSIWYG. As a friend of mine said after a salvia experience, you get new doubts, new fears, etc.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruce Kellett

Stathis Papaioannou wrote:
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

Stathis Papaioannou wrote:

Physicalism reduces to computationalism if the physics in the
brain is Turing emulable, and then if you follow Bruno's
reasoning in the UDA computationalism leads to elimination of a
primary physical world.

But physics itself is not Turing emulable. The no-cloning theorem of
quantum physics precludes it.

Do you mean because you can't exactly copy a given physical state? That doesn't necessarily mean the physical world as a whole cannot be emulated. And if it turns out that physics is continuous rather than discrete, you could still come arbitrarily close with digital models of the brain; if that was not good enough, you would be saying that the brain is a machine with components of zero engineering tolerance.


An exact copy of an unknown quantum state is not possible. It most 
certainly does mean that the physical world as a whole cannot be 
emulated. Quantum mechanics is based on the incommensurability of pairs 
of conjugate variables. Because you cannot measure both the position and 
momentum of a quantum state to arbitrary precision simultaneously, we 
find that there are two complementary descriptions of the physical 
system -- the description in position space and the description in 
momentum space. These are related by Fourier transforms. Any complete 
description of the physical world must take this into account.
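For concreteness, the two complementary descriptions mentioned here are related by the standard Fourier pair, with the usual uncertainty bound (textbook formulas, added only as a reference point):

  \tilde{\psi}(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, dx ,
  \qquad \Delta x \, \Delta p \ge \frac{\hbar}{2}.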


If a quantum state could be duplicated, then you could measure position exactly on one copy and momentum on the other. Exact values for these two variables simultaneously contradict the basis of quantum mechanics. And there are very good arguments for the view that the world is at base quantum: the classical picture only emerges from the quantum at some coarse-grained level of description. You cannot describe everything that happens in the physical world from this classical, coarse-grained perspective.


Bruce



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 17 Apr 2015, at 07:12, Platonist Guitar Cowboy wrote:




On Thu, Apr 16, 2015 at 2:36 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 16 Apr 2015, at 06:34, Platonist Guitar Cowboy wrote:




On Thu, Apr 16, 2015 at 5:33 AM, Bruce Kellett bhkell...@optusnet.com.au 
 wrote:

LizR wrote:
On 16 April 2015 at 12:53, Bruce Kellett bhkell...@optusnet.com.au wrote:


LizR wrote:

On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com wrote:

"Yes but I'm confused, I thought you were the one arguing that Bruno had discovered something new under the sun, a new sort of uncertainty"

That's hardly what Bruno is claiming. Step 3 is only a small
step in a logical argument. It shows that if our normal  
everyday

consciousness is the result of computation, then it can be
duplicated (in principle - if you have a problem with matter
duplicators, consider an AI programme) and that this leads to
what looks like uncertainty from one person's perspective.


You only get that impression because in Bruno's treatment of the
case -- the two copies are immediately separated by a large  
distance
and don't further interact. You might come to a different  
conclusion

if you let the copies sit down together and have a chat.

That doesn't make any difference to the argument. Will I be the  
copy sitting in the chair on the left? is less dramatic than Will  
I be transported to Moscow or Washington? and hence, I suspect,  
might not make the point so clearly. But otherwise the argument  
goes through either way.


No, because as I argued elsewhere, the two 'copies' would not agree  
that they were the same person.


Separating them geographically was meant to mimic the different
worlds idea from MWI. But I think that is a bit of a cheat.

I don't know where Bruno says he's mimicking the MWI (at this  
stage) ? This is a classical result, assuming classical computation  
(which according to Max Tegmark is a reasonable assumption for  
brains).


In the protracted arguments with John Clark, the point was repeatedly made that he accepted FPI for MWI, so why not for Step 3.


Discussion or fruitful argument assume mutual respect. The respect/ 
civility in the exchange is one-sided however, and has remained so  
for years. It's not an argument; closer to an experiment of John to  
see how often he can get away with airing personal issues clothed  
in sincerity of intellectual debate.


This occupies too much bandwidth and is a turn off from where I'm  
sitting. I'd much rather see the comp related discussions go to  
address say Telmo's request for clarification in Bruno's use of  
phi_i, or G/G* distinctions, or pedagogical demonstrations on the  
work arithmetic existentially actualizes/gets done, clarification  
on Russell's use of robust, physicalist theories that don't  
eliminate consciousness etc.


Good and interesting questions indeed.

I, of course, would be delighted if people try to really grasp the phi_i, the G/G* distinction, and the subtle but key point that the arithmetical reality simulates computations, as opposed to merely generating descriptions of them.


I am a bit busy right now. Feel free to tell me which one of those points seems to you the most interesting, or funky.


Funkiest would be arithmetical reality simulates computations aka  
free lunch :)


OK, that is important, also. And it is importantly related to the difference between a computation and a description of a computation, which is important in step 8, but also for the very meaning of what a computation can be.






But I've picked up and guess that people seem to miss use of phi_i  
or Sigma 1 sentences and such terms.


Are you sure? That is mathematics, which sometimes frightens people.




So, you thought you could offer me a hand and... I take the arm and  
more: 1 of those point = 3 + infinite possibility of other such  
terms. PGC- Zombie hunting armchair ninja of numbers.


OK. Not today, as my deadline for the paper which has been asked by  
very nice people, is ... today.
But I will create a thread on the first question above. A difficult  
point ...


Liz, it is time to find back your notes, or buy a new diary :)

Don't worry, Liz, I will try to annoy/shake everyone this time ...

Thanks for the suggestion PGC,

Bruno








Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Stathis Papaioannou
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Stathis Papaioannou wrote:



 On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 meekerdb wrote:

 On 4/15/2015 11:16 PM, Bruce Kellett wrote:

 LizR wrote:

 On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Bruno has said to me that one cannot refute a
 scientific finding by
 philosophy. One cannot, of course, refute a
 scientific observation
 by philosophy, but one can certainly enter a
 philosophical
 discussion of the meaning and interpretation of an
 observation. In
 an argument like Bruno's, one can certainly question
 the
 metaphysical and other presumptions that go into his
 discourse.

 Yes, of course. I don't think anyone is denying that -
 quite the reverse, people who argue /against /Bruno
 often do so on the basis of unexamined metaphysical
 assumptions (like primary materialism)


 And the contrary, that primary materialism is false, is just
 as much an unevidenced metaphysical assumption.


 But it's not an assumption in Bruno's argument. As I understand
 it, his theory in outline is:

 1. All our thoughts are certain kinds of computations.
 2. The physical world is an inference from our thoughts.
 3. Computations are abstract relations among mathematical objects.
 4. The physical world is instantiated by those computations that
 correspond to intersubjective agreement in our thoughts.
 5. Our perceptions/thoughts/beliefs about the world are modeled
 by computed relations between the computed physics and our
 computed thoughts.
 6. The UD realizes (in Platonia) all possible computations and
 so realizes the model 1-5 in all possible ways and this produces
 the multiple-worlds of QM, plus perhaps infinitely many other
 worlds which he hopes to show have low measure.


 I leave it to Bruno to comment on whether this is a fair summary of
 his theory. I have difficulties with several of the points you list.
 But that aside, one has to ask exactly what has been achieved, even
 if all of this goes through? I do not think it explains
 consciousness. It seems to stem from the idea that consciousness is
 a certain type of computation (that can be emulated in a universal
 Turing machine, or general purpose computer.) This is then developed
 as a form of idealism (2 above) to argue that the physical world and
 our perceptions, thoughts, and beliefs about that world, are also
 certain types of computations.

 But this is no nearer to an explanation of consciousness than the
 alternative model of assuming a primitive physical universe and
 arguing that consciousness supervenes on the physical structure of
 brains, and that mathematics is an inference from our physical
 experiences. Consciousness supervenes on computations? What sort of
 computation? Why on this sort and not any other sort? Similar
 questions arise in the physicalist account of course, but proposing
 a new theory that does not answer any of the questions posed by the
 original theory does not seem like an advance to me. At least
 physicalism has evolutionary arguments open to it as an explanation
 of consciousness

 The physicalist model has the advantage that it gives the physical
 world directly -- physics does not have to be constructed from some
 abstract computations in Platonia (even if such a concept can be
 given any meaning.) If you take the degree of agreement with
 observation as the measure of success of a theory, then physicalism
 wins hands down. Bruno's theory does not currently produce any real
 physics at all.

 The discussion of the detailed steps in the argument Bruno gives is
 merely a search for clarification. As I have said, many things seem
 open to philosophical discussion, and some of Bruno's definitions
 seem self-serving. When I seek clarification, the ground seems to
 move beneath me. The detailed argument is hard to pin down for these
 reasons.


 Physicalism reduces to computationalism if the physics in the brain is
 Turing emulable, and then if you follow Bruno's reasoning in the UDA
 computationalism leads to elimination of a primary physical world.


 But physics itself is not Turing emulable. The no-cloning theorem of
 quantum physics precludes it.


Do you mean because you can't exactly copy a given physical 

Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Stathis Papaioannou
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Stathis Papaioannou wrote:

 On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au
 mailto:bhkell...@optusnet.com.au wrote:
 Stathis Papaioannou wrote:

 Physicalism reduces to computationalism if the physics in the
 brain is Turing emulable, and then if you follow Bruno's
 reasoning in the UDA computationalism leads to elimination of a
 primary physical world.

 But physics itself is not Turing emulable. The no-cloning theorem of
 quantum physics precludes it.

 Do you mean because you can't exactly copy a given physical state? That
 doesn't necessarily mean the physical world as a whole cannot be emulated.
 And if it turns out that physics is continuous rather than discrete
 you could still come arbitrarily close with digital models of the brain; if
 that was not good enough you would be saying that the brain is a machine
 with components of zero engineering tolerance.


 An exact copy of an unknown quantum state is not possible. It most
 certainly does mean that the physical world as a whole cannot be emulated.
 Quantum mechanics is based on the incommensurability of pairs of conjugate
 variables. Because you cannot measure both the position and momentum of a
 quantum state to arbitrary precision simultaneously, we find that there are
 two complementary descriptions of the physical system -- the description in
 position space and the description in momentum space. These are related by
 Fourier transforms. Any complete description of the physical world must
 take this into account.
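
For reference, the two complementary descriptions invoked here are related by a Fourier transform, and the incommensurability is the usual uncertainty relation:

    \tilde{\psi}(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, dx,
    \qquad
    \Delta x\, \Delta p \ge \frac{\hbar}{2}.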


You can't copy an arbitrary quantum state, but you could copy it by
emulating a series of quantum states.


 If a quantum state could be duplicated, then you could measure position
 exactly on one copy and momentum on the other. Exact values for these two
 variables simultaneously contradicts the basis of quantum mechanics. And
 there are very good arguments for the view that the world is at base
 quantum: the classical picture only emerges from the quantum at some
 coarse-grained level of description. You cannot describe everything that
 happens in the physical world from this classical, coarse-grained
 perspective.
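
The standard argument behind the no-cloning theorem appealed to here is just linearity: a single unitary U that cloned every state would have to satisfy

    U\big(|\psi\rangle|0\rangle\big) = |\psi\rangle|\psi\rangle \quad \text{for all } |\psi\rangle,

but applied to a superposition |\phi\rangle = (|\psi_1\rangle + |\psi_2\rangle)/\sqrt{2}, linearity gives

    U\big(|\phi\rangle|0\rangle\big) = \tfrac{1}{\sqrt{2}}\big(|\psi_1\rangle|\psi_1\rangle + |\psi_2\rangle|\psi_2\rangle\big) \neq |\phi\rangle|\phi\rangle,

since the right-hand side would also need the cross terms |\psi_1\rangle|\psi_2\rangle and |\psi_2\rangle|\psi_1\rangle. So no such U exists.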


If you had an actual Turing machine and unlimited time, you could by brute
force emulate everything. However, that is not the point. If your car needs
a part replaced, you don't need to get a replacement exactly the same down
to the quantum level. This is the case for every machine, and there is no
reason to believe biological machines are different: infinite precision
parts would mean zero robustness.


-- 
Stathis Papaioannou



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruce Kellett

Stathis Papaioannou wrote:
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au 
mailto:bhkell...@optusnet.com.au wrote:


Stathis Papaioannou wrote:

On Friday, April 17, 2015, Bruce Kellett
bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au
wrote:
Stathis Papaioannou wrote:

Physicalism reduces to computationalism if the physics
in the
brain is Turing emulable, and then if you follow Bruno's
reasoning in the UDA computationalism leads to
elimination of a
primary physical world.

But physics itself is not Turing emulable. The no-cloning
theorem of
quantum physics precludes it.

Do you mean because you can't exactly copy a given physical
state? That doesn't necessarily mean the physical world as a
whole cannot be emulated. And if it turns out that physics is
 continuous rather than discrete you could still come
arbitrarily close with digital models of the brain; if that was
not good enough you would be saying that the brain is a machine
with components of zero engineering tolerance.

An exact copy of an unknown quantum state is not possible. It most
certainly does mean that the physical world as a whole cannot be
emulated. Quantum mechanics is based on the incommensurability of
pairs of conjugate variables. Because you cannot measure both the
position and momentum of a quantum state to arbitrary precision
simultaneously, we find that there are two complementary
descriptions of the physical system -- the description in position
space and the description in momentum space. These are related by
Fourier transforms. Any complete description of the physical world
must take this into account.

You can't copy an arbitrary quantum state, but you could copy it by 
emulating a series of quantum states.


?


If a quantum state could be duplicated, then you could measure
position exactly on one copy and momentum on the other. Exact values
for these two variables simultaneously contradicts the basis of
quantum mechanics. And there are very good arguments for the view
that the world is at base quantum: the classical picture only
emerges from the quantum at some coarse-grained level of
description. You cannot describe everything that happens in the
physical world from this classical, coarse-grained perspective.

If you had an actual Turing machine and unlimited time, you could by 
brute force emulate everything. However, that is not the point. If your 
car needs a part replaced, you don't need to get a replacement exactly 
the same down to the quantum level. This is the case for every machine, and 
there is no reason to believe biological machines are different: 
infinite precision parts would mean zero robustness.


I think you miss the point. If you want to emulate a car or a biological 
machine, then some classical level of exactness would suffice. But the 
issue is the wider program that wants to see the physical world in all 
its detail emerge from the digital computations of the dovetailer. If 
that is your goal, then you need to emulate the finest details of 
quantum mechanics. This latter is not possible on a Turing machine 
because of the theorem forbidding the cloning of a quantum state. 
Quantum mechanics is, after all, part of the physical world we observe.


Bruce



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Stathis Papaioannou
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Stathis Papaioannou wrote:

 On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au
 mailto:bhkell...@optusnet.com.au wrote:

 Stathis Papaioannou wrote:

 On Friday, April 17, 2015, Bruce Kellett
 bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au
 wrote:
 Stathis Papaioannou wrote:

 Physicalism reduces to computationalism if the physics
 in the
 brain is Turing emulable, and then if you follow Bruno's
 reasoning in the UDA computationalism leads to
 elimination of a
 primary physical world.

 But physics itself is not Turing emulable. The no-cloning
 theorem of
 quantum physics precludes it.

 Do you mean because you can't exactly copy a given physical
 state? That doesn't necessarily mean the physical world as a
 whole cannot be emulated. And if it turns out that physics is
 continuous rather than than discrete you could still come
 arbitrarily close with digital models of the brain; if that was
 not good enough you would be saying that the brain is a machine
 with components of zero engineering tolerance.

 An exact copy of an unknown quantum state is not possible. It most
 certainly does mean that the physical world as a whole cannot be
 emulated. Quantum mechanics is based on the incommensurability of
 pairs of conjugate variables. Because you cannot measure both the
 position and momentum of a quantum state to arbitrary precision
 simultaneously, we find that there are two complementary
 descriptions of the physical system -- the description in position
 space and the description in momentum space. These are related by
 Fourier transforms. Any complete description of the physical world
 must take this into account.

 You can't copy an arbitrary quantum state, but you could copy it by
 emulating a series of quantum states.


 ?

  If a quantum state could be duplicated, then you could measure
 position exactly on one copy and momentum on the other. Exact values
 for these two variables simultaneously contradicts the basis of
 quantum mechanics. And there are very good arguments for the view
 that the world is at base quantum: the classical picture only
 emerges from the quantum at some coarse-grained level of
 description. You cannot describe everything that happens in the
 physical world from this classical, coarse-grained perspective.

 If you had an actual Turing machine and unlimited time, you could by
 brute force emulate everything. However, that is not the point. If your car
 needs a part replaced, you don't need to get a replacement exactly the same
 down to the quantum level. This is the case for every machine, and there is no
 reason to believe biological machines are different: infinite precision
 parts would mean zero robustness.


 I think you miss the point. If you want to emulate a car or a biological
 machine, then some classical level of exactness would suffice. But the
 issue is the wider program that wants to see the physical world in all its
 detail emerge from the digital computations of the dovetailer. If that is
 your goal, then you need to emulate the finest details of quantum
 mechanics. This latter is not possible on a Turing machine because of the
 theorem forbidding the cloning of a quantum state. Quantum mechanics is,
 after all, part of the physical world we observe.


Although you cannot clone an arbitrary quantum state, you might be able
to simulate the set of all possible quantum states of which the unknown
state is one. No cloning is needed.

However, for Bruno's argument this level of simulation is not needed. What
is needed is a level of simulation sufficient to reproduce consciousness,
and this may be well above the quantum level.


-- 
Stathis Papaioannou



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread John Clark
On Thu, Apr 16, 2015  meekerdb meeke...@verizon.net wrote:

  Even if the laws of physics are deterministic and time-symmetric,
  events would still not be time-symmetrical if the initial condition was a
 state of minimum entropy, because then any change in that state would lead
 to an increase in entropy, and the arrow of time would be born.


  In a deterministic, time-symmetric system  there is no information loss
 with evolution either to the past or future.


True, there is no information loss; in fact there is an information increase
and thus an entropy increase, because it would take more information to
describe the new more complex higher entropy state than the previous
simpler state.

Well, OK, I've oversimplified a bit: when entropy gets high enough it
actually takes less information to describe it, although the present
universe is nowhere near that point yet. Maximum information is about
midway between maximum and minimum entropy. Put some cream in a glass
coffee cup and then very carefully put some coffee on top of it. For a
short time the two fluids will remain segregated; the entropy will be low
and the information needed to describe the cup will be low too. But then
tendrils of cream will start to move into the coffee and all sorts of
spirals and other complex patterns will form; the entropy is higher now and
the information needed to describe it is higher. After that the fluid
in the cup will reach a dull uniform color, darker than coffee but
lighter than cream; the entropy has reached a maximum, but it would take
less information to describe it. Another example is smoke from a cigarette
in a room with no air currents: it starts out as a simple smooth laminar
flow, but then turbulence kicks in and very complex patterns form, and after
that it diffuses into a uniform featureless fog.
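
A toy numerical illustration of that hump, in Python (a crude stand-in only: the compressed size of a coarse colour profile, not a rigorous measure of entropy or information):

    import zlib
    import numpy as np

    def description_size(profile):
        # bytes needed, after compression, to describe the coarse colour profile
        q = np.round(profile * 255).astype(np.uint8)
        return len(zlib.compress(q.tobytes(), 9))

    # 1-D "cup": cream (1.0) carefully layered on top of coffee (0.0)
    profile = np.zeros(256)
    profile[:128] = 1.0

    for t in range(6):
        print(f"after {t * 10000:>5} mixing steps: ~{description_size(profile)} bytes")
        for _ in range(10000):
            # crude diffusion: each cell relaxes toward the mean of its neighbours
            profile = 0.5 * profile + 0.25 * (np.roll(profile, 1) + np.roll(profile, -1))

The segregated start and the fully mixed end both compress to almost nothing; the intermediate, tendril-like stage needs the most bytes to describe.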


  So the entropy is zero and stays zero


That doesn't follow. If you knew all the information in the present state
(but both Many Worlds and Copenhagen agree that can never happen) you could
calculate from that the initial conditions of the original very low entropy
state, but calculations are physical: calculations take energy, give off
heat, and thus increase entropy. Yes, you could use reversible computing and
reduce the energy needed to perform a calculation to an arbitrarily low
figure, but the less energy you use the slower the calculation is, so by
the time you've finished the calculation about how to put things back to
their original simple state the universe has kept on evolving and is now in
a new much more complex state than when you started. So you'd have to start
all over again.
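
For scale, the minimum heat cost usually cited here is the Landauer bound for erasing one bit, which is what reversible computing tries to avoid paying:

    E_{\min} = k_B T \ln 2 \approx (1.38\times 10^{-23}\,\mathrm{J/K})\,(300\,\mathrm{K})\,(0.693) \approx 2.9\times 10^{-21}\ \mathrm{J\ per\ erased\ bit.}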

But all that is just hypothetical because although they think so for
different reasons both Many Worlds and Copenhagen agree that you can never
have complete information even in theory.

  John K Clark



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 17 Apr 2015, at 18:55, Stathis Papaioannou wrote:




On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au  
wrote:

Stathis Papaioannou wrote:
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au 
 wrote:


Stathis Papaioannou wrote:

On Friday, April 17, 2015, Bruce Kellett
bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au
wrote:
Stathis Papaioannou wrote:

Physicalism reduces to computationalism if the physics
in the
brain is Turing emulable, and then if you follow  
Bruno's

reasoning in the UDA computationalism leads to
elimination of a
primary physical world.

But physics itself is not Turing emulable. The no-cloning
theorem of
quantum physics precludes it.

Do you mean because you can't exactly copy a given physical
state? That doesn't necessarily mean the physical world as a
whole cannot be emulated. And if it turns out that physics is
continuous rather than discrete you could still come
arbitrarily close with digital models of the brain; if that  
was
not good enough you would be saying that the brain is a  
machine

with components of zero engineering tolerance.

An exact copy of an unknown quantum state is not possible. It most
certainly does mean that the physical world as a whole cannot be
emulated. Quantum mechanics is based on the incommensurability of
pairs of conjugate variables. Because you cannot measure both the
position and momentum of a quantum state to arbitrary precision
simultaneously, we find that there are two complementary
descriptions of the physical system -- the description in position
space and the description in momentum space. These are related by
Fourier transforms. Any complete description of the physical world
must take this into account.

You can't copy an arbitrary quantum state, but you could copy it by  
emulating a series of quantum states.


?

If a quantum state could be duplicated, then you could measure
position exactly on one copy and momentum on the other. Exact  
values

for these two variables simultaneously contradicts the basis of
quantum mechanics. And there are very good arguments for the view
that the world is at base quantum: the classical picture only
emerges from the quantum at some coarse-grained level of
description. You cannot describe everything that happens in the
physical world from this classical, coarse-grained perspective.

If you had an actual Turing machine and unlimited time, you could by  
brute force emulate everything. However, that is not the point. If  
your car needs a part replaced, you don't need to get a replacement  
exactly the same down to the quantum level. This is the case for every  
machine, and there is no reason to believe biological machines are  
different: infinite precision parts would mean zero robustness.


I think you miss the point. If you want to emulate a car or a  
biological machine, then some classical level of exactness would  
suffice. But the issue is the wider program that wants to see the  
physical world in all its detail emerge from the digital  
computations of the dovetailer. If that is your goal, then you need  
to emulate the finest details of quantum mechanics. This latter is  
not possible on a Turing machine because of the theorem forbidding  
the cloning of a quantum state. Quantum mechanics is, after all,  
part of the physical world we observe.


Although you cannot clone an arbitrary quantum state, you might be  
able to simulate the set of all possible quantum states of which the  
unknown state is one. No cloning is needed.


However, for Bruno's argument this level of simulation is not  
needed. What is needed is a level of simulation sufficient to  
reproduce consciousness, and this may be well above the quantum level.


That is certainly needed for the first six steps, but at step seven,  
we can relax comp up to the quantum level, and below. The UD emulates  
all programs, including all quantum computers, because quantum  
computers are Turing emulable, admittedly with an exponential slowdown, but  
the UD does not care: in arithmetic, it has all the time.
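
A minimal sketch of what such a brute-force, exponentially slow emulation looks like (a plain state-vector simulation in Python with numpy; the names are illustrative, and this is not the UD itself). Memory and time grow like 2^n, but nothing here is uncomputable:

    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n):
        # view the 2**n amplitudes as an n-axis tensor, contract the gate with
        # the target qubit's axis, and put that axis back in place
        psi = state.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [target]))
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 10                                    # 2**10 = 1024 complex amplitudes
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                            # |00...0>

    for q in range(n):                        # put every qubit in superposition
        state = apply_single_qubit_gate(state, H, q, n)

    print(np.allclose(np.abs(state) ** 2, 1 / 2 ** n))   # uniform distribution: True

Each extra qubit doubles the classical cost, which is the exponential slowdown referred to above.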


In fact, as I said to Bruce, at step seven we can understand why  
matter cannot be duplicated exactly: matter, in terms of  
computation, is the result of the FPI on the whole work of the UD.  
Below your substitution level, you cannot entangle yourself with token  
facts, as they are not relevant for your most probable computational  
history, so you multiply yourself more and more on the details.  
Eventually, to get all the decimals exact, you would need to run the entire  
dovetailing, which is impossible.


Bruno




--
Stathis Papaioannou


Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 17 Apr 2015, at 14:35, Bruce Kellett wrote:


Stathis Papaioannou wrote:
On Friday, April 17, 2015, Bruce Kellett bhkell...@optusnet.com.au  
mailto:bhkell...@optusnet.com.au wrote:

   Stathis Papaioannou wrote:
   On Friday, April 17, 2015, Bruce Kellett
   bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au
   wrote:
   Stathis Papaioannou wrote:
   Physicalism reduces to computationalism if the physics
   in the
   brain is Turing emulable, and then if you follow  
Bruno's

   reasoning in the UDA computationalism leads to
   elimination of a
   primary physical world.
   But physics itself is not Turing emulable. The no-cloning
   theorem of
   quantum physics precludes it.
   Do you mean because you can't exactly copy a given physical
   state? That doesn't necessarily mean the physical world as a
   whole cannot be emulated. And if it turns out that physics is
    continuous rather than discrete you could still come
   arbitrarily close with digital models of the brain; if that  
was
   not good enough you would be saying that the brain is a  
machine

   with components of zero engineering tolerance.
   An exact copy of an unknown quantum state is not possible. It most
   certainly does mean that the physical world as a whole cannot be
   emulated. Quantum mechanics is based on the incommensurability of
   pairs of conjugate variables. Because you cannot measure both the
   position and momentum of a quantum state to arbitrary precision
   simultaneously, we find that there are two complementary
   descriptions of the physical system -- the description in position
   space and the description in momentum space. These are related by
   Fourier transforms. Any complete description of the physical world
   must take this into account.
You can't copy an arbitrary quantum state, but you could copy it by  
emulating a series of quantum states.


?


   If a quantum state could be duplicated, then you could measure
   position exactly on one copy and momentum on the other. Exact  
values

   for these two variables simultaneously contradicts the basis of
   quantum mechanics. And there are very good arguments for the view
   that the world is at base quantum: the classical picture only
   emerges from the quantum at some coarse-grained level of
   description. You cannot describe everything that happens in the
   physical world from this classical, coarse-grained perspective.
If you had an actual Turing machine and unlimited time, you could  
by brute force emulate everything. However, that is not the point.  
If your car needs a part replaced, you don't need to get a  
replacement exactly the same down to the quantum level. This is the  
case for every machine, and there is no reason to believe biological  
machines are different: infinite precision parts would mean zero  
robustness.


I think you miss the point. If you want to emulate a car or a  
biological machine, then some classical level of exactness would  
suffice. But the issue is the wider program that wants to see the  
physical world in all its detail emerge from the digital  
computations of the dovetailer.


By the FPI on all computations. This will a priori not be computable.  
That the universe looks so predictable is the mystery with  
comp. We must fight the white rabbits away.





If that is your goal,


The result is that we have to do that if we assume computationalism in  
the cognitive science.




then you need to emulate the finest details of quantum mechanics.


Comp explains why this is impossible. The finest details of physics  
are given by a sum on many computations: the finer the details, the more  
there are. To get the numbers right up to infinitely many decimals, you would need  
to run the entire dovetailer in a finite time. We can't do that.
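
A toy sketch of the dovetailing idea itself (the "programs" here are just Python generators and the enumeration is illustrative, not a real Gödel numbering): in phase k the dovetailer starts program k and then advances every program started so far by one step, so every step of every program is eventually reached, but only in the limit.

    from itertools import count

    def make_program(i):
        # illustrative stand-in for program number i: a generator that yields
        # its successive computation steps forever
        return (f"prog {i}, step {s}" for s in count())

    def programs():
        # illustrative enumeration of "all programs"
        for i in count():
            yield make_program(i)

    def dovetail(progs, phases):
        # phase k: start one new program, then run one more step of each
        # program started so far
        started = []
        for _, p in zip(range(phases), progs):
            started.append(p)
            for prog in started:
                yield next(prog)

    for line in dovetail(programs(), phases=4):
        print(line)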



This latter is not possible on a Turing machine because of the  
theorem forbidding the cloning of a quantum state.


Same with comp.


Quantum mechanics is, after all, part of the physical world we  
observe.


It might be part of the reality we live in, but it might be explained by  
the arithmetical FPI on the computations seen from inside. IF QM is  
correct, and if comp is correct, QM has to be a theorem in comp; that  
is, the logic of []p & <>t has to give a quantization on the sigma_1  
arithmetical sentences. And that is the case.


([]p is Gödel's beweisbar(x), meaning provable(x), and <>t is the dual,  
~beweisbar('~(1=1)').)
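
Spelling the notation out, with <> the dual of []:

    \Box p \equiv \mathrm{beweisbar}(\ulcorner p \urcorner), \qquad
    \Diamond p \equiv \neg \Box \neg p, \qquad
    \Diamond t \equiv \neg\,\mathrm{beweisbar}(\ulcorner \neg(1{=}1) \urcorner),

so the modality referred to above, []p & <>t, is provability together with consistency.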


Don't confuse Digital physics (the universe is a machine) and comp (my  
body/brain is a machine), as they are incompatible: Digital  
physics entails comp, but comp entails ~Digital-physics, so digital  
physics entails ~digital physics, so digital physics is  
self-contradictory. With or without comp, we are confronted with something  
non-Turing-emulable. No need to go outside arithmetic, as we know  
since Gödel, 

Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Bruno Marchal


On 17 Apr 2015, at 08:26, Bruce Kellett wrote:


meekerdb wrote:

On 4/15/2015 11:16 PM, Bruce Kellett wrote:

LizR wrote:
On 16 April 2015 at 15:37, Bruce Kellett  
bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au  
wrote:


   Bruno has said to me that one cannot refute a scientific  
finding by
   philosophy. One cannot, of course, refute a scientific  
observation

   by philosophy, but one can certainly enter a philosophical
   discussion of the meaning and interpretation of an  
observation. In

   an argument like Bruno's, one can certainly question the
   metaphysical and other presumptions that go into his discourse.

Yes, of course. I don't think anyone is denying that - quite the  
reverse, people who argue /against/ Bruno often do so on the  
basis of unexamined metaphysical assumptions (like primary  
materialism)


And the contrary, that primary materialism is false, is just as  
much an unevidenced metaphysical assumption.
But it's not an assumption in Bruno's argument. As I understand it,  
his theory in outline is:

1. All our thoughts are certain kinds of computations.
2. The physical world is an inference from our thoughts.
3. Computations are abstract relations among mathematical objects.
4. The physical world is instantiated by those computations that  
correspond to intersubjective agreement in our thoughts.
5. Our perceptions/thoughts/beliefs about the world are modeled by  
computed relations between the computed physics and our computed  
thoughts.
6. The UD realizes (in Platonia) all possible computations and so  
realizes the model 1-5 in all possible ways and this produces the  
multiple-worlds of QM, plus perhaps infinitely many other worlds  
which he hopes to show have low measure.


I leave it to Bruno to comment on whether this is a fair summary of  
his theory.


It is not a theory. It is an argument. It is dangerous to sum it up as  
thought = computation. The only axiom is that consciousness is  
locally invariant for a digital substitution made at some level. It is  
a very weak version of Descartes' mechanism. It implies all forms of  
mechanism and computationalism studied in the literature. It is my  
theory if you want, but my theory is believed by basically all  
rationalists by default. Only a precise and rare few, usually  
philosophers, but also some scientists, like Penrose, defend a  
different theory.
What makes it stronger than the STRONG AI thesis is that it is  
supposed to apply to us.
What makes it weaker than most computationalist theses is that there  
is no bound delimited for the substitution level.


Then, I argue that this leads to the fact that any first-order  
specification of any universal machine/program/number gives a TOE. In  
particular, the laws of physics have to be derived in any of those  
TOEs. It actually gives much more, the whole of what I like to call  
theology, because it is arguably isomorphic to the theology of Proclus,  
Plotinus, and Plato. But all of this is in the results. The theory is  
only that I am Turing emulable. Even if the brain is a quantum  
computer (which I doubt), I remain Turing emulable (see the paper by  
Deutsch).







I have difficulties with several of the points you list. But that  
aside, one has to ask exactly what has been achieved, even if all of  
this goes through?


Everyone knows that Aristotle's physics has been refuted, already by  
Galileo.
The achievement here is a refutation of Aristotle's theology, in the  
computationalist frame (the one usually believed by materialists and  
atheists, but also by many religious people).






I do not think it explains consciousness.


That was not the goal. Yet I can argue that 99% of the conceptual  
problem is solved, and that the remaining 1% is simply unsolvable. But  
for the origin of the appearance of matter, the explanation is conceptually  
100% solved, in that frame and assuming it is true; the result is  
also that this can be tested.




It seems to stem from the idea that consciousness is a certain type  
of computation (that can be emulated in a universal Turing machine,  
or general purpose computer.)


Not really. Consciousness is 1p, and the math explains why  
consciousness, like truth, is not definable in arithmetic, unlike  
computations. In fact, consciousness is not definable in any  
third-person way.


It certainly does not ring right that consciousness would be a  
computation; already the FPI suggests that consciousness is  
related to infinities of computations, and to the meaning or semantics  
of those computations, which the machines are unable to define entirely  
by themselves.





This is then developed as a form of idealism (2 above) to argue that  
the physical world and our perceptions, thoughts, and beliefs about  
that world, are also certain types of computations.


Not at all. It is just that if your brain is Turing emulable, it is  
Turing emulated infinitely often in arithmetic (in a tiny part of the  
standard 

Re: Michael Graziano's theory of consciousness

2015-04-17 Thread meekerdb

On 4/17/2015 5:35 AM, Bruce Kellett wrote:
If you had an actual Turing machine and unlimited time, you could by brute force 
emulate everything. However, that is not the point. If your car needs a part replaced, 
you don't need to get a replacement exactly the same down to the quantum level. This is 
the case for every machine, and there is no reason to believe biological machines are 
different: infinite precision parts would mean zero robustness.


I think you miss the point. If you want to emulate a car or a biological machine, then 
some classical level of exactness would suffice. But the issue is the wider program that 
wants to see the physical world in all its detail emerge from the digital computations 
of the dovetailer. If that is your goal, then you need to emulate the finest details of 
quantum mechanics. This latter is not possible on a Turing machine because of the 
theorem forbidding the cloning of a quantum state. Quantum mechanics is, after all, part 
of the physical world we observe.


But the goal is not to emulate an existing physical world, it's to instantiate a physical 
world as a computation.  There's no requirement to measure a quantum state and reproduce it.


Brent



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread meekerdb

On 4/17/2015 12:38 PM, Bruno Marchal wrote:


Comp explains why this is impossible. The finest details of physics are given by sum on 
many computations, the finer the details, the more there are. To get the numbers right 
up to infinite decimals, you need to run the entire dovetailer in a finite time. We 
can't do that. 


?? The UD runs in Platonia, so what does a finite time refer to?

Brent



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread meekerdb

On 4/17/2015 12:35 AM, Stathis Papaioannou wrote:


But physics itself is not Turing emulable. The no-cloning theorem of 
quantum physics
precludes it.


Do you mean because you can't exactly copy a given physical state? That doesn't 
necessarily mean the physical world as a whole cannot be emulated. And if it turns out 
that physics is continuous rather than discrete you could still come arbitrarily 
close with digital models of the brain; if that was not good enough you would be saying 
that the brain is a machine with components of zero engineering tolerance.


The no-cloning theorem doesn't say you can't produce a copy, it says you can't get the 
information in order to know what the copy should be.  You could make a copy by accident, 
by guess, but you couldn't know it was a correct copy.  It doesn't have anything to do 
with discrete vs continuous.


If consciousness depends on quantum level states then Bruno's duplication machine will 
necessarily introduce a gap or discontinuity in consciousness - but then so does a 
concussion.  And there are good reasons (c.f. Tegmark) to think that, even on a 
supervenience theory, consciousness is a classical phenomenon.


Brent



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread meekerdb

On 4/17/2015 12:37 AM, Bruno Marchal wrote:


On 16 Apr 2015, at 21:54, meekerdb wrote:


On 4/16/2015 1:36 AM, Bruno Marchal wrote:
So consciousness is not 1-duplicable, but can be considered as having been 
duplicated in some 3-1-view, even before it diverges.


How can it be considered duplicated before it diverges?


By associating it with different tokens of the machinery implementing it.



Are you assuming consciousness is physical and so having different spacetime locations 
can distinguish two otherwise indiscernible sets of thoughts?


No, it can't, in the 1-view, but it can in the 3-1 view. OK?


I'm not sure what 3-1 view means,


It is the content of the diary of the observer outside the box (like in sane04), but on 
the consciousness of the copies involved. This is for the people who say that they will 
be conscious in W and M. That is true, but the pure 1-view is that they will be 
conscious in only one city (even if that happens in both cities).


In the math part, this is captured by [1][0]A, with [0]A = the usual beweisbar of Gödel, 
and [1]A = [0]A & A (Theaetetus).




but if you mean in the sense of running on two different machines then I agree.


OK.


That means duplication of consciousness/computation depends on distinguishability of 
the physical substrate with no distinction in the consciousness/computation.


OK. Like when the guy has already been multiplied in W and M, but has not yet opened the 
door. We assume of course that the two boxes are identical from the inside, no windows, and 
the air molecules at the same place (for example). That can be made absolutely identical 
in step 6, where the reconstitutions are made in a virtual environment.





  But is that the duplication envisioned in the M-W thought experiment?


Yes, at different steps.




I find I'm confused about that.  In our quantum-mechanical world it is impossible to 
duplicate something in an unknown state.  One could duplicate a human being in the 
rough classical sense of structure at the molecular composition level, but not the 
molecular states.  Such duplicates would be similar as I'm similar to myself of 
yesterday - but they would instantly diverge in thoughts, even without seeing Moscow or 
Washington.


In practice, yes. Assuming the duplication is done in a real world, and assuming QM. But 
in step six, you can manage the environments to be themselves perfectly emulated and 
100% identical. That is all that is needed for the reasoning.





Yet it seems Bruno's argument is based on deterministic computation


At my substitution level. But this will entail that the real world, whatever it may be, 
is non-deterministic. We are WM-duplicated on all the different computations in the UD* (in 
arithmetic) which go through my local current state.



and requires the duplication and subsequent thoughts to be duplicates at a 
deterministic classical level, so that the M-man and W-man only diverge in thought when 
they see different things in their respective cities.


Yes. That is why the H-man cannot predict which divergence he will live. But sometimes 
we mention the state of the person before he or she opens the doors, for example to 
address a question like whether a tiny oxygen atom in the box makes a difference in the 
measure or not. Here it is almost a matter of convention whether to say that there are two 
consciousnesses or one. We can ascribe consciousness to the different people in the 
different boxes, but that is a 3-1 view. The 1-view feels itself to be in one city, and not in 
the other.


No, things like radioactive decay of K40 atoms in the blood will very quickly cause the 
W-man and M-man to diverge no matter how precisely the duplicate receivers are made.  But 
I'm not sure why this would matter to your argument?  Is it important to the argument that 
they diverge *only* because of a difference in perception?


Brent



Re: Michael Graziano's theory of consciousness

2015-04-17 Thread Stathis Papaioannou
On Saturday, April 18, 2015, meekerdb meeke...@verizon.net wrote:

  On 4/17/2015 12:35 AM, Stathis Papaioannou wrote:

 But physics itself is not Turing emulable. The no-cloning theorem of
 quantum physics precludes it.


  Do you mean because you can't exactly copy a given physical state? That
 doesn't necessarily mean the physical world as a whole cannot be emulated.
 And if it turns out that physics is continuous rather than discrete
 you could still come arbitrarily close with digital models of the brain; if
 that was not good enough you would be saying that the brain is a machine
 with components of zero engineering tolerance.


 The no-cloning theorem doesn't say you can't produce a copy, it says you
 can't get the information in order to know what the copy should be.  You
 could make a copy by accident, by guess, but you couldn't know it was a
 correct copy.  It doesn't have anything to do with discrete vs continuous.


Yes, that's what I meant. You might not be able to copy a quantum state but
you could create it by creating every possible quantum state. Analogously,
you might not be able to copy a classical system due to chaotic effects but
you could make a similar chaotic system. The difficulty of copying a brain
exactly is sometimes raised as an argument against computationalism but
this is due to a misapprehension.


 If consciousness depends on quantum level states then Bruno's duplication
 machine will necessarily introduce a gap or discontinuity in consciousness
 - but then so does a concussion.  And there are good reasons (c.f. Tegmark)
 to think that, even on a supervenience theory, consciousness is a classical
 phenomenon.


And the same consideration applies for classical copying.


-- 
Stathis Papaioannou



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruce Kellett

LizR wrote:
On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au 
mailto:bhkell...@optusnet.com.au wrote:


Bruno has said to me that one cannot refute a scientific finding by
philosophy. One cannot, of course, refute a scientific observation
by philosophy, but one can certainly enter a philosophical
discussion of the meaning and interpretation of an observation. In
an argument like Bruno's, one can certainly question the
metaphysical and other presumptions that go into his discourse.


Yes, of course. I don't think anyone is denying that - quite the 
reverse, people who argue /against/ Bruno often do so on the basis of 
unexamined metaphysical assumptions (like primary materialism)


And the contrary, that primary materialism is false, is just as much an 
unevidenced metaphysical assumption.


Bruce



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruce Kellett

Telmo Menezes wrote:
On Thu, Apr 16, 2015 at 2:53 AM, Bruce Kellett 
bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au wrote:


LizR wrote:

On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com
mailto:johnkcl...@gmail.com mailto:johnkcl...@gmail.com
mailto:johnkcl...@gmail.com wrote:

Yes but I'm confused, I thought you were the one arguing that
Bruno
had discovered something new under the sun, a new sort of
uncertainty 
That's hardly what Bruno is claiming. Step 3 is only a small

step in a logical argument. It shows that if our normal everyday
consciousness is the result of computation, then it can be
duplicated (in principle - if you have a problem with matter
duplicators, consider an AI programme) and that this leads to
what looks like uncertainty from one person's perspective.


You only get that impression because in Bruno's treatment of the
case -- the two copies are immediately separated by a large distance
and don't further interact. You might come to a different conclusion
if you let the copies sit down together and have a chat.

The conclusion of the UDA is that comp and materialism are incompatible. 
Can you formulate a protocol where the copies sit down for a chat and 
arrive at a contradiction of the UDA's conclusion?
 
Separating them geographically was meant to mimic the different

worlds idea from MWI. But I think that is a bit of a cheat.

It's just a simple way to label the two duplicates: Moscow man and 
Washington man. You could have the two reconstructions in the same room 
and label them as machine-A man and machine-B man and let them interact 
immediately. It wouldn't change the conclusion, because the conclusion 
does not depend on the copies having a chat or not. It would just make 
the argument harder to follow.


No, the argument is that both copies are equally the same person as the 
original. It is that illusion that is hard to maintain if they have a 
chat and realize that they are different people. The real issue is 
personal identity through time, and in the case of ties for closest 
follower, as in this case, it fits better with the notions of personal 
identity to say that the copies are both new persons -- inheriting a lot 
from the original of course, but the original single person has not 
become two of the *same* person.


Bruce



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread LizR
On 15 April 2015 at 19:58, Quentin Anciaux allco...@gmail.com wrote:


 Bruno, I can go back as far as 2008 for such discussions with John Clark
 in my gmail archives about step 3... it's useless to continue to answer him
 (at least on your work, and surely on anything else), he will never accept
 anything, and will never go beyond that point, he doesn't want to have a
 genuine discussion... it will go back in circle again, he will mock your
 acronyms, he will say, he doesn't know what step 1,2 are, he will do biased
 comparisons, he will say it's stupid, or false or stupid again etc etc
 etc... you give him hours of your life that he doesn't deserve...

 Jeez, I had no idea. I'd have given up long ago if I was him...






Re: Michael Graziano's theory of consciousness

2015-04-16 Thread LizR
On 16 April 2015 at 18:16, Bruce Kellett bhkell...@optusnet.com.au wrote:

 LizR wrote:

 On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au
 mailto:bhkell...@optusnet.com.au wrote:

 Bruno has said to me that one cannot refute a scientific finding by
 philosophy. One cannot, of course, refute a scientific observation
 by philosophy, but one can certainly enter a philosophical
 discussion of the meaning and interpretation of an observation. In
 an argument like Bruno's, one can certainly question the
 metaphysical and other presumptions that go into his discourse.


 Yes, of course. I don't think anyone is denying that - quite the reverse,
 people who argue /against/ Bruno often do so on the basis of unexamined
 metaphysical assumptions (like primary materialism)


 And the contrary, that primary materialism is false, is just as much an
 unevidenced metaphysical assumption.


Of course it would be, but no one is assuming that.



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Telmo Menezes
On Thu, Apr 16, 2015 at 2:53 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 LizR wrote:

 On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com mailto:
 johnkcl...@gmail.com wrote:

 Yes but I'm confused, I thought you were the one arguing that Bruno
 had discovered something new under the sun, a new sort of
 uncertainty
 That's hardly what Bruno is claiming. Step 3 is only a small step in a
 logical argument. It shows that if our normal everyday consciousness is the
 result of computation, then it can be duplicated (in principle - if you
 have a problem with matter duplicators, consider an AI programme) and that
 this leads to what looks like uncertainty from one person's perspective.


 You only get that impression because in Bruno's treatment of the case --
 the two copies are immediately separated by a large distance and don't
 further interact. You might come to a different conclusion if you let the
 copies sit down together and have a chat.


The conclusion of the UDA is that comp and materialism are incompatible.
Can you formulate a protocol where the copies sit down for a chat and
arrive at a contradiction of the UDA's conclusion?


 Separating them geographically was meant to mimic the different worlds
 idea from MWI. But I think that is a bit of a cheat.


It's just a simple way to label the two duplicates: Moscow man and
Washington man. You could have the two reconstructions in the same room and
label them as machine-A man and machine-B man and let them interact
immediately. It wouldn't change the conclusion, because the conclusion does
not depend on the copies having a chat or not. It would just make the
argument harder to follow.




 Bruce






Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruno Marchal


On 16 Apr 2015, at 05:33, Bruce Kellett wrote:


LizR wrote:
On 16 April 2015 at 12:53, Bruce Kellett bhkell...@optusnet.com.au  
mailto:bhkell...@optusnet.com.au wrote:

   LizR wrote:
   On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com
   mailto:johnkcl...@gmail.com mailto:johnkcl...@gmail.com
   mailto:johnkcl...@gmail.com wrote:
   Yes but I'm confused, I thought you were the one arguing  
that

   Bruno
   had discovered something new under the sun, a new sort of
   uncertainty That's hardly what Bruno is claiming.  
Step 3 is only a small
   step in a logical argument. It shows that if our normal  
everyday

   consciousness is the result of computation, then it can be
   duplicated (in principle - if you have a problem with matter
   duplicators, consider an AI programme) and that this leads to
   what looks like uncertainty from one person's perspective.
   You only get that impression because in Bruno's treatment of the
   case -- the two copies are immediately separated by a large  
distance
   and don't further interact. You might come to a different  
conclusion

   if you let the copies sit down together and have a chat.
That doesn't make any difference to the argument. "Will I be the  
copy sitting in the chair on the left?" is less dramatic than "Will  
I be transported to Moscow or Washington?" and hence, I suspect,  
might not make the point so clearly. But otherwise the argument  
goes through either way.


No, because as I argued elsewhere, the two 'copies' would not agree  
that they were the same person.



   Separating them geographically was meant to mimic the different
   worlds idea from MWI. But I think that is a bit of a cheat.
I don't know where Bruno says he's mimicking the MWI (at this  
stage) ? This is a classical result, assuming classical computation  
(which according to Max Tegmark is a reasonable assumption for  
brains).


In the protracted arguments with John Clark, the point was repeatedly  
made that he accepted FPI for MWI, so why not for Step 3. Step 3 is  
basically to introduce the idea of FPI, and hence form a link with  
the MWI of quantum mechanics. This may not always have been made  
explicit, but the intention is clear.


It is not made at all. People who criticize the UDA always criticize what  
they themselves add to the reasoning. This is not valid. People who  
do that criticize only themselves, not the argument presented.



Step 3 does not succeed in this because the inference to FPI depends  
on a flawed concept of personal identity.



Step 3 leads to the FPI, and to see what happens next, there are steps  
4, 5, 6, 7 and 8. Then the translation into arithmetic shows how to  
already extract the logic of the observable, so that we might refute a  
form of comp (based on comp + the classical theory of knowledge). The  
main point there is that incompleteness refutes Socrates' argument  
against Theaetetus, and we can almost directly retrieve the  
Parmenides-Plotinus theology in the discourse of the introspecting  
universal (Löbian) machine.


Bruno




Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruno Marchal


On 15 Apr 2015, at 09:58, Quentin Anciaux wrote:




2015-04-15 9:35 GMT+02:00 Bruno Marchal marc...@ulb.ac.be:

On 15 Apr 2015, at 00:15, John Clark wrote:


On Tue, Apr 14, 2015  Telmo Menezes te...@telmomenezes.com wrote:

 I predict that I will win 1 million dollar by tomorrow. I know my  
prediction is correct because this will happen in one of the  
branches of the multiverse. Do you agree with this statement?


No I do not agree because matter duplicating machines do not exist  
yet so if I check tomorrow the laws of physics will allow me to  
find only one chunk of matter that fits the description of Mr. I  
(that is a chunk of matter that behaves in a  Telmomenezesian way),  
and that particular chunk of matter does not appear to have a  
million dollars. However if the prediction was tomorrow Telmo  
Menezes will win a million dollars then I would agree, provided of  
course that the Many Worlds interpretation of Quantum Mechanics is  
true.


 You are trying to play a game that is absurd, which is to deny  
the first person view.


That is ridiculous, only a fool would deny the first person view  
and John Clark is not a fool. Mr.I can always look to the past and  
see one unique linear sequence of Mr.I's leading up to him, and Mr.  
I can remember being every one of them. But things are very  
different looking to the future, nothing is unique and far from  
being linear, things could hardly be more parallel, with an  
astronomical and possibly infinite number of branchings, and Mr. I  
can't remember being any of them. And that is why the sense of  
first person identity has nothing to do with our expectations of  
the future but is only a function of our memories of the past.


Unfortunately, predictions and probabilities concern the future.





 You use your crusade against pronouns

If Telmo Menezes thinks that any objection to the use of personal
pronouns in thought experiments designed to illuminate the
fundamental nature of personal identity


No, we agree on the personal identity before asking the prediction
question. The duplication experiment is not designed to illuminate
the nature of personal identity, which is made clear beforehand,
with the 1p and 3p diaries.


You often say this, and never reply to the fact that this has been
debunked.




is absurd, then call John Clark's bluff and simply stop using them;
then, if Telmo Menezes can still express ideas on this subject
clearly and without circularity, it would prove that John Clark's
concerns, that people who used such pronouns were implicitly stating
what they were trying to prove, were indeed absurd.


You say that you accept the notion of first person, but what Telmo
meant is that you stop using it in the WM-prediction, where you
agree that you will be in the two places in the 3p view, each with a
unique 1p view, so that P = 1/2 is just obvious. It is not deep: to see
why it becomes deep, you need to move on to step 4, step 5, etc.
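
A minimal sketch of the 3p/1p bookkeeping in the WM duplication (an
illustration only; the function names and the iterated protocol are my
assumptions, not anything from the thread): the 3p description always
contains both continuations, while each 1p diary records a single,
unpredictable sequence of cities.

import random

def duplicate(diary):
    # 3p view: one diary goes in, two continuations come out,
    # one reconstituted in Washington ("W"), one in Moscow ("M").
    return [diary + ["W"], diary + ["M"]]

def third_person_branches(n_steps):
    # Keep every continuation: the 3p description after n duplications.
    branches = [[]]                     # start from the (empty) Helsinki diary
    for _ in range(n_steps):
        branches = [c for d in branches for c in duplicate(d)]
    return branches

def first_person_diary(n_steps):
    # 1p view of one continuation: "which copy will I feel to be?" is
    # undetermined in advance, so each step is sampled uniformly.
    diary = []
    for _ in range(n_steps):
        diary = random.choice(duplicate(diary))
    return diary

print(len(third_person_branches(4)), "third-person continuations after 4 duplications")
print("one first-person diary:", "".join(first_person_diary(4)))

After many iterations almost every 1p diary looks like a random string of
W's and M's, each letter with relative frequency close to 1/2; that is the
sense in which P = 1/2 is assigned before each duplication, even though the
3p description is fully deterministic.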







 Monty Hall knows that when the Helsinki Man in the sealed box in  
Moscow opens the door and sees Moscow the Moscow Man will be born  
from the ashes of the Helsinki Man,


The Helsinki Man in the sealed box in Moscow knows that too. He  
was fully informed of the protocol of the experiment.


OK, but it doesn't matter if he knows the protocol of the experiment
or not; regardless of where he is, until the Helsinki Man sees
Moscow or Washington the Helsinki Man will remain the Helsinki Man. So
who will become the Moscow Man? The one who sees Moscow will
become the Moscow Man.


Yes, but that is the H-man too, with the 3-1 view. Nothing is  
ambiguous, once we understand and APPLY the 1/3 distinction. That is  
what you never seem to do.





Oh well, the good thing about tautologies is that they're always  
true.


  Verb tenses also become problematic if you introduce time  
machines.


 Douglas Adams had something to say about this in The Hitchhikers  
Guide to the Galaxy:


 Yes, I love it too. Doesn't it worry you a bit that your  
grammatical argument is so similar to one found in an absurdist  
work of fiction?


No because if time machines actually existed then it wouldn't be  
absurd at all, the English language really would need a major  
overhaul in the way it uses verb tenses. And if matter duplicating  
machines existed the English language really would need a major  
overhaul about the way it uses personal pronouns. The only  
difference is that if the laws of physics are what we think they  
are then time machines are NOT possible, but if the laws of physics  
are what we think they are then matter duplicating machines ARE  
possible.


 Show me how to do it. Describe quantum uncertainty according to  
the MWI without personal pronouns. I know you will be able to do it  
because:

a) you like the MWI
b) you hate personal pronouns

CASE #1

Telmo Menezes shoots one photon at 2 slits with a photographic  
plate behind the slits. As the photon approaches the 

Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruno Marchal


On 16 Apr 2015, at 02:52, meekerdb wrote:


On 4/15/2015 5:29 PM, LizR wrote:

On 15 April 2015 at 04:40, meekerdb meeke...@verizon.net wrote:
On 4/13/2015 11:31 PM, Quentin Anciaux wrote:


On 14 Apr 2015 at 08:04, Stathis Papaioannou stath...@gmail.com wrote:


 Certainly some theories of consciousness might not allow copying, but
 that cannot be a logical requirement. To claim that something is
 logically impossible is to claim that it is self-contradictory.

I don't see why a theory saying, as I said in the paragraph above,
that consciousness could not be copied would be
self-contradictory... You have to see that when you say
consciousness is duplicatable, you assume a lot of things about
reality and how it works, and that you're making a
metaphysical commitment, a leap of faith concerning what you
assume the real and reality itself to be. That's all I'm
saying, but clearly if computationalism is true, consciousness is
obviously duplicatable.


Quentin
In order to say what duplication of consciousness is, and whether it
is non-contradictory, you need some propositional definition of it.
Not just an introspective "well, everybody knows what it is".


Comp assumes it's an outcome of computational processes, at some  
level. Is that enough to be a propositional definition?


I don't think it's specific enough because it isn't clear whether  
computational process means a physical process or an abstract one.   
If you take computational process to be the abstract process in  
Platonia then it would not be duplicable;


?

The UD copied the abstract process an infinity of times. It might  
appear in


phi_567(29)^45, phi_567(29)^46, phi_567(29)^47, phi_567(29)^48,
phi_567(29)^49, phi_567(29)^50, ... and in

phi_8999704(0)^89, phi_8999704(0)^90, phi_8999704(0)^91,
phi_8999704(0)^92, phi_8999704(0)^93, ...





every copy would just be a token of the same process.  I think  
that's what Bruno means.


The consciousness will be the same, but it is multiplied (in some 3-1  
sense) in UD* (sigma_1 truth).
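
To make the notation concrete, here is a small sketch of the dovetailing
idea (an illustration under stated assumptions, not Bruno's code:
phi_i(j)^k is read here as "step k of program i run on input j", and the
"programs" are toy generators rather than a real enumeration of the
partial computable functions). The point is only that a universal
dovetailer visits every (program, input, step) triple, so a given abstract
computational state recurs under many different indices.

def phi(i, j):
    # Toy stand-in for "program i on input j": a generator of successive states.
    state = (i + j) % 7
    while True:
        yield state
        state = (3 * state + i) % 7

def dovetail(stages):
    # Universal dovetailing: at stage n, run every program i <= n on every
    # input j <= n for one more step; as stages grows, every phi_i(j)^k
    # is eventually reached.
    machines, trace = {}, []            # trace records (i, j, k, state)
    for n in range(stages):
        for i in range(n + 1):
            for j in range(n + 1):
                m = machines.setdefault((i, j), [phi(i, j), 0])
                m[1] += 1
                trace.append((i, j, m[1], next(m[0])))
    return trace

trace = dovetail(6)
target = trace[0][3]                    # the state reached by phi_0(0)^1
print([(i, j, k) for (i, j, k, s) in trace if s == target][:5])

Run without a bound on the stages, this is the UD: every computation,
including the one emulating a given mind, is generated again and again at
different indices.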


Bruno


But I think Stathis is thinking of a copy of an AI, not just a  
particular computation by that AI.


Brent





Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruno Marchal


On 16 Apr 2015, at 08:16, Bruce Kellett wrote:


LizR wrote:
On 16 April 2015 at 15:37, Bruce Kellett bhkell...@optusnet.com.au wrote:
   Bruno has said to me that one cannot refute a scientific finding  
by

   philosophy. One cannot, of course, refute a scientific observation
   by philosophy, but one can certainly enter a philosophical
   discussion of the meaning and interpretation of an observation. In
   an argument like Bruno's, one can certainly question the
   metaphysical and other presumptions that go into his discourse.
Yes, of course. I don't think anyone is denying that - quite the
reverse, people who argue *against* Bruno often do so on the basis
of unexamined metaphysical assumptions (like primary materialism).


And the contrary, that primary materialism is false, is just as much  
an unevidenced metaphysical assumption.


Are you doing this on purpose?

The fact that primary materialism is epistemologically contradictory  
is the *result* of the UD Argument (UDA).

It is not an assumption. It is what the whole UDA reasoning is for.

You assume a primary physical universe. You have to explain how  
primary matter makes it possible for a machine to distinguish a  
physical computation from an arithmetical one, and this without  
abandoning comp, to make your point.


Bruno








Bruce



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruno Marchal


On 16 Apr 2015, at 04:43, LizR wrote:

On 16 April 2015 at 12:53, Bruce Kellett bhkell...@optusnet.com.au  
wrote:

LizR wrote:
On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com wrote:


Yes but I'm confused, I thought you were the one arguing that Bruno
had discovered something new under the sun, a new sort of
uncertainty.

That's hardly what Bruno is claiming. Step 3 is only a small step in
a logical argument. It shows that if our normal everyday  
consciousness is the result of computation, then it can be  
duplicated (in principle - if you have a problem with matter  
duplicators, consider an AI programme) and that this leads to what  
looks like uncertainty from one person's perspective.


You only get that impression because, in Bruno's treatment of the
case, the two copies are immediately separated by a large distance
and don't interact further. You might come to a different conclusion
if you let the copies sit down together and have a chat.


That doesn't make any difference to the argument. "Will I be the
copy sitting in the chair on the left?" is less dramatic than "Will
I be transported to Moscow or Washington?" and hence, I suspect,
might not make the point so clearly. But otherwise the argument goes
through either way.


Separating them geographically was meant to mimic the different  
worlds idea from MWI. But I think that is a bit of a cheat.


I don't know where Bruno says he's mimicking the MWI (at this
stage)? This is a classical result, assuming classical computation
(which, according to Max Tegmark, is a reasonable assumption for
brains).


Good remark, but apparently Bruce did not hear it.

Bruno








Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Bruno Marchal


On 16 Apr 2015, at 02:50, Bruce Kellett wrote:


meekerdb wrote:

On 4/15/2015 4:51 PM, Bruce Kellett wrote:


That then leads to the questions of personal identity. As a  
person, my consciousness changes from moment to moment with  
changing thoughts and external stimuli, but I remain the same  
person. Can two spatially distinct consciousnesses ever be the  
same person? I don't think so, even if they stem from the same  
digital copy at some point. M-man and W-man are different persons,  
and neither is the unique closest continuer of H-man, so it is not  
that H-man is uncertain of his future -- he doesn't have one.


This differs from MWI in that, in MWI, the continuers are in  
different worlds.

Right. Like AIs in separate but identical worlds.


Don't you then run into the problem of the identity of  
indiscernibles? The programs may be run on different computers in  
our world, and thus discernible, but from inside the program there  
is only one consciousness. Just the same as if you ran the program  
at different times on the same computer. Same inputs -- same  
outputs. Not different from the point of view of the simulation.



Good. But that applies also in the computation emulated by the sigma_1  
truth, which is not physical.
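
A toy way to see the "same inputs, same outputs" point (an illustration
only; the hashing and the toy update rule are my assumptions, not anything
from the thread): two runs of the same deterministic program, whether on
different machines or at different times, produce bit-for-bit identical
traces, so nothing available from inside the computation distinguishes one
token from the other.

import hashlib

def run(program, steps):
    # Deterministic toy "program": returns its full execution trace.
    state, trace = sum(program.encode()) % 1009, []
    for _ in range(steps):
        state = (state * 31 + 7) % 1009     # arbitrary fixed update rule
        trace.append(state)
    return trace

# Two tokens of the same computation ("different computers", or the same
# computer at different times): the traces, hence the digests, are identical.
t1, t2 = run("demo", 1000), run("demo", 1000)
d1 = hashlib.sha256(str(t1).encode()).hexdigest()
d2 = hashlib.sha256(str(t2).encode()).hexdigest()
print(d1 == d2)    # True: from the inside, the two runs are indiscernible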


Bruno




Bruce



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Telmo Menezes
On Thu, Apr 16, 2015 at 10:03 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Telmo Menezes wrote:

 On Thu, Apr 16, 2015 at 2:53 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:

 LizR wrote:
 On 15 April 2015 at 10:15, John Clark johnkcl...@gmail.com wrote:

 Yes but I'm confused, I thought you were the one arguing that Bruno
 had discovered something new under the sun, a new sort of uncertainty.

 That's hardly what Bruno is claiming. Step 3 is only a small step in a
 logical argument. It shows that if our normal everyday consciousness is
 the result of computation, then it can be duplicated (in principle - if
 you have a problem with matter duplicators, consider an AI programme)
 and that this leads to what looks like uncertainty from one person's
 perspective.


 You only get that impression because, in Bruno's treatment of the case,
 the two copies are immediately separated by a large distance and don't
 interact further. You might come to a different conclusion if you let
 the copies sit down together and have a chat.

 The conclusion of the UDA is that comp and materialism are incompatible.
 Can you formulate a protocol where the copies sit down for a chat and
 arrive at a contradiction of the UDA's conclusion?

  Separating them geographically was meant to mimic the different
 worlds idea from MWI. But I think that is a bit of a cheat.

 It's just a simple way to label the two duplicates: Moscow man and
 Washington man. You could have the two reconstructions in the same room and
 label them as machine-A man and machine-B man and let them interact
 immediately. It wouldn't change the conclusion, because the conclusion does
 not depend on the copies having a chat or not. It would just make the
 argument harder to follow.


 No, the argument is that both copies are equally the same person as the
 original. It is that illusion that is hard to maintain if they have a chat
 and realize that they are different people.


The argument is that the two copies share the same personal diary
pre-duplication, nothing more. A chat will only confirm this.


 The real issue is personal identity through time, and in the case of ties
 for closest follower, as in this case, it fits better with the notions of
 personal identity to say that the copies are both new persons -- inheriting
 a lot from the original of course,


I think it's important to avoid mushiness. The copies inherit *everything*
from the original because we assume comp (the hypothesis that there is some
level of substitution at which a mind can be replaced with an equivalent
computation). The moment immediately after the duplication, the copies start
diverging -- it is no longer the same computation. But they will share all
memories before the duplication event.
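
A minimal sketch of that divergence (an illustration only; the class and
method names are my assumptions): the copies are bit-identical at the
substitution level, share every pre-duplication memory, and become
different computations from the first differing percept onwards.

import copy

class Mind:
    # Toy stand-in for a digital mind at the substitution level:
    # just a memory list and a deterministic update rule.
    def __init__(self):
        self.memory = []
    def experience(self, percept):
        self.memory.append(percept)

original = Mind()
original.experience("Helsinki: read the protocol")

# Duplication at the substitution level: two bit-identical copies.
w_man, m_man = copy.deepcopy(original), copy.deepcopy(original)
print(w_man.memory == m_man.memory)          # True: shared pre-duplication diary

# Divergence starts with the first differing percept.
w_man.experience("I see Washington")
m_man.experience("I see Moscow")
print(w_man.memory == m_man.memory)          # False: no longer the same computation
print(w_man.memory[0] == m_man.memory[0])    # True: pre-duplication memories stay shared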


 but the original single person has not become two of the *same* person.


This is never claimed.

Telmo.




 Bruce



Re: Michael Graziano's theory of consciousness

2015-04-16 Thread Telmo Menezes

 My problem with any view based on entropy is that entropy doesn't appear
 to be fundamental to physics; it is the statistically likely result when
 objects are put in a certain configuration and allowed to evolve randomly.


 There is, however, an interesting parallel to be made with Shannon's
 entropy, which is a measure of information content and not just a
 statistical effect. Once in the realm of digital physics, it becomes
 questionable if physical entropy and information entropy are separate
 things.


 Yes, and there is also black hole entropy. It's POSSIBLE that Boltzmann
 stumbled on something fundamental via a route that doesn't go through
 fundamental physics (B's entropy is only apparent to macroscopic beings). I
 don't know if the jury has come down in favour of entropy being in some way
 fundamental to the universe, but it's certainly possible. (Though not the
 thermodynamic sort.)


I find that this fundamental physics business begs the question. It
assumes that particles and forces are fundamental and then works from
there. Interestingly, particles themselves can only be observed in the
macro world by way of statistical measures (I believe, please correct me if
I'm wrong). Here I agree with John. Labelling particles as fundamental, and
mechanisms like "there are more ways to be complicated than simple" as
non-fundamental, seems arbitrary.
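
To make "more ways to be complicated than simple" quantitative, here is a
small sketch (an illustration, not anything from the thread; the toy system
of N two-state particles is my assumption): counting microstates per
macrostate gives the Boltzmann-style log-multiplicity, and the Shannon
entropy of the induced macrostate distribution is the same kind of
quantity, which is why the two notions blur together in a digital-physics
setting.

from math import comb, log2

N = 50                                   # toy system: N two-state "particles"

# Boltzmann-style counting: number of microstates for each macrostate
# (k = how many particles are in the "up" state).
W = {k: comb(N, k) for k in range(N + 1)}
print("microstates with k = 25:", W[25])
print("microstates with k = 0: ", W[0])
print("log2 W at k = 25 (Boltzmann entropy in bits, up to a constant):",
      round(log2(W[25]), 2))

# Shannon entropy of the macrostate distribution when every microstate is
# equally likely: H = -sum_k p_k log2 p_k with p_k = C(N, k) / 2^N.
total = 2 ** N
H = -sum((w / total) * log2(w / total) for w in W.values())
print("Shannon entropy of the macrostate distribution (bits):", round(H, 3))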




 However the laws of physics are (mainly) time-symmetric, with the
 definite exception of neutral kaon decay and the possible exception of
 wave-function collapse, and an ordered state could evolve to become more
 disordered towards the past (although that would make the past appear the
 future for any beings created within that ordered state). Yet we never see
 that happening, and there is an elephant in the cosmic room, namely the
 expansion of the universe, which (istm) must always proceed in the
 direction of the AOT.


 What if the AOT is a purely 1p phenomenon?


 Well, it's clearly a 3p phenomenon in that we all agree that things age
 etc. But it's perhaps purely a macroscopic creature phenomenon


Ok, this is what I meant. I was going for a multiverse 3p, not just the
"all the things we can agree on" 3p.



 Hence the appeal to boundary conditions. If something forces the
 universe to have zero (or very low) radius at one time extremity but not at
 the other, this asymmetry could be sufficient to drive the arrow of time in
 a particular direction.

 I've (as it were) expanded on this idea before, however, so I won't go
 on at length about it again.


 I'll search the archives when I have a bit of time.


 Briefly, the boundary condition on the universe appears to be that it has
 a big bang at one time extremity (or something like one) but not a
 corresponding crunch at the other. This alone means that the density of the
 contents of the universe is constrained to decrease globally along the time
 axis as you move away from the BB, and my contention is that this is
 probably enough to create an AOT even with the laws of physics operating -
 by assumption - time-symmetrically, when you look at the various processes
 that result from a decrease in density (and temperature, effectively, since
 particles tend to move until they reach a patch of the background fluid
 which is moving at their speed). Such outcomes include the formation of
 nuclei, atoms, and eventually gravitationally bound states like galactic
 clusters etc.


Thanks Liz. Yes, I think this makes a lot of sense. I would point out that
you are talking about entropy, even though you don't call it by name.
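
And here is a toy version of the boundary-condition point quoted above
(again only an illustration; the box-of-particles model and its parameters
are my assumptions): the dynamics is time-reversible, and the only
asymmetry is the special low-entropy condition imposed at one end, yet the
coarse-grained log-multiplicity grows as you move away from that end.

import random
from math import comb, log

random.seed(0)
N, L = 200, 1.0

# Low-entropy boundary condition at t = 0: every particle starts in the
# left half of the box. The dynamics itself (free flight plus elastic
# reflection) is time-reversible.
pos = [random.uniform(0.0, L / 2) for _ in range(N)]
vel = [random.choice([-1, 1]) * random.uniform(0.5, 1.5) for _ in range(N)]

def step(dt=0.01):
    for i in range(N):
        pos[i] += vel[i] * dt
        if pos[i] < 0.0:                 # elastic, reversible reflections
            pos[i], vel[i] = -pos[i], -vel[i]
        elif pos[i] > L:
            pos[i], vel[i] = 2 * L - pos[i], -vel[i]

for t in range(61):
    if t % 15 == 0:
        k = sum(1 for x in pos if x < L / 2)     # coarse-grained macrostate
        print("t =", t, " particles in left half:", k,
              " log W:", round(log(comb(N, k)), 1))
    for _ in range(50):
        step()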

Telmo.




