Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-22 Thread Bruno Marchal


On 21 Jun 2015, at 19:55, meekerdb wrote:


On 6/21/2015 8:16 AM, Bruno Marchal wrote:
Z is what the machine can say about the []p & <>t points of view (like the bet that you will have coffee in the modified step 3 protocol). []coffee means you get coffee in all consistent extensions (which in this protocol are W and M), and <>t is the explicit conditioning that there is at least one consistent extension, which does not follow from []p due to incompleteness. You can see that []p & <>t is a weakening of the []p & p move. Incompleteness forces the machine to provide different logics for those nuances.


I don't understand this use of "consistent". At first I thought it meant logical consistency, i.e. not proving false.


That is what I mean.


  But in the above you use it as though it meant something like "nomologically consistent".


I don't see why. I use the completeness theorem. A theory/machine is  
consistent iff it admits a model.
To be consistent = having a reality satisfying or verifying the  
beliefs.
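
Spelled out, the two standard facts being used here (in the usual notation, with Con(T) the consistency statement of a theory T):

  Con(T) = ~Prov_T(f)                  (T does not prove the false f)
  Gödel completeness:  Con(T)  iff  T has a model (a "reality" satisfying the beliefs)
  Gödel's second incompleteness:  if T is a consistent, recursively axiomatizable extension of arithmetic, then T does not prove Con(T)

The second line is the sense of "consistent = having a reality" above; the third is why <>t (consistency) has to be added explicitly to []p rather than derived from it.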


Bruno

PS I will have to go. Some other posts may be commented on tomorrow.




Brent



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-21 Thread meekerdb

On 6/21/2015 8:16 AM, Bruno Marchal wrote:
Z is what the machine can say about the []p & <>t points of view (like the bet that you will have coffee in the modified step 3 protocol). []coffee means you get coffee in all consistent extensions (which in this protocol are W and M), and <>t is the explicit conditioning that there is at least one consistent extension, which does not follow from []p due to incompleteness. You can see that []p & <>t is a weakening of the []p & p move. Incompleteness forces the machine to provide different logics for those nuances.


I don't understand this use of "consistent". At first I thought it meant logical consistency, i.e. not proving false. But in the above you use it as though it meant something like "nomologically consistent".


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-21 Thread Bruno Marchal


On 19 Jun 2015, at 18:36, Terren Suydam wrote:




On Mon, Jun 15, 2015 at 3:23 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 15 Jun 2015, at 15:32, Terren Suydam wrote:



On Sun, Jun 14, 2015 at 10:27 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


We can, as nobody could pretend to have the right interpretation of Plotinus. In fact, that very question has been raised about Plotinus's own interpretation of Plato.

Now, it would be necessary to quote large passages of Plotinus to explain why indeed, even without comp, the two matters (the intelligible and the sensible one) are arguably a sort of hypostases, even in the mind of Plotinus, but as a platonist, he is forced to consider them degenerate and belonging to the realm where God loses control, making matter a quasi-synonym of evil (!).

The primary hypostases are the three on the top right of this diagram (T, for truth, G*, and S4Grz):


  T

G G*

  S4Grz


Z   Z*

X   X*


Making Z, Z*, X, X* into hypostases homogenizes nicely Plotinus' presentation, and puts a lot of pieces of the platonist puzzle into place. It makes other passages of Plotinus completely natural.

Note that to get the material aspect of the (degenerate, secondary) hypostases, we still need to make comp explicit, by restricting the arithmetical interpretation of the modal logics to the sigma_1 (UD-accessible) propositions (leading to the logics (below G1 and G1*) S4Grz1, Z1*, X1*, where the quantum quantization appears).

The plain-language rationale is that both in Plotinus (according to some passages; this is accepted by many scholars too) and in the universal machine's mind, UDA shows that psychology, theology, even biology, are obtained by intensional (modal) variants of the intellect and the ONE.

By incompleteness, provability is of the type "belief". We lose knowledge here: we don't have []p -> p in G. This makes knowledge emulable, and meta-definable, in the language of the machine, by the Theaetetus method: [1]p = []p & p.
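
To spell out the incompleteness step (standard provability-logic facts, stated in the thread's notation):

  G does not prove []p -> p    (for example []f -> f, i.e. ~[]f, consistency, is not provable),
  but G* does prove []p -> p   (true of the sound machine, yet unprovable by it),
  and [1]p = []p & p gives [1]p -> p by definition, so the Theaetetus variant behaves like a knower.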


UDA justifies, for matter: []p & <>t (cf. the coffee modification of step 3: a physical certainty remains true in all consistent continuations ([]p), and such a continuation exists (<>t)). It is the Timaeus "bastard calculus", referred to by Plotinus in his two-matters chapter (ennead II-6).

Sensible matter is just a reapplication of the Theaetetus move, on intelligible matter.


I hope this helps, ask anything.

Bruno


I'm not conversant in modal logic, so a lot of that went over my  
head.



Maybe the problem is here. Modal logic, or even just the modal notation, is supposed to make things easier.

For example, I am used to explaining the difference between agnosticism and belief by using the modality []p, which you can in this context read as "I believe p". If ~ represents negation, the old definition of atheism was []~g (the belief that God does not exist), and agnosticism is ~[]g (and perhaps ~[]~g too).

The language of modal logic is the usual language of logic (p & q, p v q, p -> q, ~p, etc.) + the symbol [], usually read as "it is necessary" (in the alethic context), or "it is obligatory" (in the deontic context), or "forever" (in some temporal context), or "it is known that" (in some epistemic context), or "it is asserted by a machine" (in the computer science context), etc.

<>p abbreviates ~[]~p (possible p = not necessary that not p).
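
As a concrete illustration of the notation, here is a minimal Kripke-style evaluator in Python; the worlds, the accessibility relation and the valuation below are hypothetical choices made just for this example.

# Minimal Kripke-model evaluator: []f holds at w iff f holds at every
# world accessible from w; <>f is ~[]~f, i.e. f holds at some accessible world.
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}}
val = {"g": {"w2"}}  # the atomic proposition g holds only in world w2

def holds(f, w):
    """Formulas: an atom like "g", or ("~", f), ("&", f1, f2), ("[]", f), ("<>", f)."""
    if isinstance(f, str):
        return w in val.get(f, set())
    op = f[0]
    if op == "~":
        return not holds(f[1], w)
    if op == "&":
        return holds(f[1], w) and holds(f[2], w)
    if op == "[]":
        return all(holds(f[1], v) for v in access[w])
    if op == "<>":
        return any(holds(f[1], v) for v in access[w])
    raise ValueError(op)

# Reading []g as "the agent believes g", evaluated at w1:
print(holds(("[]", "g"), "w1"))          # False -> ~[]g  : no belief that g
print(holds(("[]", ("~", "g")), "w1"))   # False -> ~[]~g : no belief that not-g
print(holds(("<>", "g"), "w1"))          # True  -> <>g, which is just ~[]~g

So at w1 the agent is agnostic in the sense above: ~[]g and ~[]~g both hold.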


All good here.


Thus my request for plain language justifications. In spite of  
that language barrier I'd like to understand what I can about this  
model because it is the basis for your formal argument AUDA and  
much of what you've created seems to depend on it.


In AUDA, the theory is elementary arithmetic (Robinson Arithmetic). I define in that theory the statement "PA asserts F", with F an arithmetical formula. Then RA is used only as the universal system emulating the conversation that I have with PA.
Everything is derived from the axioms of elementary arithmetic (but I could have used the combinators, the game of life, etc.). So I don't create anything. I interview a machine which proves propositions about itself, and by construction, I limit myself to consistent, arithmetically sound (most of the time) machines. This determines all the hypostases.

It is many years of work, and the hard work has been done by Gödel, Löb, Grzegorczyk, Boolos, Goldblatt, Solovay.
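
The key technical input is Solovay's 1976 arithmetical completeness theorem, which fixes G and G* once and for all: reading [] as the standard provability predicate of PA (or of any sound recursively axiomatizable extension of it),

  G  proves A   iff   PA proves A* for every arithmetical realization * of the letters,
  G* proves A   iff   A* is true (in the standard model) for every such realization.

This is the sense in which the logics are discovered by interviewing the machine rather than chosen.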



I think it's debatable that you didn't create anything. I think  
reasonable people could disagree on whether the 8 hypostases you've  
put forward as the basis for your AUDA argument are created vs  
discovered.


Not only are they discovered, but I show that *all* self-referentially correct machines discover them when looking inward.





I'm coming from an open-minded position here - but trying to 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-19 Thread Terren Suydam
On Mon, Jun 15, 2015 at 3:23 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 15 Jun 2015, at 15:32, Terren Suydam wrote:


 On Sun, Jun 14, 2015 at 10:27 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 We can, as nobody could pretend to have the right interpretation of
 Plotinus. In fact, that very question has been raised about Plotinus's own
 interpretation of Plato.

 Now, it would be necessary to quote large passages of Plotinus to explain
 why indeed, even without comp, the two matters (the intelligible and the
 sensible one) are arguably a sort of hypostases, even in the mind of
 Plotinus, but as a platonist, he is forced to consider them degenerate and
 belonging to the realm where God loses control, making matter a
 quasi-synonym of evil (!).

 The primary hypostases are the three on the top right of this diagram
 (T, for truth, G*, and S4Grz):

   T

 G G*

   S4Grz


 Z   Z*

 X   X*


 Making Z, Z*, X, X* into hypostases homogenizes nicely Plotinus'
 presentation, and puts a lot of pieces of the platonist puzzle into place.
 It makes other passages of Plotinus completely natural.

 Note that to get the material aspect of the (degenerate, secondary)
 hypostases, we still need to make comp explicit, by restricting the
 arithmetical interpretation of the modal logics to the sigma_1
 (UD-accessible) propositions (leading to the logics (below G1 and G1*)
 S4Grz1, Z1*, X1*, where the quantum quantization appears).

 The plain-language rationale is that both in Plotinus (according to some
 passages; this is accepted by many scholars too) and in the universal
 machine's mind, UDA shows that psychology, theology, even biology, are
 obtained by intensional (modal) variants of the intellect and the ONE.

 By incompleteness, provability is of the type "belief". We lose
 knowledge here: we don't have []p -> p in G.
 This makes knowledge emulable, and meta-definable, in the language of the
 machine, by the Theaetetus method: [1]p = []p & p.

 UDA justifies, for matter: []p & <>t (cf. the coffee modification of
 step 3: a physical certainty remains true in all consistent continuations
 ([]p), and such a continuation exists (<>t)). It is the Timaeus "bastard
 calculus", referred to by Plotinus in his two-matters chapter (ennead II-6).

 Sensible matter is just a reapplication of the Theaetetus move, on
 intelligible matter.

 I hope this helps, ask anything.

 Bruno


 I'm not conversant in modal logic, so a lot of that went over my head.



 Maybe the problem is here. Modal logic, or even just the modal notation,
 is supposed to make things easier.

 For example, I am used to explaining the difference between agnosticism and
 belief by using the modality []p, which you can in this context read as "I
 believe p". If ~ represents negation, the old definition of atheism
 was []~g (the belief that God does not exist), and agnosticism is ~[]g (and
 perhaps ~[]~g too).

 The language of modal logic is the usual language of logic (p & q, p v q,
 p -> q, ~p, etc.) + the symbol [], usually read as "it is necessary" (in
 the alethic context), or "it is obligatory" (in the deontic context), or
 "forever" (in some temporal context), or "it is known that" (in some
 epistemic context), or "it is asserted by a machine" (in the computer science
 context), etc.

 <>p abbreviates ~[]~p (possible p = not necessary that not p).


All good here.



 Thus my request for plain language justifications. In spite of that
 language barrier I'd like to understand what I can about this model because
 it is the basis for your formal argument AUDA and much of what you've
 created seems to depend on it.


 In AUDA, the theory is elementary arithmetic (Robinson Arithmetic). I
 define in that theory the statement "PA asserts F", with F an arithmetical
 formula. Then RA is used only as the universal system emulating the
 conversation that I have with PA.
 Everything is derived from the axioms of elementary arithmetic (but I
 could have used the combinators, the game of life, etc.). So I don't create
 anything. I interview a machine which proves propositions about itself, and
 by construction, I limit myself to consistent, arithmetically sound (most
 of the time) machines. This determines all the hypostases.

 It is many years of work, and the hard work has been done by Gödel,
 Löb, Grzegorczyk, Boolos, Goldblatt, Solovay.


I think it's debatable that you didn't create anything. I think reasonable
people could disagree on whether the 8 hypostases you've put forward as the
basis for your AUDA argument are created vs discovered. I'm coming from an
open-minded position here - but trying to assert that you're not creating
anything strikes me as a move to grant unearned legitimacy to it.



 I still am not clear on why you invent three new hypostases, granting
 the five from Plotinus (by creating G/G*, X/X*, and Z/Z* instead 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-15 Thread Terren Suydam
On Sun, Jun 14, 2015 at 10:27 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 08 Jun 2015, at 20:50, Terren Suydam wrote:



 On Mon, Jun 8, 2015 at 2:20 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 08 Jun 2015, at 15:58, Terren Suydam wrote:



 On Thu, Jun 4, 2015 at 1:59 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 04 Jun 2015, at 18:01, Terren Suydam wrote:


 OK, so given a certain interpretation, some scholars added two hypostases
 to the original three.


 It is very natural to do. The ennead VI.1 describes the three initial
 hypostases, and the subject of what is matter, notably the intelligible
 matter and the sensible matter, is the subject of the ennead II.4.

 It is a simplification of vocabulary, more than another interpretation.



 Then, it appears that you make a third interpretation by splitting the
 intellect, and the two matters.

 What justifies these splits?


 I am not sure I understand? Plotinus splits them too, as they are
 different subject matters. The intellect is the nous, the world of ideas,
 and here the world of what the machine can prove (seen by her, and by God:
 G and G*).
 But matter is what you can predict with the FPI, and so it is a different
 notion, and likewise, in Plotinus, matter is given by a platonist rereading
 of Aristotle's theory of indetermination. This is done in the ennead II-4.
 Why should we not split intellect and matter, which in appearance are
 very different? The problem is more in consistently relating them. If we
 don't distinguish them, we cannot explain the problem of relating them.


 Sorry, my question was ambiguous. What I mean is that after adding the two
 hypostases for the two matters, you have five hypostases, the initial
 three plus the two for matter.

 Then, you arrive at 8 hypostases by splitting the intellect into two, and
 you do the same for each of the matter hypostases. My question is what
 plain-language rationale justifies creating these three extra hypostases?
 And can we really say we're still talking about Plotinus's hypostases at
 this point?


 We can, as nobody could pretend to have the right interpretation of
 Plotinus. In fact, that very question has been raised about Plotinus's own
 interpretation of Plato.

 Now, it would be necessary to quote large passages of Plotinus to explain
 why indeed, even without comp, the two matters (the intelligible and the
 sensible one) are arguably a sort of hypostases, even in the mind of
 Plotinus, but as a platonist, he is forced to consider them degenerate and
 belonging to the realm where God loses control, making matter a
 quasi-synonym of evil (!).

 The primary hypostases are the three on the top right of this diagram
 (T, for truth, G*, and S4Grz):

   T

 G G*

   S4Grz


 Z   Z*

 X   X*


 Making Z, Z*, X, X* into hypostases homogenizes nicely Plotinus'
 presentation, and puts a lot of pieces of the platonist puzzle into place.
 It makes other passages of Plotinus completely natural.

 Note that to get the material aspect of the (degenerate, secondary)
 hypostases, we still need to make comp explicit, by restricting the
 arithmetical interpretation of the modal logics to the sigma_1
 (UD-accessible) propositions (leading to the logics (below G1 and G1*)
 S4Grz1, Z1*, X1*, where the quantum quantization appears).

 The plain-language rationale is that both in Plotinus (according to some
 passages; this is accepted by many scholars too) and in the universal
 machine's mind, UDA shows that psychology, theology, even biology, are
 obtained by intensional (modal) variants of the intellect and the ONE.

 By incompleteness, provability is of the type "belief". We lose
 knowledge here: we don't have []p -> p in G.
 This makes knowledge emulable, and meta-definable, in the language of the
 machine, by the Theaetetus method: [1]p = []p & p.

 UDA justifies, for matter: []p & <>t (cf. the coffee modification of
 step 3: a physical certainty remains true in all consistent continuations
 ([]p), and such a continuation exists (<>t)). It is the Timaeus "bastard
 calculus", referred to by Plotinus in his two-matters chapter (ennead II-6).

 Sensible matter is just a reapplication of the Theaetetus move, on
 intelligible matter.

 I hope this helps, ask anything.

 Bruno


I'm not conversant in modal logic, so a lot of that went over my head. Thus
my request for plain language justifications. In spite of that language
barrier I'd like to understand what I can about this model because it is
the basis for your formal argument AUDA and much of what you've created
seems to depend on it.

I still am not clear on why you invent three new hypostases, granting the
five from Plotinus (by creating G/G*, X/X*, and Z/Z* instead of just G, X,
and Z), except that you say [it] homogenizes nicely Plotinus presentation,
and put a lot of pieces of the platonist 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-15 Thread Bruno Marchal


On 15 Jun 2015, at 15:32, Terren Suydam wrote:



On Sun, Jun 14, 2015 at 10:27 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 08 Jun 2015, at 20:50, Terren Suydam wrote:




On Mon, Jun 8, 2015 at 2:20 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 08 Jun 2015, at 15:58, Terren Suydam wrote:




On Thu, Jun 4, 2015 at 1:59 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 04 Jun 2015, at 18:01, Terren Suydam wrote:

OK, so given a certain interpretation, some scholars added two  
hypostases to the original three.


It is very natural to do. The ennead VI.1 describes the three  
initial hypostases, and the subject of what is matter, notably  
the intelligible matter and the sensible matter, is the subject of  
the ennead II.4.


It is a simplification of vocabulary, more than another  
interpretation.





Then, it appears that you make a third interpretation by splitting  
the intellect, and the two matters.

What justifies these splits?


I am not sure I understand? Plotinus splits them too, as they are different subject matters. The intellect is the nous, the world of ideas, and here the world of what the machine can prove (seen by her, and by God: G and G*).
But matter is what you can predict with the FPI, and so it is a different notion, and likewise, in Plotinus, matter is given by a platonist rereading of Aristotle's theory of indetermination. This is done in the ennead II-4.
Why should we not split intellect and matter, which in appearance are very different? The problem is more in consistently relating them. If we don't distinguish them, we cannot explain the problem of relating them.



Sorry, my question was ambiguous. What I mean is that after adding  
the two hypostases for the two matters, you have five hypostases,  
the initial three plus the two for matter.


Then, you arrive at 8 hypostases by splitting the intellect into  
two, and you do the same for each of the matter hypostases. My
question is what plain-language rationale justifies creating these  
three extra hypostases?  And can we really say we're still talking  
about Plotinus's hypostases at this point?


We can, as nobody could pretend to have the right interpretation of Plotinus. In fact, that very question has been raised about Plotinus's own interpretation of Plato.

Now, it would be necessary to quote large passages of Plotinus to explain why indeed, even without comp, the two matters (the intelligible and the sensible one) are arguably a sort of hypostases, even in the mind of Plotinus, but as a platonist, he is forced to consider them degenerate and belonging to the realm where God loses control, making matter a quasi-synonym of evil (!).

The primary hypostases are the three on the top right of this diagram (T, for truth, G*, and S4Grz):


  T

G G*

  S4Grz


Z   Z*

X   X*


Making Z, Z*, X, X* into hypostases homogenizes nicely Plotinus' presentation, and puts a lot of pieces of the platonist puzzle into place. It makes other passages of Plotinus completely natural.

Note that to get the material aspect of the (degenerate, secondary) hypostases, we still need to make comp explicit, by restricting the arithmetical interpretation of the modal logics to the sigma_1 (UD-accessible) propositions (leading to the logics (below G1 and G1*) S4Grz1, Z1*, X1*, where the quantum quantization appears).

The plain-language rationale is that both in Plotinus (according to some passages; this is accepted by many scholars too) and in the universal machine's mind, UDA shows that psychology, theology, even biology, are obtained by intensional (modal) variants of the intellect and the ONE.

By incompleteness, provability is of the type "belief". We lose knowledge here: we don't have []p -> p in G.
This makes knowledge emulable, and meta-definable, in the language of the machine, by the Theaetetus method: [1]p = []p & p.

UDA justifies, for matter: []p & <>t (cf. the coffee modification of step 3: a physical certainty remains true in all consistent continuations ([]p), and such a continuation exists (<>t)). It is the Timaeus "bastard calculus", referred to by Plotinus in his two-matters chapter (ennead II-6).

Sensible matter is just a reapplication of the Theaetetus move, on intelligible matter.


I hope this helps, ask anything.

Bruno


I'm not conversant in modal logic, so a lot of that went over my head.



Maybe the problem is here. Modal logic, or even just the modal notation, is supposed to make things easier.

For example, I am used to explaining the difference between agnosticism and belief by using the modality []p, which you can in this context read as "I believe p". If ~ represents negation, the old definition of atheism was []~g (the belief that God does not exist), and agnosticism is ~[]g (and 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-14 Thread Bruno Marchal


On 08 Jun 2015, at 20:50, Terren Suydam wrote:




On Mon, Jun 8, 2015 at 2:20 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 08 Jun 2015, at 15:58, Terren Suydam wrote:




On Thu, Jun 4, 2015 at 1:59 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 04 Jun 2015, at 18:01, Terren Suydam wrote:

OK, so given a certain interpretation, some scholars added two  
hypostases to the original three.


It is very natural to do. The ennead VI.1 describes the three  
initial hypostases, and the subject of what is matter, notably the  
intelligible matter and the sensible matter, is the subject of the  
ennead II.4.


It is a simplification of vocabulary, more than another  
interpretation.





Then, it appears that you make a third interpretation by splitting  
the intellect, and the two matters.

What justifies these splits?


I am not sure I understand? Plotinus splits them too, as they are different subject matters. The intellect is the nous, the world of ideas, and here the world of what the machine can prove (seen by her, and by God: G and G*).
But matter is what you can predict with the FPI, and so it is a different notion, and likewise, in Plotinus, matter is given by a platonist rereading of Aristotle's theory of indetermination. This is done in the ennead II-4.
Why should we not split intellect and matter, which in appearance are very different? The problem is more in consistently relating them. If we don't distinguish them, we cannot explain the problem of relating them.



Sorry, my question was ambiguous. What I mean is that after adding  
the two hypostases for the two matters, you have five hypostases,  
the initial three plus the two for matter.


Then, you arrive at 8 hypostases by splitting the intellect into  
two, and you do the same for each of the matter hypostases. My
question is what plain-language rationale justifies creating these  
three extra hypostases?  And can we really say we're still talking  
about Plotinus's hypostases at this point?


We can, as nobody could pretend to have the right interpretation of Plotinus. In fact, that very question has been raised about Plotinus's own interpretation of Plato.

Now, it would be necessary to quote large passages of Plotinus to explain why indeed, even without comp, the two matters (the intelligible and the sensible one) are arguably a sort of hypostases, even in the mind of Plotinus, but as a platonist, he is forced to consider them degenerate and belonging to the realm where God loses control, making matter a quasi-synonym of evil (!).

The primary hypostases are the three on the top right of this diagram (T, for truth, G*, and S4Grz):


  T

G G*

  S4Grz


Z   Z*

X   X*


Making Z, Z*, X, X* into hypostases homogenizes nicely Plotinus' presentation, and puts a lot of pieces of the platonist puzzle into place. It makes other passages of Plotinus completely natural.

Note that to get the material aspect of the (degenerate, secondary) hypostases, we still need to make comp explicit, by restricting the arithmetical interpretation of the modal logics to the sigma_1 (UD-accessible) propositions (leading to the logics (below G1 and G1*) S4Grz1, Z1*, X1*, where the quantum quantization appears).

The plain-language rationale is that both in Plotinus (according to some passages; this is accepted by many scholars too) and in the universal machine's mind, UDA shows that psychology, theology, even biology, are obtained by intensional (modal) variants of the intellect and the ONE.

By incompleteness, provability is of the type "belief". We lose knowledge here: we don't have []p -> p in G.
This makes knowledge emulable, and meta-definable, in the language of the machine, by the Theaetetus method: [1]p = []p & p.

UDA justifies, for matter: []p & <>t (cf. the coffee modification of step 3: a physical certainty remains true in all consistent continuations ([]p), and such a continuation exists (<>t)). It is the Timaeus "bastard calculus", referred to by Plotinus in his two-matters chapter (ennead II-6).

Sensible matter is just a reapplication of the Theaetetus move, on intelligible matter.


I hope this helps, ask anything.

Bruno







Terren




And can you make this justification in plain language in a way that  
doesn't appear to be a just so interpretation that makes it  
easier for AUDA to go through?


God, or the One, is played by the notion of Arithmetical Truth. Machines and humans cannot know it, or explore it mechanically, and it is the root of the web of machines' dreams, but also of their semantics, in a large part.
The Nous is what machines can prove about themselves, and their relation with God, etc.
The Soul is where the machine proves true things, but not accidentally: as it is defined by the conjunction of p 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-08 Thread Bruno Marchal


On 08 Jun 2015, at 15:58, Terren Suydam wrote:




On Thu, Jun 4, 2015 at 1:59 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 04 Jun 2015, at 18:01, Terren Suydam wrote:




On Wed, Jun 3, 2015 at 9:31 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 03 Jun 2015, at 14:58, Terren Suydam wrote:


It would be like saying that bats' echolocation is an illusion.  
Not a perfect analogy, because a bat's facility for echolocation  
is rooted in its physiology, not constructed, but the point is  
that with both the ego and echolocation, the experiencer's  
consciousness is provided with a particular character it would not  
otherwise have been able to experience.


I usually distinguish 8 forms of ego (the eight hypostases). Each  
time it is the sigma_1 reality, but viewed from a different angle.  
The illusion (with the negative connotations) is more in the confusion between two hypostases than in each hypostase per se.

I am not sure we disagree, except on some choice of words.

Bruno

I agree that we agree :-)


I agree with this too :)




I know you've posted it in the past but can you point me to a  
summary of the original Plotinus model with the 8 hypostases?  My  
cursory googling reveals only 3 attributed to Plotinus (One,  
Intellect, Soul).


That is what Plotinus called the primary hypostases. But there are no secondary hypostases there; yet some scholars, and myself, agree that his "two matters" ennead actually describes two (degenerate, secondary) hypostases. So in the Enneads you have the three primary hypostases, which in the machine theology are given by truth (One, p), provable (Intellect, []p) and Soul (that we get with the Theaetetus idea on the One and Intellect, []p & p), and the two matters: intelligible matter ([]p & <>t) and sensible matter ([]p & <>t & p).

One = p
Intellect = []p (splits in two: G and G*)
Soul = []p & p (does not split: S4Grz)

Intelligible matter = []p & <>t (splits in two: Z and Z*)
Sensible matter = []p & <>t & p (splits in two: X and X*)

To get the propositional physics, you have to restrict the p to the sigma_1 truth (the computable, the UD-accessible states).
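
Putting the diagram and these definitions side by side (just a restatement of the above, in the same notation):

  T        truth, the One:         p
  G, G*    Intellect (belief):     []p             (G = what the machine proves about it, G* = what is true about it)
  S4Grz    Soul (knowledge):       []p & p
  Z, Z*    intelligible matter:    []p & <>t
  X, X*    sensible matter:        []p & <>t & p

and restricting p to the sigma_1 (UD-accessible) propositions gives the "material" versions S4Grz1, Z1*, X1*. Counting the starred and unstarred logics together with T and S4Grz gives the eight hypostases: T, G, G*, S4Grz, Z, Z*, X, X*.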


For the neoplatonist, matter is almost where God loses control, and can't intervene; it is close to the FPI idea, as you know that even God cannot predict, when you are in Helsinki, where you will feel to be after the split. For the platonist, matter is really where even the form can't handle the indetermination. It is of the type ~[]#, or <>#. It is also the place giving room for the contingencies, and what we can hope and eventually build or recover.


Bruno


OK, so given a certain interpretation, some scholars added two  
hypostases to the original three.


It is very natural to do. The ennead VI.1 describes the three initial  
hypostases, and the subject of what is matter, notably the  
intelligible matter and the sensible matter, is the subject of the  
ennead II.4.


It is a simplification of vocabulary, more than another interpretation.




Then, it appears that you make a third interpretation by splitting  
the intellect, and the two matters.

What justifies these splits?


I am not sure I understand? Plotinus splits them too, as they are different subject matters. The intellect is the nous, the world of ideas, and here the world of what the machine can prove (seen by her, and by God: G and G*).
But matter is what you can predict with the FPI, and so it is a different notion, and likewise, in Plotinus, matter is given by a platonist rereading of Aristotle's theory of indetermination. This is done in the ennead II-4.
Why should we not split intellect and matter, which in appearance are very different? The problem is more in consistently relating them. If we don't distinguish them, we cannot explain the problem of relating them.




And can you make this justification in plain language in a way that  
doesn't appear to be a just so interpretation that makes it easier  
for AUDA to go through?


God, or the One, is played by the notion of Arithmetical Truth. Machines and humans cannot know it, or explore it mechanically, and it is the root of the web of machines' dreams, but also of their semantics, in a large part.
The Nous is what machines can prove about themselves, and their relation with God, etc.
The Soul is where the machine proves true things, but not accidentally: as it is defined by the conjunction of p and the provability of p, for any (arithmetical) p. It is the idea of Theaetetus, that Plotinus might use implicitly (according to Bréhier), and which just works: it gives a logic of an unnameable, non-machine, knower.


For matter: you want the measure one for an event/proposition to be certain when it is true in all consistent continuations (this asks for []p, technically), but also, by incompleteness, this asks for the diamond t, <>t (consistency, having a model, having at least one continuation, not belonging to a cul-de-sac world 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-08 Thread Terren Suydam
On Mon, Jun 8, 2015 at 2:20 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 08 Jun 2015, at 15:58, Terren Suydam wrote:



 On Thu, Jun 4, 2015 at 1:59 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 04 Jun 2015, at 18:01, Terren Suydam wrote:


 OK, so given a certain interpretation, some scholars added two hypostases
 to the original three.


 It is very natural to do. The ennead VI.1 describes the three initial
 hypostases, and the subject of what is matter, notably the intelligible
 matter and the sensible matter, is the subject of the ennead II.4.

 It is a simplification of vocabulary, more than another interpretation.



 Then, it appears that you make a third interpretation by splitting the
 intellect, and the two matters.

 What justifies these splits?


 I am not sure I understand? Plotinus splits them too, as they are
 different subject matters. The intellect is the nous, the world of ideas,
 and here the world of what the machine can prove (seen by her, and by God:
 G and G*).
 But matter is what you can predict with the FPI, and so it is a different
 notion, and likewise, in Plotinus, matter is given by a platonist rereading
 of Aristotle's theory of indetermination. This is done in the ennead II-4.
 Why should we not split intellect and matter, which in appearance are very
 different? The problem is more in consistently relating them. If we
 don't distinguish them, we cannot explain the problem of relating them.


Sorry, my question was ambiguous. What I mean is that after adding the two
hypostases for the two matters, you have five hypostases, the initial
three plus the two for matter.

Then, you arrive at 8 hypostases by splitting the intellect into two, and
you do the same for each of the matter hypostases. My question is what
plain-language rationale justifies creating these three extra hypostases?
And can we really say we're still talking about Plotinus's hypostases at
this point?

Terren





 And can you make this justification in plain language in a way that
 doesn't appear to be a just so interpretation that makes it easier for
 AUDA to go through?


 God, or the One, is played by the notion of Arithmetical Truth. Machines
 and humans cannot know it, or explore it mechanically, and it is the
 root of the web of machines' dreams, but also of their semantics, in a
 large part.
 The Nous is what machines can prove about themselves, and their relation
 with God, etc.
 The Soul is where the machine proves true things, but not accidentally:
 as it is defined by the conjunction of p and the provability of p, for any
 (arithmetical) p. It is the idea of Theaetetus, that Plotinus might use
 implicitly (according to Bréhier), and which just works: it gives a logic of
 an unnameable, non-machine, knower.

 For matter: you want the measure one for an event/proposition to be
 certain when it is true in all consistent continuations (this asks for []p,
 technically), but also, by incompleteness, this asks for the diamond t, <>t
 (consistency, having a model, having at least one continuation, not
 belonging to a cul-de-sac world; all those things are mathematically
 equivalent in our setting). So prediction 1 (like the coffee cup in the
 WM-duplication + promise of coffee made at both reconstitution places) would
 be []coffee & <>coffee. There is coffee in all my extensions, and there
 is at least one extension (the act of faith made explicit).
 So the logic of physical "yes" is given by []p & <>t, with p sigma_1 (to
 get the restriction on the universal dovetailing). That corresponds to
 Plotinus' theory of the intelligible matter, and that gives a pair of
 quantum logics (by applying a result of Goldblatt).
 The same with the sensible matter, where we replay the original idea of
 Theaetetus on intelligible matter.
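
For reference, the Goldblatt result alluded to is presumably his 1974 theorem that orthologic (a minimal quantum logic) embeds into the modal logic B via the translation

  p  |->  [] <> p

so the "quantum" reading of the material hypostases comes from looking at the propositions through that []<> quantization.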

 Actually, we also get a quantum logic with the first application of the
 Theaetetus, which perhaps puts some light on why Plotinus ascribes the
 roots of matter already to the soul's activity. I thought at first that
 arithmetic would refute that idea of Plotinus, but the math confirms this.

 I will have to go, and will be slowed down more and more, as I have the
 June exams now. Feel free to ask any question though. But you might need to
 be patient for the comment/answers.

 Bruno





 Terren



 http://iridia.ulb.ac.be/~marchal/




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-08 Thread Terren Suydam
On Thu, Jun 4, 2015 at 1:59 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 04 Jun 2015, at 18:01, Terren Suydam wrote:



 On Wed, Jun 3, 2015 at 9:31 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 03 Jun 2015, at 14:58, Terren Suydam wrote:


 It would be like saying that bats' echolocation is an illusion. Not a
 perfect analogy, because a bat's facility for echolocation is rooted in its
 physiology, not constructed, but the point is that with both the ego and
 echolocation, the experiencer's consciousness is provided with a particular
 character it would not otherwise have been able to experience.


 I usually distinguish 8 forms of ego (the eight hypostases). Each time it
 is the sigma_1 reality, but viewed from a different angle. The illusion
 (with the negative connotations) is more in the confusion between two
 hypostases than in each hypostase per se.
 I am not sure we disagree, except on some choice of words.

 Bruno


 I agree that we agree :-)


 I agree with this too :)



 I know you've posted it in the past but can you point me to a summary of
 the original Plotinus model with the 8 hypostases?  My cursory googling
 reveals only 3 attributed to Plotinus (One, Intellect, Soul).


 That is what Plotinus called the primary hypostases. But there are no
 secondary hypostases there; yet some scholars, and myself, agree that his
 "two matters" ennead actually describes two (degenerate, secondary)
 hypostases. So in the Enneads you have the three primary hypostases, which
 in the machine theology are given by truth (One, p), provable (Intellect,
 []p) and Soul (that we get with the Theaetetus idea on the One and
 Intellect, []p & p), and the two matters: intelligible matter ([]p & <>t)
 and sensible matter ([]p & <>t & p).

 One = p
 Intellect = []p (splits in two: G and G*)
 Soul = []p & p (does not split: S4Grz)

 Intelligible matter = []p & <>t (splits in two: Z and Z*)
 Sensible matter = []p & <>t & p (splits in two: X and X*)

 To get the propositional physics, you have to restrict the p to the
 sigma_1 truth (the computable, the UD-accessible states).

 For the neoplatonist, matter is almost where God loses control, and can't
 intervene; it is close to the FPI idea, as you know that even God cannot
 predict, when you are in Helsinki, where you will feel to be after the
 split. For the platonist, matter is really where even the form can't handle
 the indetermination. It is of the type ~[]#, or <>#. It is also the place
 giving room for the contingencies, and what we can hope and eventually
 build or recover.

 Bruno


OK, so given a certain interpretation, some scholars added two hypostases
to the original three.

Then, it appears that you make a third interpretation by splitting the
intellect, and the two matters. What justifies these splits?  And can you
make this justification in plain language in a way that doesn't appear to
be a just so interpretation that makes it easier for AUDA to go through?

Terren



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-04 Thread Bruno Marchal


On 04 Jun 2015, at 18:01, Terren Suydam wrote:




On Wed, Jun 3, 2015 at 9:31 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 03 Jun 2015, at 14:58, Terren Suydam wrote:


It would be like saying that bats' echolocation is an illusion. Not  
a perfect analogy, because a bat's facility for echolocation is  
rooted in its physiology, not constructed, but the point is that  
with both the ego and echolocation, the experiencer's consciousness  
is provided with a particular character it would not otherwise have  
been able to experience.


I usually distinguish 8 forms of ego (the eight hypostases). Each  
time it is the sigma_1 reality, but viewed from a different angle.  
The illusion (with the negative connotations) is more in the confusion between two hypostases than in each hypostase per se.

I am not sure we disagree, except on some choice of words.

Bruno

I agree that we agree :-)


I agree with this too :)




I know you've posted it in the past but can you point me to a  
summary of the original Plotinus model with the 8 hypostases?  My  
cursory googling reveals only 3 attributed to Plotinus (One,  
Intellect, Soul).


That is what Plotinus called the primary hypostases. But there are no secondary hypostases there; yet some scholars, and myself, agree that his "two matters" ennead actually describes two (degenerate, secondary) hypostases. So in the Enneads you have the three primary hypostases, which in the machine theology are given by truth (One, p), provable (Intellect, []p) and Soul (that we get with the Theaetetus idea on the One and Intellect, []p & p), and the two matters: intelligible matter ([]p & <>t) and sensible matter ([]p & <>t & p).

One = p
Intellect = []p (splits in two: G and G*)
Soul = []p & p (does not split: S4Grz)

Intelligible matter = []p & <>t (splits in two: Z and Z*)
Sensible matter = []p & <>t & p (splits in two: X and X*)

To get the propositional physics, you have to restrict the p to the sigma_1 truth (the computable, the UD-accessible states).


For the neoplatonist, matter is almost where God loses control, and can't intervene; it is close to the FPI idea, as you know that even God cannot predict, when you are in Helsinki, where you will feel to be after the split. For the platonist, matter is really where even the form can't handle the indetermination. It is of the type ~[]#, or <>#. It is also the place giving room for the contingencies, and what we can hope and eventually build or recover.


Bruno





Terren



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-04 Thread Terren Suydam
On Wed, Jun 3, 2015 at 9:31 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 03 Jun 2015, at 14:58, Terren Suydam wrote:


 It would be like saying that bats' echolocation is an illusion. Not a
 perfect analogy, because a bat's facility for echolocation is rooted in its
 physiology, not constructed, but the point is that with both the ego and
 echolocation, the experiencer's consciousness is provided with a particular
 character it would not otherwise have been able to experience.


 I usually distinguish 8 forms of ego (the eight hypostases). Each time it
 is the sigma_1 reality, but viewed from a different angle. The illusion
 (with the negative connotations) is more in the confusion between two
 hypostases than in each hypostase per se.
 I am not sure we disagree, except on some choice of words.

 Bruno


I agree that we agree :-)

I know you've posted it in the past but can you point me to a summary of
the original Plotinus model with the 8 hypostases?  My cursory googling
reveals only 3 attributed to Plotinus (One, Intellect, Soul).

Terren



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-03 Thread Terren Suydam
On Fri, May 29, 2015 at 10:34 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 28 May 2015, at 20:12, Terren Suydam wrote:

 On Thu, May 28, 2015 at 4:20 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 28 May 2015, at 05:16, Terren Suydam wrote:

 Language starts to get in the way here, but what you're suggesting is
 akin to someone who is blind-drunk - they will have no memory of their
 experience, but I think most would say a blind-drunk is conscious.

 But I think the driving scenario is different in that my conscious
 attention is elsewhere... there's competition for the resource of
 attention. I don't really think I'm conscious of the feeling of the floor
 pressing my feet until I pay attention to it.

 My thinking on this is that human consciousness involves a unified/global
 dynamic, and the unifying thread is the self-model or ego. This allows for
 top-down control of attention. When parts of the sensorium (and other
 aspects of the mind) are not involved or included in this global dynamic,
 there is a significant sense in which it does not participate in that human
 consciousness. This is not to say that there is no other consciousness -
 just that it is perhaps of a lower form in a hierarchy of consciousness.

 I would highlight that human consciousness is somewhat unique in that the
 ego - a cultural innovation dependent on the development of language - is
 not present in animals. Without that unifying thread of ego, I suggest that
 animal consciousness is not unlike our dream consciousness, which is an
 arena of awareness when the thread of our ego dissolves. A visual I have is
 that in the waking state, the ego is a bag that encapsulates all the parts
 that make up our psyche. In dreamtime, the drawstring on the bag loosens
 and the parts float out, and get activated according to whatever seemingly
 random processes that constitute dreams.

  In lucid dreams, the ego is restored (i.e. we say to ourselves, "I am
  dreaming") - and we regain consciousness.


  We regain the ego (perhaps the ego illusion), but as you say yourself
  above, we are conscious in the non-lucid dream too. Lucidity might be a
  relative notion, as we can never be sure to be awake. The false awakening,
  very frequent for people trained in lucid dreaming, somehow illustrates this
  phenomenon.


 Right. My point is not that we aren't conscious in non-lucid dream states,
 but that there is a qualitative difference in consciousness between those
 two states, and that lucid-dream consciousness is much closer to waking
 consciousness than to dream consciousness, almost by definition. It's this
 fact I'm trying to explain by proposing the role of the ego in human
 consciousness.


 OK. usually I make that difference between simple universality (conscious,
 but not necessarily self-conscious), and Löbianity (self-conscious). It is
 the difference between Robinson Arithmetic and Peano Arithmetic (= RA + the
 induction axioms).

 It is an open problem for me if RA is more or less conscious than PA. PA
 has much stronger cognitive abilities, but this can filter more
 consciousness and leads to more delusion, notably that ego.

 I don't insist too much on this, as I am not yet quite  sure. It leads to
 the idea that brains filter consciousness, by hallucinating the person.


I'm not so sure that filtering is the best analogy, by itself anyway. No
doubt there is filtering going on, but I think the forms constructed
by the brain may also have a *transforming* or *focusing* effect as well.
It may not be the case, in other words, that consciousness is merely,
destructively, filtered by our egos, but there is a sense too in which the
consciousness we experience is made sharper by virtue of being shaped or
transformed, particularly by this adaptation of reifying the self-model.

I make this remark because most of the time I use consciousness in its
 rough general sense, in which animals, dreamers, ... are conscious.


 Of course... my points are about what kinds of aspects of being human
 might privilege our consciousness, in an attempt to understand
 consciousness better.


 OK. I understand.



  Then, I am not sure higher mammals have not already some ego, and
  self-consciousness, well before language. Language just puts the ego in
  evidence, and that allows further reflexive loops, which can lead to
  further illusions and soul-falling situations.


 Right, one could argue that even insects have some kind of self-model.
 There is no doubt a spectrum of sophistication of self-models, but I would
 distinguish all of them from the human ego. I guess I was too quick before
 when I equated the two. The key distinction between a self-model and an ego
 is the ability to refer to oneself as an object - this, and the ability to
 *identify* with that object, reifies the self model in a way that appears
 to me to be crucial to human consciousness. I don't think this is really
 possible without language.


 Probably. But that identification is 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-06-03 Thread Bruno Marchal


On 03 Jun 2015, at 14:58, Terren Suydam wrote:



On Fri, May 29, 2015 at 10:34 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 28 May 2015, at 20:12, Terren Suydam wrote:
On Thu, May 28, 2015 at 4:20 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 28 May 2015, at 05:16, Terren Suydam wrote:

Language starts to get in the way here, but what you're suggesting  
is akin to someone who is blind-drunk - they will have no memory  
of their experience, but I think most would say a blind-drunk is  
conscious.


But I think the driving scenario is different in that my conscious  
attention is elsewhere... there's competition for the resource of  
attention. I don't really think I'm conscious of the feeling of  
the floor pressing my feet until I pay attention to it.


My thinking on this is that human consciousness involves a unified/ 
global dynamic, and the unifying thread is the self-model or ego.  
This allows for top-down control of attention. When parts of the  
sensorium (and other aspects of the mind) are not involved or  
included in this global dynamic, there is a significant sense in  
which it does not participate in that human consciousness. This is  
not to say that there is no other consciousness - just that it is  
perhaps of a lower form in a hierarchy of consciousness.


I would highlight that human consciousness is somewhat unique in  
that the ego - a cultural innovation dependent on the development  
of language - is not present in animals. Without that unifying  
thread of ego, I suggest that animal consciousness is not unlike  
our dream consciousness, which is an arena of awareness when the  
thread of our ego dissolves. A visual I have is that in the waking  
state, the ego is a bag that encapsulates all the parts that make  
up our psyche. In dreamtime, the drawstring on the bag loosens and  
the parts float out, and get activated according to whatever  
seemingly random processes that constitute dreams.


In lucid dreams, the ego is restored (i.e. we say to ourselves, "I am dreaming") - and we regain consciousness.


We regain the ego (perhaps the ego illusion), but as you say yourself above, we are conscious in the non-lucid dream too. Lucidity might be a relative notion, as we can never be sure to be awake. The false awakening, very frequent for people trained in lucid dreaming, somehow illustrates this phenomenon.


Right. My point is not that we aren't conscious in non-lucid dream  
states, but that there is a qualitative difference in consciousness  
between those two states, and that lucid-dream consciousness is  
much closer to waking consciousness than to dream consciousness,  
almost by definition. It's this fact I'm trying to explain by  
proposing the role of the ego in human consciousness.


OK. usually I make that difference between simple universality  
(conscious, but not necessarily self-conscious), and Löbianity (self- 
conscious). It is the difference between Robinson Arithmetic and  
Peano Arithmetic (= RA + the induction axioms).


It is an open problem for me if RA is more or less conscious than  
PA. PA has much stronger cognitive abilities, but this can filter  
more consciousness and leads to more delusion, notably that ego.


I don't insist too much on this, as I am not yet quite  sure. It  
leads to the idea that brains filter consciousness, by hallucinating  
the person.



I'm not so sure that filtering is the best analogy, by itself  
anyway. No doubt that there is filtering going on, but I think the  
forms constructed by the brain may also have a transforming or  
focusing effect as well. It may not be the case, in other words, that
consciousness is merely, destructively, filtered by our egos, but  
there is a sense too in which the consciousness we experience is  
made sharper by virtue of being shaped or transformed,  
particularly by this adaptation of reifying the self-model.


I am OK with this. Brains do not just filter; they do a lot of information processing, which adds a lot to the filtering, including the angles or points of view.







I make this remark because most of the time I use consciousness  
in its rough general sense, in which animals, dreamers, ... are  
conscious.


Of course... my points are about what kinds of aspects of being  
human might privilege our consciousness, in an attempt to  
understand consciousness better.


OK. I understand.


Then, I am not sure higher mammals have not already some ego, and self-consciousness, well before language. Language just puts the ego in evidence, and that allows further reflexive loops, which can lead to further illusions and soul-falling situations.


Right, one could argue that even insects have some kind of self- 
model. There is no doubt a spectrum of sophistication of self- 
models, but I would distinguish all of them from the human ego. I  
guess I was too quick before when I equated the two. The key  
distinction between a self-model and an ego is the ability 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-29 Thread Bruno Marchal


On 28 May 2015, at 20:12, Terren Suydam wrote:




On Thu, May 28, 2015 at 4:20 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 28 May 2015, at 05:16, Terren Suydam wrote:

Language starts to get in the way here, but what you're suggesting  
is akin to someone who is blind-drunk - they will have no memory of  
their experience, but I think most would say a blind-drunk is  
conscious.


But I think the driving scenario is different in that my conscious  
attention is elsewhere... there's competition for the resource of  
attention. I don't really think I'm conscious of the feeling of the  
floor pressing my feet until I pay attention to it.


My thinking on this is that human consciousness involves a unified/ 
global dynamic, and the unifying thread is the self-model or ego.  
This allows for top-down control of attention. When parts of the  
sensorium (and other aspects of the mind) are not involved or  
included in this global dynamic, there is a significant sense in  
which it does not participate in that human consciousness. This is  
not to say that there is no other consciousness - just that it is  
perhaps of a lower form in a hierarchy of consciousness.


I would highlight that human consciousness is somewhat unique in  
that the ego - a cultural innovation dependent on the development  
of language - is not present in animals. Without that unifying  
thread of ego, I suggest that animal consciousness is not unlike  
our dream consciousness, which is an arena of awareness when the  
thread of our ego dissolves. A visual I have is that in the waking  
state, the ego is a bag that encapsulates all the parts that make  
up our psyche. In dreamtime, the drawstring on the bag loosens and  
the parts float out, and get activated according to whatever  
seemingly random processes that constitute dreams.


In lucid dreams, the ego is restored (i.e. we say to ourselves, I  
am dreaming) - and we regain consciousness.


We regain the ego (perhaps the ego illusion), but as you say  
yourself above, we are conscious in the non-lucid dream too.  
Lucidity might be a relative notion, as we can never be sure we are awake. The false awakening, very frequent in people trained in lucid dreaming, somehow illustrates this phenomenon.


Right. My point is not that we aren't conscious in non-lucid dream  
states, but that there is a qualitative difference in consciousness  
between those two states, and that lucid-dream consciousness is much  
closer to waking consciousness than to dream consciousness, almost  
by definition. It's this fact I'm trying to explain by proposing the  
role of the ego in human consciousness.


OK. Usually I make that distinction between simple universality (conscious, but not necessarily self-conscious) and Löbianity (self-conscious). It is the difference between Robinson Arithmetic and Peano Arithmetic (= RA + the induction axioms).


It is an open problem for me whether RA is more or less conscious than PA. PA has much stronger cognitive abilities, but this can filter more consciousness and lead to more delusion, notably that ego.


I don't insist too much on this, as I am not yet quite  sure. It leads  
to the idea that brains filter consciousness, by hallucinating the  
person.






I make this remark because most of the time I use consciousness in  
its rough general sense, in which animals, dreamers, ... are  
conscious.


Of course... my points are about what kinds of aspects of being  
human might privilege our consciousness, in an attempt to understand  
consciousness better.


OK. I understand.




Then, I am not sure that higher mammals do not already have some ego, and self-consciousness, well before language. Language just puts the ego in evidence, and that allows further reflexive loops, which can lead to further illusions and the soul-falling situation.


Right, one could argue that even insects have some kind of self- 
model. There is no doubt a spectrum of sophistication of self- 
models, but I would distinguish all of them from the human ego. I  
guess I was too quick before when I equated the two. The key  
distinction between a self-model and an ego is the ability to refer  
to oneself as an object - this, and the ability to identify with  
that object, reifies the self model in a way that appears to me to  
be crucial to human consciousness. I don't think this is really  
possible without language.


Probably. But that identification is already a sort of illusion. It is very useful in practice, to survive, when being alive. But the truth, including possible afterlives, is more complex.





Nor am I sure that our ego dissolves in non-lucid dreams, although it seems to disappear in non-REM dreams and other sleep states.


For me, the key insight I had in trying to describe the difference  
between lucid and non-lucid dreams is the ability to say I am  
dreaming, which is an ego statement. What other explanations could  
account for the difference 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread Stathis Papaioannou
On 29 May 2015 at 00:09, Jason Resch jasonre...@gmail.com wrote:


 On Wed, May 27, 2015 at 10:55 PM, Russell Standish li...@hpcoders.com.au
 wrote:

 On Thu, May 28, 2015 at 01:23:20PM +1000, Stathis Papaioannou wrote:
 
  A stroke results in objective and subjective changes, such as the
  inability
  to move a limb, understand language or see. These are gross examples of
  fading qualia. What you are proposing, as I understand it, is that if
  the
  damaged brain is replaced neuron by neuron, function and qualia would be
  restored, but at a certain point when you installed the artificial
  neuron,
  the patient would suddenly lose all consciousness, although to an
  outside
  observer he would seem to be continuing to improve.
 

 Rather than focus on the term fading qualia, to which we seem to
 have derived a different understanding from the same paper, what
 you're saying is that a stroke victim can suffer diminished qualia, ie
 qualia no longer existing where once there was some. Eg no longer
 being able to smell a banana, or gone colour blind, or similar.

 That same person may or may not be aware of the fact that they've lost
 their qualia.

 Then we can consider a nonfunctionalist response to the experiment
 where neurons are replaced by functionally equivalent artificial
 replacements. A nonfunctionalist ought to treat this like the stroke
 case - at some point qualia will disappear. If the person notices the
 qualia disappearing, the functionalist would simply say that the
 replacement parts are not functionally equivalent, or that the
 substitution level was wrong. This is not an interesting case.

 A partial zombie, by definition, will always report that the full qualia
 were present, just as a full zombie would.

 If, however, the person fails to notice the disappearing qualia, a
 functionalist would say that the full set of qualia was being
 experienced, whereas a nonfunctionalist would say that the person
 might be experiencing a reduced set of qualia, even though e reports
 all er faculties being sound. Is this absurd? - no more absurd than the
 notion of a full zombie, I would say.

 But quales, as continua, which I took to be Chalmers' absurdity,
 simply are not implied. Quales exist discretely.



 To be clear, Chalmers didn't say fading qualia were implied, he said if
 consciousness is not present in an artificial brain that it follows that
 during a neuron-by-neuron replacement a biological brain would eventually
 become an artificial brain, which is presumed to have no consciousness.  He
 concludes that one of two things must have happened:

 1. Suddenly disappearing qualia
 2. Fading qualia

 He acknowledges his paper is not a proof of functionalism, but instead just
 shows that the non-functionalist has to swallow at least one of these
 outcomes, furthering the argument for functionalism.

A good summary. I think the argument is even stronger than Chalmers
claims. Fading qualia would mean consciousness did not exist (which
apparently not everyone thinks is absurd), and suddenly disappearing
qualia, while logically possible, is ad hoc, contrary to any
naturalistic explanation of how the brain functions (the artificial
neurons would initially have to support consciousness, then suddenly
destroy the consciousness that would have been there in their absence)
and leads to the conclusion that computationalism is partly true.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread Terren Suydam
On Thu, May 28, 2015 at 4:20 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 28 May 2015, at 05:16, Terren Suydam wrote:

 Language starts to get in the way here, but what you're suggesting is akin
 to someone who is blind-drunk - they will have no memory of their
 experience, but I think most would say a blind-drunk is conscious.

 But I think the driving scenario is different in that my conscious
 attention is elsewhere... there's competition for the resource of
 attention. I don't really think I'm conscious of the feeling of the floor
 pressing my feet until I pay attention to it.

 My thinking on this is that human consciousness involves a unified/global
 dynamic, and the unifying thread is the self-model or ego. This allows for
 top-down control of attention. When parts of the sensorium (and other
 aspects of the mind) are not involved or included in this global dynamic,
 there is a significant sense in which it does not participate in that human
 consciousness. This is not to say that there is no other consciousness -
 just that it is perhaps of a lower form in a hierarchy of consciousness.

 I would highlight that human consciousness is somewhat unique in that the
 ego - a cultural innovation dependent on the development of language - is
 not present in animals. Without that unifying thread of ego, I suggest that
 animal consciousness is not unlike our dream consciousness, which is an
 arena of awareness when the thread of our ego dissolves. A visual I have is
 that in the waking state, the ego is a bag that encapsulates all the parts
 that make up our psyche. In dreamtime, the drawstring on the bag loosens
 and the parts float out, and get activated according to whatever seemingly
 random processes that constitute dreams.

 In lucid dreams, the ego is restored (i.e. we say to ourselves, *I* *am*
 dreaming) - and we regain consciousness.


 We regain the ego (perhaps the ego illusion), but as you say yourself
 above, we are conscious in the non-lucid dream too. Lucidity might be a relative notion, as we can never be sure we are awake. The false awakening, very frequent in people trained in lucid dreaming, somehow illustrates this phenomenon.


Right. My point is not that we aren't conscious in non-lucid dream states,
but that there is a qualitative difference in consciousness between those
two states, and that lucid-dream consciousness is much closer to waking
consciousness than to dream consciousness, almost by definition. It's this
fact I'm trying to explain by proposing the role of the ego in human
consciousness.


 I make this remark because most of the time I use consciousness in its
 rough general sense, in which animals, dreamers, ... are conscious.


Of course... my points are about what kinds of aspects of being human might
privilege our consciousness, in an attempt to understand consciousness
better.


 Then, I am not sure that higher mammals do not already have some ego, and self-consciousness, well before language. Language just puts the ego in evidence, and that allows further reflexive loops, which can lead to further illusions and the soul-falling situation.


Right, one could argue that even insects have some kind of self-model.
There is no doubt a spectrum of sophistication of self-models, but I would
distinguish all of them from the human ego. I guess I was too quick before
when I equated the two. The key distinction between a self-model and an ego
is the ability to refer to oneself as an object - this, and the ability to
*identify* with that object, reifies the self model in a way that appears
to me to be crucial to human consciousness. I don't think this is really
possible without language.


 Nor am I sure that our ego dissolves in non-lucid dreams, although it seems to disappear in non-REM dreams and other sleep states.


For me, the key insight I had in trying to describe the difference between
lucid and non-lucid dreams is the ability to say I am dreaming, which is
an ego statement. What other explanations could account for the difference
between lucid and non-lucid dreams?

Terren



 Bruno



 Terren







 On Wed, May 27, 2015 at 10:10 PM, Jason Resch jasonre...@gmail.com
 wrote:

 Are we any less conscious of it as it happens, or are our brains perhaps simply not forming as many memories of usual/uneventful tasks?

 Jason


 On Wed, May 27, 2015 at 9:06 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 In the driving scenario it is clear that computation is involved,
 because all sorts of contingent things can be going on (e.g. dynamics of
 driving among other cars), yet this occurs without crossing the threshold
 of consciousness. Relying on some kind of caching mechanism under such
 circumstances would quickly fail one way or another.

 Terren
 On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:



 On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

  On 5/26/2015 10:31 PM, Pierz wrote:

   Where I see lookup tables fail is that they seem to operate 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread LizR
On 29 May 2015 at 02:09, Jason Resch jasonre...@gmail.com wrote:

 To be clear, Chalmers didn't say fading qualia were implied, he said if
 consciousness is not present in an artificial brain that it follows that
 during a neuron-by-neuron replacement a biological brain would eventually
 become an artificial brain, which is presumed to have no consciousness.  He
 concludes that one of two things must have happened:

 1. Suddenly disappearing qualia
 2. Fading qualia

 He acknowledges his paper is not a proof of functionalism, but instead
 just shows that the non-functionalist has to swallow at least one of these
 outcomes, furthering the argument for functionalism.


This is assuming such a replacement is possible (in principle). Is it
possible that the brain can't be broken down in that sort of reductionist
way, that you can't replace neurons one at a time even in principle (seems
unlikely, but no one knows for sure. Though I guess quantum entanglement
can't be implicated in this case, if Tegmark is correct about decoherence
times in the brain).



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread LizR
On 29 May 2015 at 10:52, Stathis Papaioannou stath...@gmail.com wrote:

 A good summary. I think the argument is even stronger than Chalmers
 claims. Fading qualia would mean consciousness did not exist (which
 apparently not everyone thinks is absurd),


Indeed not. Brent implied it in a recent post (but I can't remember exactly
which one it was now). Dennett is also of this opinion. It can probably be
summed up as consciousness is an illusion experienced by an illusory
person.



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread Stathis Papaioannou
On 28 May 2015 at 13:55, Russell Standish li...@hpcoders.com.au wrote:
 On Thu, May 28, 2015 at 01:23:20PM +1000, Stathis Papaioannou wrote:

 A stroke results in objective and subjective changes, such as the inability
 to move a limb, understand language or see. These are gross examples of
 fading qualia. What you are proposing, as I understand it, is that if the
 damaged brain is replaced neuron by neuron, function and qualia would be
 restored, but at a certain point when you installed the artificial neuron,
 the patient would suddenly lose all consciousness, although to an outside
 observer he would seem to be continuing to improve.


 Rather than focus on the term fading qualia, to which we seem to
 have derived a different understanding from the same paper, what
 you're saying is that a stroke victim can suffer diminished qualia, ie
 qualia no longer existing where once there was some. Eg no longer
 being able to smell a banana, or gone colour blind, or similar.

 That same person may or may not be aware of the fact that they've lost
 their qualia.

We might not notice a small or gradual change in our qualia, such as
fading of the senses as we age. Also, with some neurological
conditions patients develop anosognosia, which is a type of delusional
disorder where they don't recognise they have an illness in spite of
the evidence. But these are not the cases to consider. The cases to
consider are the ones where the subject has an obvious deficit, can
recognise it, describe it, is upset by it.

 Then we can consider a nonfunctionalist response to the experiment
 where neurons are replaced by functionally equivalent artificial
 replacements. A nonfunctionalist ought to treat this like the stroke
 case - at some point qualia will disappear. If the person notices the
 qualia disappearing, the functionalist would simply say that the
 replacement parts are not functionally equivalent, or that the
 substitution level was wrong. This is not an interesting case.

The functionalist and non-functionalist alike will agree that if the
person behaves differently the replacement parts are not functionally
equivalent, by definition. Functionally equivalent here refers only
to observable function - not to subjectivity, which is the point at
issue where functionalist and non-functionalist will disagree. Note
also that the functionalist and non-functionalist may agree that it is
impossible to make a functionally equivalent part using a particular
approach; for example, if the brain utilises non-computable functions
it will be impossible to make a computerised artificial neuron.

 A partial zombie, by definition, will always report that the full qualia
 were present, just as a full zombie would.

 If, however, the person fails to notice the disappearing qualia, a
 functionalist would say that the full set of qualia was being
 experienced, whereas a nonfunctionalist would say that the person
 might be experiencing a reduced set of qualia, even though e reports
 all er faculties being sound. Is this absurd? - no more absurd than the
 notion of a full zombie, I would say.

If, in general rather than in special cases, it is possible to lack
qualia but not notice, that means there is no objective or subjective
difference between having qualia and not having them, which is
equivalent to saying qualia, and consciousness, do not exist.

 But quales, as continua, which I took to be Chalmers' absurdity,
 simply are not implied. Quales exist discretely.

Qualia can disappear continuously or in a piecemeal fashion. For
example, you could have uniformly diminished sensation in a part of
the body or patches of anaesthesia bordering areas of normal
sensation.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread Bruno Marchal


On 28 May 2015, at 05:16, Terren Suydam wrote:

Language starts to get in the way here, but what you're suggesting  
is akin to someone who is blind-drunk - they will have no memory of  
their experience, but I think most would say a blind-drunk is  
conscious.


But I think the driving scenario is different in that my conscious  
attention is elsewhere... there's competition for the resource of  
attention. I don't really think I'm conscious of the feeling of the  
floor pressing my feet until I pay attention to it.


My thinking on this is that human consciousness involves a unified/ 
global dynamic, and the unifying thread is the self-model or ego.  
This allows for top-down control of attention. When parts of the  
sensorium (and other aspects of the mind) are not involved or  
included in this global dynamic, there is a significant sense in  
which it does not participate in that human consciousness. This is  
not to say that there is no other consciousness - just that it is  
perhaps of a lower form in a hierarchy of consciousness.


I would highlight that human consciousness is somewhat unique in  
that the ego - a cultural innovation dependent on the development of  
language - is not present in animals. Without that unifying thread  
of ego, I suggest that animal consciousness is not unlike our dream  
consciousness, which is an arena of awareness when the thread of our  
ego dissolves. A visual I have is that in the waking state, the ego  
is a bag that encapsulates all the parts that make up our psyche. In  
dreamtime, the drawstring on the bag loosens and the parts float  
out, and get activated according to whatever seemingly random  
processes that constitute dreams.


In lucid dreams, the ego is restored (i.e. we say to ourselves, I  
am dreaming) - and we regain consciousness.


We regain the ego (perhaps the ego illusion), but as you say yourself  
above, we are conscious in the non-lucid dream too. Lucidity might be a relative notion, as we can never be sure we are awake. The false awakening, very frequent in people trained in lucid dreaming, somehow illustrates this phenomenon.


I make this remark because most of the time I use consciousness in  
its rough general sense, in which animals, dreamers, ... are conscious.


Then, I am not sure that higher mammals do not already have some ego, and self-consciousness, well before language. Language just puts the ego in evidence, and that allows further reflexive loops, which can lead to further illusions and the soul-falling situation.
Nor am I sure that our ego dissolves in non-lucid dreams, although it seems to disappear in non-REM dreams and other sleep states.


Bruno




Terren







On Wed, May 27, 2015 at 10:10 PM, Jason Resch jasonre...@gmail.com  
wrote:
Are we any less conscious of it as it happens, or are our brains perhaps simply not forming as many memories of usual/uneventful tasks?


Jason


On Wed, May 27, 2015 at 9:06 PM, Terren Suydam terren.suy...@gmail.com 
 wrote:
In the driving scenario it is clear that computation is involved,  
because all sorts of contingent things can be going on (e.g.  
dynamics of driving among other cars), yet this occurs without  
crossing the threshold of consciousness. Relying on some kind of  
caching mechanism under such circumstances would quickly fail one  
way or another.


Terren

On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:


On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:
On 5/26/2015 10:31 PM, Pierz wrote:
Where I see lookup tables fail is that they seem to operate above  
the probable necessary substitution level. (Despite having the same
inputs/outputs at the higher levels).


But your memoization example still makes a good point - namely that  
some computations can be bypassed in favour of recordings, yet  
presumably this doesn't lead to fading qualia. We don't need  
anything as silly as a gigantic lookup table of all possible  
responses. We only need to acknowledge that we can store the  
results of recordings of computations we've already completed, and  
that this should not result in any strange degradation of  
consciousness.
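A minimal sketch of the memoization idea being discussed (Python; the function fib is only an illustrative stand-in for any expensive, deterministic computation, not anything from the thread):

    import functools

    @functools.lru_cache(maxsize=None)   # store the result of each completed computation
    def fib(n):
        # stand-in for an expensive, deterministic computation
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(30)   # first call: the computation is actually performed
    fib(30)   # second call: answered from the stored table, nothing is recomputed

The second call is a pure table lookup; whether replaying a stored result preserves whatever the original computation instantiated is exactly the question under discussion.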


Isn't that what allows me to drive home from work without being  
conscious of it?


People keep making this point, which is one that I myself made in  
the past - and I believe you argued with me at the time, saying that  
it's not clear that the mechanism for automating brain functions is  
anything like the same as caching the results of a computation. I  
think that objection is actually fair enough. With automated actions  
it's not clear that the computations aren't being carried out any  
more, just that they no longer require conscious attention because  
the neuronal pathways for those computations have become  
sufficiently reinforced that they no longer require concentration. I  
think this model (automated computation rather than cached  
computation) fits our experience of this phenomenon. Sometimes I  

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-28 Thread Jason Resch
On Wed, May 27, 2015 at 10:55 PM, Russell Standish li...@hpcoders.com.au
wrote:

 On Thu, May 28, 2015 at 01:23:20PM +1000, Stathis Papaioannou wrote:
 
  A stroke results in objective and subjective changes, such as the
 inability
  to move a limb, understand language or see. These are gross examples of
  fading qualia. What you are proposing, as I understand it, is that if
 the
  damaged brain is replaced neuron by neuron, function and qualia would be
  restored, but at a certain point when you installed the artificial
 neuron,
  the patient would suddenly lose all consciousness, although to an outside
  observer he would seem to be continuing to improve.
 

 Rather than focus on the term fading qualia, to which we seem to
 have derived a different understanding from the same paper, what
 you're saying is that a stroke victim can suffer diminished qualia, ie
 qualia no longer existing where once there was some. Eg no longer
 being able to smell a banana, or gone colour blind, or similar.

 That same person may or may not be aware of the fact that they've lost
 their qualia.

 Then we can consider a nonfunctionalist response to the experiment
 where neurons are replaced by functionally equivalent artificial
 replacements. A nonfunctionalist ought to treat this like the stroke
 case - at some point qualia will disappear. If the person notices the
 qualia disappearing, the functionalist would simply say that the
 replacement parts are not functionally equivalent, or that the
 substitution level was wrong. This is not an interesting case.

 A partial zombie, by definition, will always report that the full qualia
 were present, just as a full zombie would.

 If, however, the person fails to notice the disappearing qualia, a
 functionalist would say that the full set of qualia was being
 experienced, whereas a nonfunctionalist would say that the person
 might be experiencing a reduced set of qualia, even though e reports
 all er faculties being sound. Is this absurd? - no more absurd than the
 notion of a full zombie, I would say.

 But quales, as continua, which I took to be Chalmers' absurdity,
 simply are not implied. Quales exist discretely.



To be clear, Chalmers didn't say fading qualia were implied, he said if
consciousness is not present in an artificial brain that it follows that
during a neuron-by-neuron replacement a biological brain would eventually
become an artificial brain, which is presumed to have no consciousness.  He
concludes that one of two things must have happened:

1. Suddenly disappearing qualia
2. Fading qualia

He acknowledges his paper is not a proof of functionalism, but instead just
shows that the non-functionalist has to swallow at least one of these
outcomes, furthering the argument for functionalism.

Jason



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Bruno Marchal


On 27 May 2015, at 03:27, Jason Resch wrote:




On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 25 May 2015, at 02:06, Jason Resch wrote:




On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 23 May 2015, at 17:07, Jason Resch wrote:




On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal  
marc...@ulb.ac.be wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:

On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou stath...@gmail.com 


 wrote:

 On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com  
wrote:






 The consciousness (if there is one) is the consciousness of the  
person, incarnated in the program. It is not the consciousness of  
the low level processor, no more than the physicality which  
supports the ant and the table.


Again, with comp there is never any problem with all of this. The  
consciousness is an immaterial attribute of an immaterial program/ 
machine's soul, which is defined exclusively by a class of true  
number relations.



While I can see certain very complex number relations leading to a  
human-level consciousness, I don't find that kind of complexity  
present in the relations defining a lookup table. Especially  
because any meaning or interpretation of the output depends on the  
person querying it, there's no self-contained understanding of the  
program's own output.



Why? When you get the output, you need to re-enter it, and ask the look-up table again. It will work only because we suppose armies of daemons have already done the computations.


Determining the answers the first time might require computations that lead to consciousness, but later invocations of the stored memory in the lookup table don't lead to those original computations being performed again. It is just a memory access.


No, because you agree that the system remains counterfactually  
correct, so it is not just a memory access, there is a conditional  
which is satisfied by the process, and indeed, if the look-up table
is miniaturized and put in the brain, with an army of super-fast  
little daemons managing it in real time, the person will pass the  
infinite Turing test, so, why not bet (correctly here by  
construction) that it manifests the correct platonic person?


Again, it just means that only persons are conscious, not processes, nor computations, programs, machines, or anything 3p-describable. That is what is given with the p hypostases. They describe the logic of something not nameable by the machine itself, but which directly concerns the machine's selves, and its consistent extensions.


The person is defined by its truth and beliefs, and the relations between truth and beliefs, from the different personal points of view (defined in the Theaetetus manner: []p, []p & p, etc.).
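In the notation already used in this thread, the Theaetetus move can be written compactly (LaTeX, with the box standing for the machine's provability/belief):

    \[
    \text{believed: } \Box p, \qquad \text{known (true belief): } \Box p \land p
    \]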


Bruno



But are not computations something different beyond mere inputs and  
outputs of functions?


Yes.



It is like Putnam's objection to functionalism: there are multiple  
ways of realizing each function, and they are not necessarily  
equivalent. I think once one admits that the inputs and outputs are  
not all that matters, this leads to abandoning functionalism for  
computationalism, which also necessitates the concept of a  
substitution level.


OK. But Putnam's original functionalism was just fuzzy on this. It  
assumes some high level of substitution, around the neurons. It is not  
the high level function of the person seen from outside (that would be  
behaviorism).







Where I see lookup tables fail is that they seem to operate above  
the probable necessary substitution level. (Despite having the same
inputs/outputs at the higher levels).


I thought so. But then we agree. It is just that if they have the same inputs and outputs, for a long period of time, it means that the substitution level is plausibly correct.


We might have to formally define look-up table. My attempt to do so led me to redefine the notion of Turing machine.


Bruno





Jason




http://iridia.ulb.ac.be/~marchal/




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Bruno Marchal


On 26 May 2015, at 22:11, meekerdb wrote:


On 5/26/2015 1:29 AM, Bruno Marchal wrote:


On 25 May 2015, at 22:49, meekerdb wrote:


On 5/25/2015 5:16 AM, Pierz wrote:



On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:
On 5/24/2015 4:09 AM, Pierz wrote:



On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:


On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:


On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:


On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal  
mar...@ulb.ac.be wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou  
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou  
stat...@gmail.com

 wrote:


 On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com  
wrote:


  I

SNIP


Bruno's theory is that consciousness and the physical world are all just relations between numbers, and things like brains, computers, recordings, and lookup tables are just ways of manifesting consciousness, as 2 and II are ways of manifesting the number two.


Except that consciousness is not an illusion. The physical universe  
is only an appearance from inside. It is phenomenological. Comp  
explains why our embedding in arithmetic makes us believe  
intuitively the contrary.


I only had to look back one post to find:

"There is no universe, if we are machine. It is only a stable and persistent illusion (assuming mechanism)" - Bruno Marchal


Stable and persistent illusion is what is generally referred to as  
the world.


I prefer to call that a dream. But It is OK.



The existence of other people is only a stable and persistent  
illusion - yet you accept them as real.


Yes, but not fundamentally real. I don't have to assume them at the start. The physical universe(s) is real, too. But again, I argue that comp makes it non-primitive, but emerging from the dreams/computations-seen-from-inside.


Bruno




Brent



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Bruno Marchal


On 27 May 2015, at 09:00, Jason Resch wrote:




On Wed, May 27, 2015 at 12:31 AM, Pierz pier...@gmail.com wrote:


On Wednesday, May 27, 2015 at 11:27:26 AM UTC+10, Jason wrote:


On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 25 May 2015, at 02:06, Jason Resch wrote:




On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 23 May 2015, at 17:07, Jason Resch wrote:




On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou  
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou  
stat...@gmail.com

 wrote:


 On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:

  I think you're not taking into account the level of the  
functional
  substitution. Of course functionally equivalent silicon and  
functionally
  equivalent neurons can (under functionalism) both  
instantiate the same
  consciousness. But a calculator computing 2+3 cannot  
substitute for a

  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously be at  
a level
 sufficient to maintain the function of the whole brain.  
Sticking a

 calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent  
to you (it

  could
  fool all your friends and family in a Turing test scenario  
into thinking

  it
  was in fact you) would be conscious in the same way as you?

 Not necessarily, just as an actor may not be conscious in the  
same way
 as me. But I suspect the Blockhead would be conscious; the  
intuition
 that a lookup table can't be conscious is like the intuition  
that an

 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A lookup  
table has a
 bounded and very low degree of computational complexity: all  
answers to all

 queries are answered in constant time.

 While the table itself may have an arbitrarily high information  
content,

 what in the software of the lookup table program is there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is  
immensely

large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent, or at least offers the appearance of intelligence, but it takes the maximum possible advantage of the space-time trade-off: http://en.wikipedia.org/wiki/Space–time_tradeoff


The tin-can Turing machine is unbounded in its potential  
computational complexity, there's no reason to be a bio- or  
silico-chauvinist against it. However, by definition, a lookup  
table has near zero computational complexity, no retained state.
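A toy illustration of the space-time trade-off being contrasted here (Python; the squaring function is only an illustrative stand-in): the same input/output behaviour realised once as a computation, and once as a precomputed table that answers any in-range query with a single constant-time retrieval.

    def square_by_computation(n):
        return n * n                                # work done at query time

    SQUARE_TABLE = {n: n * n for n in range(1000)}  # work (and space) spent up front

    def square_by_lookup(n):
        return SQUARE_TABLE[n]                      # constant-time retrieval, no computation

    assert square_by_computation(12) == square_by_lookup(12) == 144

Outside the precomputed range the table simply has no answer, which is why the thread keeps returning to how large (or infinite) such a table would have to be.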


But it is counterfactually correct over a large range. Of course, it has to be infinite to be genuinely counterfactually correct.



But the structure of the counterfactuals is identical regardless  
of the inputs and outputs in its lookup table. If you replaced all  
of its outputs with random strings, would that change its  
consciousness? What if there existed a special decoding book,  
which was a one-time-pad that could decode its random answers?  
Would the existence of this book make it more conscious than if  
this book did not exist? If there is zero information content in  
the outputs returned by the lookup table it might as well return  
all X characters as its response to any query, but then would  
any program that just returns a string of X's be conscious?


A lookup table might have some primitive consciousness, but I think any consciousness it has would be more or less the same regardless of the number of entries within that lookup table. With more entries, its information content grows, but its capacity to process, interpret, or understand that information remains constant.


You can emulate the brain of Einstein with a (ridiculously large) look-up table, assuming you are ridiculously patient---or we slow down your own brain so that you are as slow as Einstein.

Is that incarnation a zombie?

Again, with comp, all incarnations are zombies, because bodies do not think. It is the abstract person which thinks, and in this case Einstein will still be defined by the simplest normal computations, which here, and only here, have taken the form of that implausible giant Einstein look-up table emulation at the right level.



That last bit is the part I have difficulty with. How can a single call to a lookup table ever be at the right level?


Actually, the Turing machine formalism is a type of look-up table:  
if you are scanning input i (big numbers describing all your current  
sensitive entries,  while you are in state  

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Bruno Marchal


On 27 May 2015, at 07:31, Pierz wrote:




On Wednesday, May 27, 2015 at 11:27:26 AM UTC+10, Jason wrote:


On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 25 May 2015, at 02:06, Jason Resch wrote:




On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 23 May 2015, at 17:07, Jason Resch wrote:




On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou  
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou  
stat...@gmail.com

 wrote:

 On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:

  I think you're not taking into account the level of the  
functional
  substitution. Of course functionally equivalent silicon and  
functionally
  equivalent neurons can (under functionalism) both  
instantiate the same
  consciousness. But a calculator computing 2+3 cannot  
substitute for a

  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously be at  
a level
 sufficient to maintain the function of the whole brain.  
Sticking a

 calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent  
to you (it

  could
  fool all your friends and family in a Turing test scenario  
into thinking

  it
  was in fact you) would be conscious in the same way as you?

 Not necessarily, just as an actor may not be conscious in the  
same way
 as me. But I suspect the Blockhead would be conscious; the  
intuition
 that a lookup table can't be conscious is like the intuition  
that an

 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A lookup  
table has a
 bounded and very low degree of computational complexity: all  
answers to all

 queries are answered in constant time.

 While the table itself may have an arbitrarily high information  
content,

 what in the software of the lookup table program is there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is  
immensely

large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent, or at least offers the appearance of intelligence, but it takes the maximum possible advantage of the space-time trade-off: http://en.wikipedia.org/wiki/Space–time_tradeoff


The tin-can Turing machine is unbounded in its potential  
computational complexity, there's no reason to be a bio- or  
silico-chauvinist against it. However, by definition, a lookup  
table has near zero computational complexity, no retained state.


But it is counterfactually correct over a large range. Of course, it has to be infinite to be genuinely counterfactually correct.



But the structure of the counterfactuals is identical regardless  
of the inputs and outputs in its lookup table. If you replaced all  
of its outputs with random strings, would that change its  
consciousness? What if there existed a special decoding book,  
which was a one-time-pad that could decode its random answers?  
Would the existence of this book make it more conscious than if  
this book did not exist? If there is zero information content in  
the outputs returned by the lookup table it might as well return  
all X characters as its response to any query, but then would  
any program that just returns a string of X's be conscious?


A lookup table might have some primitive consciousness, but I think any consciousness it has would be more or less the same regardless of the number of entries within that lookup table. With more entries, its information content grows, but its capacity to process, interpret, or understand that information remains constant.


You can emulate the brain of Einstein with a (ridiculously large) look-up table, assuming you are ridiculously patient---or we slow down your own brain so that you are as slow as Einstein.

Is that incarnation a zombie?

Again, with comp, all incarnations are zombies, because bodies do not think. It is the abstract person which thinks, and in this case Einstein will still be defined by the simplest normal computations, which here, and only here, have taken the form of that implausible giant Einstein look-up table emulation at the right level.



That last bit is the part I have difficulty with. How can a single call to a lookup table ever be at the right level?


Actually, the Turing machine formalism is a type of look-up table:  
if you are scanning input i (big numbers describing all your current  
sensitive entries,  while you are in state  
q_169757243685173427379910054234647572376400064994542424646334345787910190034 
676754100687. (big number describing 
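A minimal sketch of the point being made here: a Turing machine's transition rule is literally a finite lookup table, keyed by (current state, scanned symbol). The toy machine below (Python; the states, alphabet and bit-flipping task are illustrative assumptions, not anything from the thread) does one table lookup per step.

    # transition table: (state, symbol) -> (new state, symbol to write, head move)
    DELTA = {
        ("flip", "0"): ("flip", "1", 1),
        ("flip", "1"): ("flip", "0", 1),
        ("flip", "_"): ("halt", "_", 0),
    }

    def run(tape):
        tape, state, head = list(tape) + ["_"], "flip", 0
        while state != "halt":
            state, tape[head], move = DELTA[(state, tape[head])]  # one lookup per step
            head += move
        return "".join(tape).rstrip("_")

    assert run("0110") == "1001"   # flips every bit of its input

The contrast with the giant look-up table discussed above is that here the table is tiny and consulted step by step, whereas there the whole input-to-output map is tabulated at once.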

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Stathis Papaioannou
On Wednesday, May 27, 2015, Russell Standish li...@hpcoders.com.au wrote:

 On Tue, May 26, 2015 at 08:17:39PM -0500, Jason Resch wrote:
   Not at all. My suggestion is that there wouldn't be any partial
   zombies, just normally functioning consciousness, and full zombies,
   with respect to Chalmers fading qualia experiment, due to network
 effects.
  
 
  Doesn't this lead to the problem of suddenly disappearing qualia, which
  Chalmers describes?  What do you think about Chalmers's objections to
  suddenly disappearing qualia?
 
 
  
   Obviously, with functionalism (and computationalism), consciousness is
   retained throughout, and no zombies appear. Chalmers was trying to
   show an absurdity with non-functionalism, and I don't think it works,
   except insofar as full zombies are absurd.
  
 
  Why do you think it fails? Because you can accept the possibility of
  suddenly disappearing qualia?
 
  Jason
 

 It fails, because it relies on the absurdity of partial zombies, which
 I don't think non-functionalism implies.

 We can remove functionalism from the equation by just considering what
 happens as we remove neurons from a brain. I would seriously expect
 that the qualia will switch off one-by-one as the necessary network
 connections are broken, rather than fading into nothing as Chalmers
 supposes.


But that is not what happens when brain tissue is destroyed, as in a
stroke. Qualia do actually fade, and entire sensory and
cognitive modalities fade, leaving others intact. What you are proposing is
that with the replacement neurons the qualia do not fade - so to this
extent the replacement neurons support consciousness; then, at a certain
point, all the qualia suddenly disappear, whereas if the neurons had simply
been replaced rather than destroyed the qualia would have faded and overall
consciousness maintained.


 Therefore in the fading qualia setup, a non-functionalist ought to
 think the same way - qualia switching off one-by-one until complete
 zombiehood is achieved, well prior to complete replacement of the brain by its functional equivalents.

 Cheers

 --


 
 Prof Russell Standish  Phone 0425 253119 (mobile)
 Principal, High Performance Coders
 Visiting Professor of Mathematics  hpco...@hpcoders.com.au
 University of New South Wales  http://www.hpcoders.com.au

 




-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Stathis Papaioannou
On Wednesday, May 27, 2015, Stathis Papaioannou stath...@gmail.com wrote:



 On Wednesday, May 27, 2015, Russell Standish li...@hpcoders.com.au wrote:

 On Tue, May 26, 2015 at 08:17:39PM -0500, Jason Resch wrote:
   Not at all. My suggestion is that there wouldn't be any partial
   zombies, just normally functioning consciousness, and full zombies,
   with respect to Chalmers fading qualia experiment, due to network
 effects.
  
 
  Doesn't this lead to the problem of suddenly disappearing qualia, which
  Chalmers describes?  What do you think about Chalmers's objections to
  suddenly disappearing qualia?
 
 
  
   Obviously, with functionalism (and computationalism), consciousness is
   retained throughout, and no zombies appear. Chalmers was trying to
   show an absurdity with non-functionalism, and I don't think it works,
   except insofar as full zombies are absurd.
  
 
  Why do you think it fails? Because you can accept the possibility of
  suddenly disappearing qualia?
 
  Jason
 

 It fails, because it relies on the absurdity of partial zombies, which
 I don't think non-functionalism implies.

 We can remove functionalism from the equation by just considering what
 happens as we remove neurons from a brain. I would seriously expect
 that the qualia will switch off one-by-one as the necessary network
 connections are broken, rather than fading into nothing as Chalmers
 supposes.


 But that is not what happens when brain tissue is destroyed, as in a
 stroke. Qualia do actually fade, and entire sensory and
 cognitive modalities fade, leaving others intact. What you are proposing
 is that with the replacement neurons the qualia do not fade - so to this
 extent the replacement neurons support consciousness; then, at a certain
 point, all the qualia suddenly disappear, whereas if the neurons had simply
 been replaced rather than destroyed the qualia would have faded and overall
 consciousness maintained.


I meant to say here if the neurons had simply been destroyed rather than
replaced the qualia would have faded...


 Therefore in the fading qualia setup, a non-functionalist ought to
 think the same way - qualia switching off one-by-one until complete
 zombiehood is achieved, well prior to complete replacement of the brain by its functional equivalents.

 Cheers

 --


 
 Prof Russell Standish  Phone 0425 253119 (mobile)
 Principal, High Performance Coders
 Visiting Professor of Mathematics  hpco...@hpcoders.com.au
 University of New South Wales  http://www.hpcoders.com.au

 




 --
 Stathis Papaioannou



-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread meekerdb

On 5/26/2015 10:31 PM, Pierz wrote:


Where I see lookup tables fail is that they seem to operate above the probable necessary substitution level. (Despite having the same inputs/outputs at the higher levels.)

But your memoization example still makes a good point - namely that some computations 
can be bypassed in favour of recordings, yet presumably this doesn't lead to fading 
qualia. We don't need anything as silly as a gigantic lookup table of all possible 
responses. We only need to acknowledge that we can store the results of recordings of 
computations we've already completed, and that this should not result in any strange 
degradation of consciousness.


Isn't that what allows me to drive home from work without being conscious of it?

Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Jason Resch
On Wed, May 27, 2015 at 12:31 AM, Pierz pier...@gmail.com wrote:



 On Wednesday, May 27, 2015 at 11:27:26 AM UTC+10, Jason wrote:



 On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 25 May 2015, at 02:06, Jason Resch wrote:



 On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 23 May 2015, at 17:07, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be
 wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stat...@gmail.com wrote:

 On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com
  wrote:

 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the
 functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate
 the same
   consciousness. But a calculator computing 2+3 cannot substitute
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to
 you (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was in fact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same
 way
  as me. But I suspect the Blockhead would be conscious; the
 intuition
  that a lookup table can't be conscious is like the intuition that
 an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table
 has a
  bounded and very low degree of computational complexity: all
 answers to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it takes the maximum possible advantage of the 
 space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against 
 it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state.
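
To make the space-time trade-off point above concrete, here is a minimal
sketch in Python (illustrative only, not code from the thread; the toy
respond function and the tiny query set are assumptions). The same
input/output behaviour is realised either by computing on demand or by a
precomputed table that answers every stored query in constant time:

    # Stand-in for an arbitrary computation (the system doing real work).
    def respond(query):
        return query.upper()[::-1]

    # Trade time for space: precompute one entry per possible query, so that
    # answering becomes a single constant-time dictionary lookup.
    QUERIES = ["hello", "how are you", "what is 2+3"]
    LOOKUP_TABLE = {q: respond(q) for q in QUERIES}

    def respond_via_table(query):
        # No computation on the query itself; just retrieval.
        return LOOKUP_TABLE[query]

    assert respond("hello") == respond_via_table("hello")

Externally the two are indistinguishable on the stored queries, which is the
sense in which a Blockhead-style table offers the appearance of intelligence
while doing almost no computation per query.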


 But it is counterfactually correct on a large range spectrum. Of
 course, it has to be infinite to be genuinely counterfactual-correct.


 But the structure of the counterfactuals is identical regardless of the
 inputs and outputs in its lookup table. If you replaced all of its outputs
 with random strings, would that change its consciousness? What if there
 existed a special decoding book, which was a one-time-pad that could decode
 its random answers? Would the existence of this book make it more conscious
 than if this book did not exist? If there is zero information content in
 the outputs returned by the lookup table it might as well return all X
 characters as its response to any query, but then would any program that
 just returns a string of X's be conscious?

 A lookup table might have some primitive consciousness, but I think any
 consciousness it has would be more or less the same regardless of the
 number of entries within that lookup table. With more entries, its
 information content grows, but its capacity to process, interpret, or
 understand that information remains constant.


 You can emulate the brain of Einstein with a (ridiculously  large)
 look-up table, assuming you are ridiculously patient---or we slow down your
 own brain so that you are as slow as Einstein.
 Is that incarnation a zombie?

 Again, with comp, all incarnations are zombie, because bodies do not
 think. It is the abstract person which thinks, and in this case Einstein
 will still be defined by the simplest normal computations, which here,
 and only here, have taken the form of that implausible giant Einstein
 look-up table emulation at the right level.


 That last bit is the part I have difficulty with. How can a single
 call to a lookup table ever be at the right level?


 Actually, the Turing machine formalism is a type of look-up table: if
 you are scanning input i (big numbers describing all your current sensitive
 entries), while you are in state
 q_169757243685173427379910054234647572376400064994542424646334345787910190034676754100687.
 (big number 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Terren Suydam
Language starts to get in the way here, but what you're suggesting is akin
to someone who is blind-drunk - they will have no memory of their
experience, but I think most would say a blind-drunk is conscious.

But I think the driving scenario is different in that my conscious
attention is elsewhere... there's competition for the resource of
attention. I don't really think I'm conscious of the feeling of the floor
pressing my feet until I pay attention to it.

My thinking on this is that human consciousness involves a unified/global
dynamic, and the unifying thread is the self-model or ego. This allows for
top-down control of attention. When parts of the sensorium (and other
aspects of the mind) are not involved or included in this global dynamic,
there is a significant sense in which they do not participate in that human
consciousness. This is not to say that there is no other consciousness -
just that it is perhaps of a lower form in a hierarchy of consciousness.

I would highlight that human consciousness is somewhat unique in that the
ego - a cultural innovation dependent on the development of language - is
not present in animals. Without that unifying thread of ego, I suggest that
animal consciousness is not unlike our dream consciousness, which is an
arena of awareness when the thread of our ego dissolves. A visual I have is
that in the waking state, the ego is a bag that encapsulates all the parts
that make up our psyche. In dreamtime, the drawstring on the bag loosens
and the parts float out, and get activated according to whatever seemingly
random processes that constitute dreams.

In lucid dreams, the ego is restored (i.e. we say to ourselves, *I* *am*
dreaming) - and we regain consciousness.

Terren







On Wed, May 27, 2015 at 10:10 PM, Jason Resch jasonre...@gmail.com wrote:

 Are we any less conscious of it as it happens, or perhaps our brains are
 simply not forming as many memories of usual/uneventful tasks.

 Jason


 On Wed, May 27, 2015 at 9:06 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 In the driving scenario it is clear that computation is involved, because
 all sorts of contingent things can be going on (e.g. dynamics of driving
 among other cars), yet this occurs without crossing the threshold of
 consciousness. Relying on some kind of caching mechanism under such
 circumstances would quickly fail one way or another.

 Terren
 On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:



 On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

  On 5/26/2015 10:31 PM, Pierz wrote:

   Where I see lookup tables fail is that they seem to operate above
 the probable necessary substitution level. (Despite having the same
 inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely
 that some computations can be bypassed in favour of recordings, yet
 presumably this doesn't lead to fading qualia. We don't need anything as
 silly as a gigantic lookup table of all possible responses. We only need to
 acknowledge that we can store the results of recordings of computations
 we've already completed, and that this should not result in any strange
 degradation of consciousness.


 Isn't that what allows me to drive home from work without being
 conscious of it?


 People keep making this point, which is one that I myself made in the
 past - and I believe you argued with me at the time, saying that it's not
 clear that the mechanism for automating brain functions is anything like
 the same as caching the results of a computation. I think that objection is
 actually fair enough. With automated actions it's not clear that the
 computations aren't being carried out any more, just that they no longer
 require conscious attention because the neuronal pathways for those
 computations have become sufficiently reinforced that they no longer
 require concentration. I think this model (automated computation rather
 than cached computation) fits our experience of this phenomenon. Sometimes
 I suspect we're really talking out of our proverbial arses with these
 speculations as we still have so little idea about how the brain works. It
 may be a computer in the sense that it is Turing emulable, but then we talk
 as if it were a squishy laptop or something, and that analogy can be
 misleading in many ways. For example, our memories are nothing like RAM.
 They are distributed like a hologram, constructive and fuzzy, whereas
 computer memory is localised, passive and accurate to the bit. I'm probably
 guilty of the same over-zealous computationalism with my lookup table
 analogy above, but I was thinking more of an AI and the in-principle point
 that cached computation results may be employed at a fine grained level. I
 would continue to insist that it is meaningless to say that a brain that
 employs cached results of computations is a zombie to the extent that it
 does so, because it is meaningless to speak of the when of qualia. (You
 never replied to my 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Russell Standish
On Wed, May 27, 2015 at 08:22:58PM -0500, Jason Resch wrote:
 On Wed, May 27, 2015 at 2:17 AM, Russell Standish li...@hpcoders.com.au
 wrote:
 
  On Tue, May 26, 2015 at 08:17:39PM -0500, Jason Resch wrote:
Not at all. My suggestion is that there wouldn't be any partial
zombies, just normally functioning consciousness, and full zombies,
with respect to Chalmers fading qualia experiment, due to network
  effects.
   
  
   Doesn't this lead to the problem of suddenly disappearing qualia, which
   Chalmers describes?  What do you think about Chalmers's objections to
   suddenly disappearing qualia?
  
  
   
Obviously, with functionalism (and computationalism), consciousness is
retained throughout, and no zombies appear. Chalmers was trying to
show an absurdity with non-functionalism, and I don't think it works,
except insofar as full zombies are absurd.
   
  
   Why do you think it fails? Because you can accept the possibility of
   suddenly disappearing qualia?
  
   Jason
  
 
  It fails, because it relies on the absurdity of partial zombies, which
  I don't think non-functionalism implies.
 
 
 So are you saying it fails because you think partial zombies are not
 absurd, but rather are possible?

No, it fails because non-functionalism doesn't imply their
existence. The absurdity or not of partial zombies is irrelevant.

 
 
 
  We can remove functionalism from the equation by just considering what
  happens as we remove neurons from a brain. I would seriously expect
  that the qualia will switch off one-by-one as the necessary network
  connections are broken, rather than fading into nothing as Chalmers
  supposes.
 
 
 This is the suddenly disappearing qualia which is the other of the two
 possibilities Chalmers suggests for non-functionalism, is it not?
 

Yes.

 
 
  Therefore in the fading qualia setup, a non-functionalist ought to
  think the same way - qualia switching off one-by-one until complete
  zombiehood is achieved, well prior to complete replacement of the brain
  by its functional equivalents.
 
 
 Do you think we could interview the person after the fact about at what
 point their qualia disappeared?
 

No - because if we could, then they're not a (partial-)zombie.

 Jason
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to everything-list+unsubscr...@googlegroups.com.
 To post to this group, send email to everything-list@googlegroups.com.
 Visit this group at http://groups.google.com/group/everything-list.
 For more options, visit https://groups.google.com/d/optout.

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Jason Resch
Are we any less conscious of it as it happens, or perhaps our brains are
simply not forming as many memories of usual/uneventful tasks.

Jason

On Wed, May 27, 2015 at 9:06 PM, Terren Suydam terren.suy...@gmail.com
wrote:

 In the driving scenario it is clear that computation is involved, because
 all sorts of contingent things can be going on (e.g. dynamics of driving
 among other cars), yet this occurs without crossing the threshold of
 consciousness. Relying on some kind of caching mechanism under such
 circumstances would quickly fail one way or another.

 Terren
 On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:



 On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

  On 5/26/2015 10:31 PM, Pierz wrote:

   Where I see lookup tables fail is that they seem to operate above the
 probable necessary substitution level. (Despite having the same
 inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely that
 some computations can be bypassed in favour of recordings, yet presumably
 this doesn't lead to fading qualia. We don't need anything as silly as a
 gigantic lookup table of all possible responses. We only need to
 acknowledge that we can store the results of recordings of computations
 we've already completed, and that this should not result in any strange
 degradation of consciousness.


 Isn't that what allows me to drive home from work without being
 conscious of it?


 People keep making this point, which is one that I myself made in the
 past - and I believe you argued with me at the time, saying that it's not
 clear that the mechanism for automating brain functions is anything like
 the same as caching the results of a computation. I think that objection is
 actually fair enough. With automated actions it's not clear that the
 computations aren't being carried out any more, just that they no longer
 require conscious attention because the neuronal pathways for those
 computations have become sufficiently reinforced that they no longer
 require concentration. I think this model (automated computation rather
 than cached computation) fits our experience of this phenomenon. Sometimes
 I suspect we're really talking out of our proverbial arses with these
 speculations as we still have so little idea about how the brain works. It
 may be a computer in the sense that it is Turing emulable, but then we talk
 as if it were a squishy laptop or something, and that analogy can be
 misleading in many ways. For example, our memories are nothing like RAM.
 They are distributed like a hologram, constructive and fuzzy, whereas
 computer memory is localised, passive and accurate to the bit. I'm probably
 guilty of the same over-zealous computationalism with my lookup table
 analogy above, but I was thinking more of an AI and the in-principle point
 that cached computation results may be employed at a fine grained level. I
 would continue to insist that it is meaningless to say that a brain that
 employs cached results of computations is a zombie to the extent that it
 does so, because it is meaningless to speak of the when of qualia. (You
 never replied to my argument about poking a recorded Einstein with a stick,
 which I think makes a compelling case for this.) We have to rigorously
 divide the subjective and the objective.


 Brent

  --
 You received this message because you are subscribed to the Google Groups
 Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to everything-list+unsubscr...@googlegroups.com.
 To post to this group, send email to everything-list@googlegroups.com.
 Visit this group at http://groups.google.com/group/everything-list.
 For more options, visit https://groups.google.com/d/optout.

  --
 You received this message because you are subscribed to the Google Groups
 Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to everything-list+unsubscr...@googlegroups.com.
 To post to this group, send email to everything-list@googlegroups.com.
 Visit this group at http://groups.google.com/group/everything-list.
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Stathis Papaioannou
On Thursday, May 28, 2015, Russell Standish li...@hpcoders.com.au wrote:

 On Wed, May 27, 2015 at 11:22:59PM +1000, Stathis Papaioannou wrote:
 
  But that is not what happens when brain tissue is destroyed, as in a
  stroke. Qualia do actually fade, and entire sensory and
  cognitive modalities fade, leaving others intact. What you are proposing
 is

 Then perhaps we're talking at cross purposes. Do you really think the
 person with a stroke experiences (say) half the quale of red? Is that
 some sort of pink then? Or what?

 I suspect they either experience red, or they don't.

 Or substitute your own quale here if it helps the argument.


A stroke results in objective and subjective changes, such as the inability
to move a limb, understand language or see. These are gross examples of
fading qualia. What you are proposing, as I understand it, is that if the
damaged brain is replaced neuron by neuron, function and qualia would be
restored, but at a certain point when you installed the artificial neuron,
the patient would suddenly lose all consciousness, although to an outside
observer he would seem to be continuing to improve.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Russell Standish
On Thu, May 28, 2015 at 01:23:20PM +1000, Stathis Papaioannou wrote:
 
 A stroke results in objective and subjective changes, such as the inability
 to move a limb, understand language or see. These are gross examples of
 fading qualia. What you are proposing, as I understand it, is that if the
 damaged brain is replaced neuron by neuron, function and qualia would be
 restored, but at a certain point when you installed the artificial neuron,
 the patient would suddenly lose all consciousness, although to an outside
 observer he would seem to be continuing to improve.
 

Rather than focus on the term fading qualia, of which we seem to
have derived a different understanding from the same paper, what
you're saying is that a stroke victim can suffer diminished qualia, i.e.
qualia no longer existing where once there was some, e.g. no longer
being able to smell a banana, or having gone colour blind, or similar.

That same person may or may not be aware of the fact that they've lost
their qualia.

Then we can consider a nonfunctionalist response to the experiment
where neurons are replaced by functionally equivalent artificial
replacements. A nonfunctionalist ought to treat this like the stroke
case - at some point qualia will disappear. If the person notices the
qualia disappearing, the functionalist would simply say that the
replacement parts are not functionally equivalent, or that the
substitution level was wrong. This is not an interesting case.

A partial zombie, by definition, will always report that the full qualia
were present, just as a full zombie would.

If, however, the person fails to notice the disappearing qualia, a
functionalist would say that the full set of qualia was being
experienced, whereas a nonfunctionalist would say that the person
might be experiencing a reduced set of qualia, even though e reports
all er faculties being sound. Is this absurd? - no more absurd than the
notion of a full zombie, I would say.

But quales, as continua, which I took to be Chalmers' absurdity,
simply are not implied. Quales exist discretely.

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Terren Suydam
A small correction: I said ... and the unifying thread is the self-model
or ego. This allows for top-down control of attention.   What I mean to
say is This allows for a narrative of top-down control. It is not
actually clear that there is any such thing as top-down control, although
we routinely justify such in the service of maintaining our self models.

On Wed, May 27, 2015 at 11:16 PM, Terren Suydam terren.suy...@gmail.com
wrote:

 Language starts to get in the way here, but what you're suggesting is akin
 to someone who is blind-drunk - they will have no memory of their
 experience, but I think most would say a blind-drunk is conscious.

 But I think the driving scenario is different in that my conscious
 attention is elsewhere... there's competition for the resource of
 attention. I don't really think I'm conscious of the feeling of the floor
 pressing my feet until I pay attention to it.

 My thinking on this is that human consciousness involves a unified/global
 dynamic, and the unifying thread is the self-model or ego. This allows for
 top-down control of attention. When parts of the sensorium (and other
 aspects of the mind) are not involved or included in this global dynamic,
 there is a significant sense in which they do not participate in that human
 consciousness. This is not to say that there is no other consciousness -
 just that it is perhaps of a lower form in a hierarchy of consciousness.

 I would highlight that human consciousness is somewhat unique in that the
 ego - a cultural innovation dependent on the development of language - is
 not present in animals. Without that unifying thread of ego, I suggest that
 animal consciousness is not unlike our dream consciousness, which is an
 arena of awareness when the thread of our ego dissolves. A visual I have is
 that in the waking state, the ego is a bag that encapsulates all the parts
 that make up our psyche. In dreamtime, the drawstring on the bag loosens
 and the parts float out, and get activated according to whatever seemingly
 random processes that constitute dreams.

 In lucid dreams, the ego is restored (i.e. we say to ourselves, *I* *am*
 dreaming) - and we regain consciousness.

 Terren







 On Wed, May 27, 2015 at 10:10 PM, Jason Resch jasonre...@gmail.com
 wrote:

 Are we any less conscious of it as it happens, or perhaps our brains are
 simply not forming as many memories of usual/uneventful tasks.

 Jason


 On Wed, May 27, 2015 at 9:06 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 In the driving scenario it is clear that computation is involved,
 because all sorts of contingent things can be going on (e.g. dynamics of
 driving among other cars), yet this occurs without crossing the threshold
 of consciousness. Relying on some kind of caching mechanism under such
 circumstances would quickly fail one way or another.

 Terren
 On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:



 On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

  On 5/26/2015 10:31 PM, Pierz wrote:

   Where I see lookup tables fail is that they seem to operate above
 the probable necessary substitution level. (Despite having the same
 inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely
 that some computations can be bypassed in favour of recordings, yet
 presumably this doesn't lead to fading qualia. We don't need anything as
 silly as a gigantic lookup table of all possible responses. We only need 
 to
 acknowledge that we can store the results of recordings of computations
 we've already completed, and that this should not result in any strange
 degradation of consciousness.


 Isn't that what allows me to drive home from work without being
 conscious of it?


 People keep making this point, which is one that I myself made in the
 past - and I believe you argued with me at the time, saying that it's not
 clear that the mechanism for automating brain functions is anything like
 the same as caching the results of a computation. I think that objection is
 actually fair enough. With automated actions it's not clear that the
 computations aren't being carried out any more, just that they no longer
 require conscious attention because the neuronal pathways for those
 computations have become sufficiently reinforced that they no longer
 require concentration. I think this model (automated computation rather
 than cached computation) fits our experience of this phenomenon. Sometimes
 I suspect we're really talking out of our proverbial arses with these
 speculations as we still have so little idea about how the brain works. It
 may be a computer in the sense that it is Turing emulable, but then we talk
 as if it were a squishy laptop or something, and that analogy can be
 misleading in many ways. For example, our memories are nothing like RAM.
 They are distributed like a hologram, constructive and fuzzy, whereas
 computer memory is localised, passive and accurate to the bit. I'm probably
 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Pierz


On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

  On 5/26/2015 10:31 PM, Pierz wrote:
  
   Where I see lookup tables fail is that they seem to operate above the 
 probable necessary substitution level. (Despite having the same 
 inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely that 
 some computations can be bypassed in favour of recordings, yet presumably 
 this doesn't lead to fading qualia. We don't need anything as silly as a 
 gigantic lookup table of all possible responses. We only need to 
 acknowledge that we can store the results of recordings of computations 
 we've already completed, and that this should not result in any strange 
 degradation of consciousness. 


 Isn't that what allows me to drive home from work without being conscious 
 of it?


People keep making this point, which is one that I myself made in the past 
- and I believe you argued with me at the time, saying that it's not clear 
that the mechanism for automating brain functions is anything like the same 
as caching the results of a computation. I think that objection is actually 
fair enough. With automated actions it's not clear that the computations 
aren't being carried out any more, just that they no longer require 
conscious attention because the neuronal pathways for those computations 
have become sufficiently reinforced that they no longer require 
concentration. I think this model (automated computation rather than cached 
computation) fits our experience of this phenomenon. Sometimes I suspect 
we're really talking out of our proverbial arses with these speculations as 
we still have so little idea about how the brain works. It may be a 
computer in the sense that it is Turing emulable, but then we talk as if it 
were a squishy laptop or something, and that analogy can be misleading in 
many ways. For example, our memories are nothing like RAM. They are 
distributed like a hologram, constructive and fuzzy, whereas computer 
memory is localised, passive and accurate to the bit. I'm probably guilty 
of the same over-zealous computationalism with my lookup table analogy 
above, but I was thinking more of an AI and the in-principle point that 
cached computation results may be employed at a fine grained level. I would 
continue to insist that it is meaningless to say that a brain that 
employs cached results of computations is a zombie to the extent that it 
does so, because it is meaningless to speak of the when of qualia. (You 
never replied to my argument about poking a recorded Einstein with a stick, 
which I think makes a compelling case for this.) We have to rigorously 
divide the subjective and the objective.
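
The "cached results of computations" idea being discussed here is ordinary
memoization. A minimal sketch (illustrative only, not from the thread),
assuming Python's standard functools.lru_cache and a toy fib function:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # The first call with a given n runs the recursion; later calls with
        # the same n replay the stored result without recomputing anything.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(30)   # computed
    fib(30)   # replayed from the cache, no recomputation

The open question in the thread is whether replaying a stored result, rather
than re-running the computation, makes any difference to the associated
qualia.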


 Brent
  

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Terren Suydam
In the driving scenario it is clear that computation is involved, because
all sorts of contingent things can be going on (e.g. dynamics of driving
among other cars), yet this occurs without crossing the threshold of
consciousness. Relying on some kind of caching mechanism under such
circumstances would quickly fail one way or another.

Terren
On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:



 On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

  On 5/26/2015 10:31 PM, Pierz wrote:

   Where I see lookup tables fail is that they seem to operate above the
 probable necessary substitution level. (Despite having the same
 inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely that
 some computations can be bypassed in favour of recordings, yet presumably
 this doesn't lead to fading qualia. We don't need anything as silly as a
 gigantic lookup table of all possible responses. We only need to
 acknowledge that we can store the results of recordings of computations
 we've already completed, and that this should not result in any strange
 degradation of consciousness.


 Isn't that what allows me to drive home from work without being conscious
 of it?


 People keep making this point, which is one that I myself made in the past
 - and I believe you argued with me at the time, saying that it's not clear
 that the mechanism for automating brain functions is anything like the same
 as caching the results of a computation. I think that objection is actually
 fair enough. With automated actions it's not clear that the computations
 aren't being carried out any more, just that they no longer require
 conscious attention because the neuronal pathways for those computations
 have become sufficiently reinforced that they no longer require
 concentration. I think this model (automated computation rather than cached
 computation) fits our experience of this phenomenon. Sometimes I suspect
 we're really talking out of our proverbial arses with these speculations as
 we still have so little idea about how the brain works. It may be a
 computer in the sense that it is Turing emulable, but then we talk as if it
 were squishy laptop or something, and that analogy can be misleading in
 many ways. For example, our memories are nothing like RAM. They are
 distributed like a hologram, constructive and fuzzy, whereas computer
 memory is localised, passive and accurate to the bit. I'm probably guilty
 of the same over-zealous computationalism with my lookup table analogy
 above, but I was thinking more of an AI and the in-principle point that
 cached computation results may be employed at a fine grained level. I would
 continue to insist that it is meaningless to say that a brain that
 employs cached results of computations is a zombie to the extent that it
 does so, because it is meaningless to speak of the when of qualia. (You
 never replied to my argument about poking a recorded Einstein with a stick,
 which I think makes a compelling case for this.) We have to rigorously
 divide the subjective and the objective.


 Brent

  --
 You received this message because you are subscribed to the Google Groups
 Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to everything-list+unsubscr...@googlegroups.com.
 To post to this group, send email to everything-list@googlegroups.com.
 Visit this group at http://groups.google.com/group/everything-list.
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread meekerdb

On 5/27/2015 7:06 PM, Terren Suydam wrote:


In the driving scenario it is clear that computation is involved, because all sorts of 
contingent things can be going on (e.g. dynamics of driving among other cars), yet this 
occurs without crossing the threshold of consciousness. Relying on some kind of caching 
mechanism under such circumstances would quickly fail one way or another.


Terren

On May 27, 2015 7:38 PM, Pierz pier...@gmail.com wrote:



On Thursday, May 28, 2015 at 6:06:22 AM UTC+10, Brent wrote:

On 5/26/2015 10:31 PM, Pierz wrote:


Where I see lookup tables fail is that they seem to operate above the
probable necessary substitution level. (Despite having the same
inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely that some
computations can be bypassed in favour of recordings, yet presumably 
this
doesn't lead to fading qualia. We don't need anything as silly as a 
gigantic
lookup table of all possible responses. We only need to acknowledge 
that we can
store the results of recordings of computations we've already 
completed, and
that this should not result in any strange degradation of consciousness.


Isn't that what allows me to drive home from work without being 
conscious of it?


People keep making this point, which is one that I myself made in the past 
- and I
believe you argued with me at the time, saying that it's not clear that the
mechanism for automating brain functions is anything like the same as 
caching the
results of a computation. I think that objection is actually fair enough. 
With
automated actions it's not clear that the computations aren't being carried 
out any
more, just that they no longer require conscious attention because the 
neuronal
pathways for those computations have become sufficiently reinforced that 
they no
longer require concentration. I think this model (automated computation 
rather than
cached computation) fits our experience of this phenomenon. Sometimes I 
suspect
we're really talking out of our proverbial arses with these speculations as 
we still
have so little idea about how the brain works. It may be a computer in the 
sense
that it is Turing emulable, but then we talk as if it were a squishy laptop or
something, and that analogy can be misleading in many ways. For example, our
memories are nothing like RAM. They are distributed like a hologram, 
constructive
and fuzzy, whereas computer memory is localised, passive and accurate to 
the bit.
I'm probably guilty of the same over-zealous computationalism with my 
lookup table
analogy above, but I was thinking more of an AI and the in-principle point 
that
cached computation results may be employed at a fine grained level. I would 
continue
to insist that it is meaningless to say that a brain that employs cached 
results
of computations is a zombie to the extent that it does so, because it is 
meaningless
to speak of the when of qualia. (You never replied to my argument about 
poking a
recorded Einstein with a stick, which I think makes a compelling case for 
this.) We
have to rigorously divide the subjective and the objective.


Brent

-- 
You received this message because you are subscribed to the Google Groups
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an
email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups Everything 
List group.
To unsubscribe from this group and stop receiving emails from it, send an email to 
everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.

Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Russell Standish
On Wed, May 27, 2015 at 11:22:59PM +1000, Stathis Papaioannou wrote:
 
 But that is not what happens when brain tissue is destroyed, as in a
 stroke. Qualia do actually fade, and entire sensory and
 cognitive modalities fade, leaving others intact. What you are proposing is

Then perhaps we're talking at cross purposes. Do you really think the
person with a stroke experiences (say) half the quale of red? Is that
some sort of pink then? Or what?

I suspect they either experience red, or they don't. 

Or substitute your own quale here if it helps the argument.


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Jason Resch
On Wed, May 27, 2015 at 2:17 AM, Russell Standish li...@hpcoders.com.au
wrote:

 On Tue, May 26, 2015 at 08:17:39PM -0500, Jason Resch wrote:
   Not at all. My suggestion is that there wouldn't be any partial
   zombies, just normally functioning consciousness, and full zombies,
   with respect to Chalmers fading qualia experiment, due to network
 effects.
  
 
  Doesn't this lead to the problem of suddenly disappearing qualia, which
  Chalmers describes?  What do you think about Chalmers's objections to
  suddenly disappearing qualia?
 
 
  
   Obviously, with functionalism (and computationalism), consciousness is
   retained throughout, and no zombies appear. Chalmers was trying to
   show an absurdity with non-functionalism, and I don't think it works,
   except insofar as full zombies are absurd.
  
 
  Why do you think it fails? Because you can accept the possibility of
  suddenly disappearing qualia?
 
  Jason
 

 It fails, because it relies on the absurdity of partial zombies, which
 I don't think non-functionalism implies.


So are you saying it fails because you think partial zombies are not
absurd, but rather are possible?



 We can remove functionalism from the equation by just considering what
 happens as we remove neurons from a brain. I would seriously expect
 that the qualia will switch off one-by-one as the necessary network
 connections are broken, rather than fading into nothing as Chalmers
 supposes.


This is the suddenly disappearing qualia which is the other of the two
possibilities Chalmers suggests for non-functionalism, is it not?



 Therefore in the fading qualia setup, a non-functionalist ought to
 think the same way - qualia switching off one-by-one until complete
  zombiehood is achieved, well prior to complete replacement of the brain
  by its functional equivalents.


Do you think we could interview the person after the fact about at what
point their qualia disappeared?

Jason

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-27 Thread Russell Standish
On Tue, May 26, 2015 at 08:17:39PM -0500, Jason Resch wrote:
  Not at all. My suggestion is that there wouldn't be any partial
  zombies, just normally functioning consciousness, and full zombies,
  with respect to Chalmers fading qualia experiment, due to network effects.
 
 
 Doesn't this lead to the problem of suddenly disappearing qualia, which
 Chalmers describes?  What do you think about Chalmers's objections to
 suddenly disappearing qualia?
 
 
 
  Obviously, with functionalism (and computationalism), consciousness is
  retained throughout, and no zombies appear. Chalmers was trying to
  show an absurdity with non-functionalism, and I don't think it works,
  except insofar as full zombies are absurd.
 
 
 Why do you think it fails? Because you can accept the possibility of
 suddenly disappearing qualia?
 
 Jason
 

It fails, because it relies on the absurdity of partial zombies, which
I don't think non-functionalism implies.

We can remove functionalism from the equation by just considering what
happens as we remove neurons from a brain. I would seriously expect
that the qualia will switch off one-by-one as the necessary network
connections are broken, rather than fading into nothing as Chalmers
supposes.

Therefore in the fading qualia setup, a non-functionalist ought to
think the same way - qualia switching off one-by-one until complete
zombiehood is achieved, well prior to complete replacement of the brain
by its functional equivalents.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-26 Thread Jason Resch
On Sun, May 24, 2015 at 7:15 PM, Russell Standish li...@hpcoders.com.au
wrote:

 On Sun, May 24, 2015 at 07:19:46PM +0200, Bruno Marchal wrote:
 
  On 24 May 2015, at 10:21, Stathis Papaioannou wrote:
 
  
  I can't really see an alternative other than Russell's suggestion
  that the random activity might perfectly sustain consciousness
  until a certain point, then all consciousness would abruptly stop.
 
  That would lead to the nonsensical partial zombie. Those who say
  I don't feel any difference.
 

 Not at all. My suggestion is that there wouldn't be any partial
 zombies, just normally functioning consciousness, and full zombies,
 with respect to Chalmers fading qualia experiment, due to network effects.


Doesn't this lead to the problem of suddenly disappearing qualia, which
Chalmers describes?  What do you think about Chalmers's objections to
suddenly disappearing qualia?



 Obviously, with functionalism (and computationalism), consciousness is
 retained throughout, and no zombies appear. Chalmers was trying to
 show an absurdity with non-functionalism, and I don't think it works,
 except insofar as full zombies are absurd.


Why do you think it fails? Because you can accept the possibility of
suddenly disappearing qualia?

Jason

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-26 Thread Jason Resch
On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 25 May 2015, at 02:06, Jason Resch wrote:



 On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 23 May 2015, at 17:07, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal marc...@ulb.ac.be
 wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stath...@gmail.com wrote:

 On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stath...@gmail.com
  wrote:
 
  On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate the
 same
   consciousness. But a calculator computing 2+3 cannot substitute
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to you
 (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was in fact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same
 way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table
 has a
  bounded and very low degree of computational complexity: all answers
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it takes the maximum possible advantage of the space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state.


 But it is counterfactually correct on a large range spectrum. Of course,
 it has to be infinite to be genuinely counterfactual-correct.


 But the structure of the counterfactuals is identical regardless of the
 inputs and outputs in its lookup table. If you replaced all of its outputs
 with random strings, would that change its consciousness? What if there
 existed a special decoding book, which was a one-time-pad that could decode
 its random answers? Would the existence of this book make it more conscious
 than if this book did not exist? If there is zero information content in
 the outputs returned by the lookup table it might as well return all X
 characters as its response to any query, but then would any program that
 just returns a string of X's be conscious?
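
One way to picture the counterfactual point being argued here: a recording
replays a fixed trace no matter what you do, while a lookup table is
counterfactually correct on every input it actually covers (and would have to
be infinite to cover them all). A minimal sketch (illustrative only; the
entries are assumptions, not from the thread):

    # A recording ignores the input and replays one fixed trace.
    RECORDING = ["hello", "yes", "goodbye"]

    def replay(step):
        return RECORDING[step]          # same trace for every visitor

    # A (finite) lookup table answers differently for different inputs,
    # i.e. it is counterfactually correct on the queries it covers.
    TABLE = {"2+2": "4", "2+3": "5", "capital of France": "Paris"}

    def answer(query):
        return TABLE[query]             # raises KeyError outside its coverage

    print(answer("2+3"))                # "5"

Replacing the table's outputs with random strings, as suggested above, leaves
this counterfactual structure intact while destroying the information in the
answers, which is exactly the tension being pointed at.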

 A lookup table might have some primitive consciousness, but I think any
 consciousness it has would be more or less the same regardless of the
 number of entries within that lookup table. With more entries, its
 information content grows, but its capacity to process, interpret, or
 understand that information remains constant.


 You can emulate the brain of Einstein with a (ridiculously  large)
 look-up table, assuming you are ridiculously patient---or we slow down your
 own brain so that you are as slow as Einstein.
 Is that incarnation a zombie?

 Again, with comp, all incarnations are zombie, because bodies do not
 think. It is the abstract person which thinks, and in this case Einstein
 will still be defined by the simplest normal computations, which here,
 and only here, have taken the form of that implausible giant Einstein
 look-up table emulation at the right level.


 That last bit is the part I have difficulty with. How can a single call
 to a lookup table ever be at the right level?


 Actually, the Turing machine formalism is a type of look-up table: if you
 are scanning input i (big numbers describing all your current sensitive
 entries), while you are in state
 q_169757243685173427379910054234647572376400064994542424646334345787910190034676754100687
 (a big number describing one of your many possible mental states), then change
 the state into q_888..99 and look up what to do next.

 By construction that 
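
Bruno's transition-table reading of a Turing machine can be made concrete: at
each step the pair (current state, scanned symbol) is looked up in a table
that returns the new state, the symbol to write, and the head move. A minimal
sketch (the particular machine, a unary incrementer, is purely an
illustrative assumption):

    # Transition table: (state, scanned symbol) -> (new state, write, move).
    DELTA = {
        ("scan", "1"): ("scan", "1", 1),   # move right over the 1s
        ("scan", "_"): ("done", "1", 0),   # hit a blank: write 1 and halt
    }

    def run(tape, state="scan", pos=0):
        while state != "done":
            state, tape[pos], move = DELTA[(state, tape[pos])]
            pos += move
        return tape

    print(run(list("111_")))   # ['1', '1', '1', '1']

Every step is a single lookup in DELTA; whatever understanding the machine
has lives in the table together with the loop that keeps consulting it, which
is the sense in which a computation at the right level can in principle take
the form of a table.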

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-26 Thread Pierz


On Wednesday, May 27, 2015 at 11:27:26 AM UTC+10, Jason wrote:



 On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal mar...@ulb.ac.be 
 javascript: wrote:


 On 25 May 2015, at 02:06, Jason Resch wrote:



 On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal mar...@ulb.ac.be 
 javascript: wrote:


 On 23 May 2015, at 17:07, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be 
 javascript: wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stat...@gmail.com javascript: wrote:

 On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com javascript: 
 wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com javascript:
  wrote:
 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com 
 javascript: wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and 
 functionally
   equivalent neurons can (under functionalism) both instantiate the 
 same
   consciousness. But a calculator computing 2+3 cannot substitute 
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a 
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to 
 you (it
   could
   fool all your friends and family in a Turing test scenario into 
 thinking
   it
  was in fact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same 
 way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table 
 has a
  bounded and very low degree of computational complexity: all answers 
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information 
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of 
 intelligence, but it takes the maximum possible advantage of the 
 space-time 
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational 
 complexity, there's no reason to be a bio- or silico-chauvinist against 
 it. 
 However, by definition, a lookup table has near zero computational 
 complexity, no retained state. 


 But it is counterfactually correct on a large range spectrum. Of 
 course, it has to be infinite to be genuinely counterfactual-correct. 


 But the structure of the counterfactuals is identical regardless of the 
 inputs and outputs in its lookup table. If you replaced all of its outputs 
 with random strings, would that change its consciousness? What if there 
 existed a special decoding book, which was a one-time-pad that could decode 
 its random answers? Would the existence of this book make it more conscious 
 than if this book did not exist? If there is zero information content in 
 the outputs returned by the lookup table it might as well return all X 
 characters as its response to any query, but then would any program that 
 just returns a string of X's be conscious?

 A lookup table might have some primitive consciousness, but I think any 
 consciousness it has would be more or less the same regardless of the 
 number of entries within that lookup table. With more entries, its 
 information content grows, but its capacity to process, interpret, or 
 understand that information remains constant.


 You can emulate the brain of Einstein with a (ridiculously  large) 
 look-up table, assuming you are ridiculously patient---or we slow down your 
 own brain so that you are as slow as Einstein.
 Is that incarnation a zombie?

 Again, with comp, all incarnations are zombie, because bodies do not 
 think. It is the abstract person which thinks, and in this case Einstein 
 will still be defined by the simplest normal computations, which here, 
 and only here, have taken the form of that implausible giant Einstein 
 look-up table emulation at the right level.


 That last bit is the part I have difficulty with. How can a single call 
 to a lookup table ever be at the right level?


 Actually, the Turing machine formalism is a type of look-up table: if you 
 are scanning input i (big numbers describing all your current sensitive 
 entries,  while you are in state 
 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-26 Thread Bruno Marchal


On 25 May 2015, at 22:49, meekerdb wrote:


On 5/25/2015 5:16 AM, Pierz wrote:



On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:
On 5/24/2015 4:09 AM, Pierz wrote:



On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:


On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:


On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:


On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be  
wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou  
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou  
stat...@gmail.com

 wrote:


 On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:

  I think you're not taking into account the level of the  
functional
  substitution. Of course functionally equivalent silicon and  
functionally
  equivalent neurons can (under functionalism) both  
instantiate the same
  consciousness. But a calculator computing 2+3 cannot  
substitute for a

  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously be at  
a level
 sufficient to maintain the function of the whole brain.  
Sticking a

 calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent  
to you (it

  could
  fool all your friends and family in a Turing test scenario  
into thinking

  it
  was intact you) would be conscious in the same way as you?

 Not necessarily, just as an actor may not be conscious in the  
same way
 as me. But I suspect the Blockhead would be conscious; the  
intuition
 that a lookup table can't be conscious is like the intuition  
that an

 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A lookup  
table has a
 bounded and very low degree of computational complexity: all  
answers to all

 queries are answered in constant time.

 While the table itself may have an arbitrarily high information  
content,

 what in the software of the lookup table program is there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is  
immensely

large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent or at least offers the appearance  
of intelligence, but it makes the maximum possible advantage of  
the space-time trade off: http://en.wikipedia.org/wiki/Space– 
time_tradeoff


The tin-can Turing machine is unbounded in its potential  
computational complexity, there's no reason to be a bio- or  
silico-chauvinist against it. However, by definition, a lookup  
table has near zero computational complexity, no retained state.


But it is counterfactually correct on a large range spectrum. Of  
course, it has to be infinite to be genuinely counterfactual- 
correct.



But the structure of the counterfactuals is identical regardless  
of the inputs and outputs in its lookup table. If you replaced all  
of its outputs with random strings, would that change its  
consciousness? What if there existed a special decoding book,  
which was a one-time-pad that could decode its random answers?  
Would the existence of this book make it more conscious than if  
this book did not exist? If there is zero information content in  
the outputs returned by the lookup table it might as well return  
all X characters as its response to any query, but then would  
any program that just returns a string of X's be conscious?


I really like this argument, even though I once came up with a  
(bad) attempt to refute it. I wish it received more attention  
because it does cast quite a penetrating light on the issue. What  
you're suggesting is effectively the cache pattern in computer  
programming, where we trade memory resources for computational  
resources. Instead of repeating a resource-intensive computation,  
we store the inputs and outputs for later regurgitation.


How is this different from a movie recording of brain activity  
(which most on the list seem to agree is not conscious)? The  
lookup table is just a really long recording, only we use the  
input to determine which section of the recording to  
fast-forward/rewind to.


It isn't different to a recording. But here's the thing: when we  
ask if the lookup machine is conscious, we are kind of implicitly  
asking: is it having an experience *now*, while I ask the question  
and see a response. But what does such a question actually even  
mean? If a computation is underway in time when the machine  
responds, then I assume it is having a co-temporal experience. But  
the lookup machine idea forces us to the realization that  
different observers' subjective experiences (the pure qualia)  
can't be mapped to one another in objective time. The 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-26 Thread Bruno Marchal


On 26 May 2015, at 00:55, meekerdb wrote:


On 5/25/2015 11:55 AM, Bruno Marchal wrote:


On 24 May 2015, at 23:51, meekerdb wrote:


On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:



On Monday, May 25, 2015, meekerdb meeke...@verizon.net wrote:
On 5/24/2015 1:52 AM, Bruno Marchal wrote:
Again, with comp, all incarnations are zombie, because bodies  
do not think. It is the abstract person which thinks


But a few thumps on the body and the abstract person won't  
think either. So far as we have observed, *only* bodies think.  
If comp implies the contrary, isn't that so much the worse for comp?


In a virtual environment, destroying the body destroys the  
consciousness, but both are actually due to the underlying  
computations.


How can those thumped know it's virtual.  A virtual environment  
with virtual people doing virtual actions seems to make virtual  
virtually meaningless.


It is the difference between life and second life. Reality, and  
relative dreams.


That's the question: can such a difference be meaningful if the  
world is defined by conscious experience? In the examples you give,  
the virtual is distinguished because it is not as rich, complete,  
and consistent as real life.


With computationalism (and its consequences) there is a real physical  
bottom, which is the same for all creatures. In that sense, comp makes  
physics much more grounded in reality! All creatures can test comp, or  
test whether they are in a simulation, if they have enough time and  
external clues. If we decide to keep comp, it is the emulation part  
which is testable.


Bruno




Brent



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-26 Thread meekerdb

On 5/26/2015 1:29 AM, Bruno Marchal wrote:


On 25 May 2015, at 22:49, meekerdb wrote:


On 5/25/2015 5:16 AM, Pierz wrote:



On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:

On 5/24/2015 4:09 AM, Pierz wrote:



On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:



On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:



On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal 
mar...@ulb.ac.be wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch 
jason...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
stat...@gmail.com
 wrote:


 On 19 May 2015 at 11:02, Jason Resch 
jason...@gmail.com
wrote:

  I think you're not taking into account the level 
of the
functional
  substitution. Of course functionally equivalent 
silicon
and functionally
  equivalent neurons can (under functionalism) both
instantiate the same
  consciousness. But a calculator computing 2+3 
cannot
substitute for a
  human
  brain computing 2+3 and produce the same 
consciousness.

 In a gradual replacement the substitution must 
obviously
be at a level
 sufficient to maintain the function of the whole 
brain.
Sticking a
 calculator in it won't work.

  Do you think a Blockhead that was functionally
equivalent to you (it
  could
  fool all your friends and family in a Turing test
scenario into thinking
  it
  was intact you) would be conscious in the same way 
as you?

 Not necessarily, just as an actor may not be 
conscious in
the same way
 as me. But I suspect the Blockhead would be 
conscious;
the intuition
 that a lookup table can't be conscious is like the
intuition that an
 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A
lookup table has a
 bounded and very low degree of computational 
complexity:
all answers to all
 queries are answered in constant time.

 While the table itself may have an arbitrarily high
information content,
 what in the software of the lookup table program is 
there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup 
table is
immensely
large. It could be wrong, but I don't think it is 
obviously less
plausible than understanding emerging from a Turing 
machine
made of
tin cans.



The lookup table is intelligent or at least offers the
appearance of intelligence, but it makes the maximum 
possible
advantage of the space-time trade off:
http://en.wikipedia.org/wiki/Space–time_tradeoff

The tin-can Turing machine is unbounded in its potential
computational complexity, there's no reason to be a bio- or
silico-chauvinist against it. However, by definition, a 
lookup
table has near zero computational complexity, no retained 
state.


But it is counterfactually correct on a large range 
spectrum. Of
course, it has to be infinite to be genuinely
counterfactual-correct.


But the structure of the counterfactuals is identical 
regardless of
the inputs and 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread Pierz


On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:

  On 5/24/2015 4:09 AM, Pierz wrote:
  


 On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote: 



 On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:



 On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote: 



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be 
 wrote:
  

   On 19 May 2015, at 15:53, Jason Resch wrote:

  

 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stat...@gmail.com wrote:

  On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com
  wrote: 

 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the 
 functional
   substitution. Of course functionally equivalent silicon and 
 functionally
   equivalent neurons can (under functionalism) both instantiate 
 the same
   consciousness. But a calculator computing 2+3 cannot substitute 
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a 
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to 
 you (it
   could
   fool all your friends and family in a Turing test scenario into 
 thinking
   it
   was intact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same 
 way
  as me. But I suspect the Blockhead would be conscious; the 
 intuition
  that a lookup table can't be conscious is like the intuition that 
 an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table 
 has a
  bounded and very low degree of computational complexity: all 
 answers to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information 
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is 
 immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.
  

 
  The lookup table is intelligent or at least offers the appearance of 
 intelligence, but it makes the maximum possible advantage of the 
 space-time 
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

  The tin-can Turing machine is unbounded in its potential 
 computational complexity, there's no reason to be a bio- or 
 silico-chauvinist against it. However, by definition, a lookup table has 
 near zero computational complexity, no retained state. 


But it is counterfactually correct on a large range spectrum. Of 
 course, it has to be infinite to be genuinely counterfactual-correct. 
  
 
  But the structure of the counterfactuals is identical regardless of 
 the inputs and outputs in its lookup table. If you replaced all of its 
 outputs with random strings, would that change its consciousness? What if 
 there existed a special decoding book, which was a one-time-pad that could 
 decode its random answers? Would the existence of this book make it more 
 conscious than if this book did not exist? If there is zero information 
 content in the outputs returned by the lookup table it might as well 
 return 
 all X characters as its response to any query, but then would any 
 program 
 that just returns a string of X's be conscious?

 I really like this argument, even though I once came up with a 
 (bad) attempt to refute it. I wish it received more attention because it 
 does cast quite a penetrating light on the issue. What you're suggesting is 
 effectively the cache pattern in computer programming, where we trade 
 memory resources for computational resources. Instead of repeating a 
 resource-intensive computation, we store the inputs and outputs for later 
 regurgitation. 
  

  How is this different from a movie recording of brain activity (which 
 most on the list seem to agree is not conscious)? The lookup table is just 
 a really long recording, only we use the input to determine to which 
 section of the recording to fast-forward/rewind to.

It isn't different to a recording. But here's the thing: when we ask 
 if the lookup machine is conscious, we are kind of implicitly asking: is it 
 having an experience *now*, while I ask the question and see a response. 
 But what does such a question actually even mean? If a computation is 
 underway in time when the machine responds, then I assume it is having a 
 co-temporal experience. But the lookup machine idea forces us to the 
 realization that different observers' subjective experiences (the pure 
 qualia) can't be mapped to one another in objective time. 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread Stathis Papaioannou
On Monday, May 25, 2015, meekerdb meeke...@verizon.net wrote:

 On 5/24/2015 4:27 PM, Stathis Papaioannou wrote:

 On 25 May 2015 at 07:51, meekerdb meeke...@verizon.net wrote:

 On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:

  In a virtual environment, destroying the body destroys the
 consciousness,
 but both are actually due to the underlying computations.


 How can those thumped know it's virtual.  A virtual environment with
 virtual
 people doing virtual actions seems to make virtual virtually
 meaningless.

 The people won't necessarily know, but they could know, as it could be
 revealed by the programmers or deduced from some programming glitch
 (as in the film The Thirteenth Floor). But I don't think it makes a
 difference if they know or not. The answer to the obvious objection
 that if you destroy the brain you destroy consciousness, so
 consciousness can't reside in Platonia, is that both the brain and
 consciousness could reside in Platonia.


 Wherever they reside, though, you have to explain how damaging the brain
 changes consciousness.  And if you can explain this relation in Platonia,
 why won't the same relation exist in Physicalia?


It could happen in both, but it is not evidence against a simulated reality
to say that consciousness seems to be dependent on the apparently physical
brain.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread Bruno Marchal


On 25 May 2015, at 02:06, Jason Resch wrote:




On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 23 May 2015, at 17:07, Jason Resch wrote:




On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:

On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou stath...@gmail.com 


 wrote:

 On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com  
wrote:


  I think you're not taking into account the level of the  
functional
  substitution. Of course functionally equivalent silicon and  
functionally
  equivalent neurons can (under functionalism) both instantiate  
the same
  consciousness. But a calculator computing 2+3 cannot  
substitute for a

  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously be at  
a level
 sufficient to maintain the function of the whole brain.  
Sticking a

 calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent  
to you (it

  could
  fool all your friends and family in a Turing test scenario  
into thinking

  it
  was intact you) would be conscious in the same way as you?

 Not necessarily, just as an actor may not be conscious in the  
same way
 as me. But I suspect the Blockhead would be conscious; the  
intuition
 that a lookup table can't be conscious is like the intuition  
that an

 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A lookup  
table has a
 bounded and very low degree of computational complexity: all  
answers to all

 queries are answered in constant time.

 While the table itself may have an arbitrarily high information  
content,

 what in the software of the lookup table program is there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is  
immensely

large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent or at least offers the appearance  
of intelligence, but it makes the maximum possible advantage of  
the space-time trade off: http://en.wikipedia.org/wiki/Space– 
time_tradeoff


The tin-can Turing machine is unbounded in its potential  
computational complexity, there's no reason to be a bio- or silico- 
chauvinist against it. However, by definition, a lookup table has  
near zero computational complexity, no retained state.


But it is counterfactually correct on a large range spectrum. Of  
course, it has to be infinite to be genuinely counterfactual-correct.



But the structure of the counterfactuals is identical regardless of  
the inputs and outputs in its lookup table. If you replaced all of  
its outputs with random strings, would that change its  
consciousness? What if there existed a special decoding book, which  
was a one-time-pad that could decode its random answers? Would the  
existence of this book make it more conscious than if this book did  
not exist? If there is zero information content in the outputs  
returned by the lookup table it might as well return all X  
characters as its response to any query, but then would any program  
that just returns a string of X's be conscious?


A lookup table might have some primitive conscious, but I think any  
consciousness it has would be more or less the same regardless of  
the number of entries within that lookup table. With more entries,  
its information content grows, but it's capacity to process,  
interpret, or understand that information remains constant.


You can emulate the brain of Einstein with a (ridiculously  large)  
look-up table, assuming you are ridiculously patient---or we slow  
down your own brain so that you are as slow as einstein.

Is that incarnation a zombie?

Again, with comp, all incarnations are zombie, because bodies do  
not think. It is the abstract person which thinks, and in this case  
Einstein will still be defined by the simplest normal  
computations, which here, and only here, have taken the form of that  
unplausible giant Einstein look-up table emulation at the right  
level.



That last bit is the part I have difficulty with. How can a a single  
call to a lookup table ever be at the right level.


Actually, the Turing machine formalism is a type of look-up table: if  
you are scanning input i (big numbers describing all your current  
sensory entries) while you are in state  
q_169757243685173427379910054234647572376400064994542424646334345787910190034676754100687  
(a big number describing one of your many possible mental states),  
then change the state into q_888..99 and look what next.


By construction that system behaves self-referentially correctly, so  
it 
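
A minimal sketch of that table-driven reading of the formalism, with toy
states and a two-symbol alphabet (all names hypothetical): the finite
control is a lookup table from (state, scanned symbol) to (new state,
symbol to write, head move), and the machine runs by repeatedly
consulting it.

# Toy Turing machine driven purely by table lookup; it appends a 1 to a
# unary string, i.e. it computes n -> n+1.
DELTA = {
    ("q_scan", "1"): ("q_scan", "1", 1),   # keep moving right over the input
    ("q_scan", "_"): ("q_halt", "1", 0),   # blank found: write 1 and halt
}

def run(tape, state="q_scan", pos=0):
    while state != "q_halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        state, write, move = DELTA[(state, symbol)]   # the look-up step
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += move
    return "".join(tape)

print(run(list("111")))   # -> 1111

Scaled up to astronomically many states and symbols, the same scheme is
the giant look-up table under discussion.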

Re: Reconciling Random Neuron Firings and Fading Qualia errata

2015-05-25 Thread John Mikes
Brent:
would you include in your 'nomologics' all that stuff beyond our present
knowledge as well? Same with causal, but in reverse.
Probabilities depend on the borders we observe: change them and the results
change as well. The same as statistical, with added functionality.

Sorry for my haphazard formulation - I could have done better.

John Mikes

On Sat, May 23, 2015 at 3:07 AM, meekerdb meeke...@verizon.net wrote:

  OOPS. I meant ...randomly means NOT in accordance...

 On 5/22/2015 11:15 PM, meekerdb wrote:


 And note that in this context randomly means in accordance with
 nomologically determined causal probabilities.  It doesn't necessarily mean
 deterministically.

 Brent






Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread Bruno Marchal


On 24 May 2015, at 23:51, meekerdb wrote:


On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:



On Monday, May 25, 2015, meekerdb meeke...@verizon.net wrote:
On 5/24/2015 1:52 AM, Bruno Marchal wrote:
Again, with comp, all incarnations are zombie, because bodies do  
not think. It is the abstract person which thinks


But a few thumps on the body and the abstract person won't think  
either.  So far as we have observered *only* bodies think.  If comp  
implies the contrary isn't that so much the worse for comp.


In a virtual environment, destroying the body destroys the  
consciousness, but both are actually due to the underlying  
computations.


How can those thumped know it's virtual.  A virtual environment with  
virtual people doing virtual actions seems to make virtual 
virtually meaningless.


It is the difference between life and second life. Reality, and  
relative dreams.




Brent



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread meekerdb

On 5/25/2015 5:16 AM, Pierz wrote:



On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:

On 5/24/2015 4:09 AM, Pierz wrote:



On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:



On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:



On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal 
mar...@ulb.ac.be wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch 
jason...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
stat...@gmail.com
 wrote:


 On 19 May 2015 at 11:02, Jason Resch 
jason...@gmail.com
wrote:

  I think you're not taking into account the level 
of the
functional
  substitution. Of course functionally equivalent 
silicon
and functionally
  equivalent neurons can (under functionalism) both
instantiate the same
  consciousness. But a calculator computing 2+3 
cannot
substitute for a
  human
  brain computing 2+3 and produce the same 
consciousness.

 In a gradual replacement the substitution must 
obviously be
at a level
 sufficient to maintain the function of the whole 
brain.
Sticking a
 calculator in it won't work.

  Do you think a Blockhead that was functionally
equivalent to you (it
  could
  fool all your friends and family in a Turing test
scenario into thinking
  it
  was intact you) would be conscious in the same way 
as you?

 Not necessarily, just as an actor may not be 
conscious in
the same way
 as me. But I suspect the Blockhead would be 
conscious; the
intuition
 that a lookup table can't be conscious is like the
intuition that an
 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A
lookup table has a
 bounded and very low degree of computational 
complexity: all
answers to all
 queries are answered in constant time.

 While the table itself may have an arbitrarily high
information content,
 what in the software of the lookup table program is 
there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup 
table is
immensely
large. It could be wrong, but I don't think it is 
obviously less
plausible than understanding emerging from a Turing 
machine
made of
tin cans.



The lookup table is intelligent or at least offers the 
appearance
of intelligence, but it makes the maximum possible 
advantage of
the space-time trade off:
http://en.wikipedia.org/wiki/Space–time_tradeoff

The tin-can Turing machine is unbounded in its potential
computational complexity, there's no reason to be a bio- or
silico-chauvinist against it. However, by definition, a 
lookup
table has near zero computational complexity, no retained 
state.


But it is counterfactually correct on a large range 
spectrum. Of
course, it has to be infinite to be genuinely 
counterfactual-correct.


But the structure of the counterfactuals is identical 
regardless of the
inputs and outputs in its lookup table. If you replaced all of 
its
outputs with random strings, would that 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread meekerdb

On 5/25/2015 10:48 AM, Stathis Papaioannou wrote:



On Monday, May 25, 2015, meekerdb meeke...@verizon.net wrote:


On 5/24/2015 4:27 PM, Stathis Papaioannou wrote:

On 25 May 2015 at 07:51, meekerdb meeke...@verizon.net wrote:

On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:

In a virtual environment, destroying the body destroys the 
consciousness,
but both are actually due to the underlying computations.


How can those thumped know it's virtual.  A virtual environment 
with virtual
people doing virtual actions seems to make virtual virtually 
meaningless.

The people won't necessarily know, but they could know, as it could be
revealed by the programmers or deduced from some programming glitch
(as in the film The Thirteenth Floor). But I don't think it makes a
difference if they know or not. The answer to the obvious objection
that if you destroy the brain you destroy consciousness, so
consciousness can't reside in Platonia, is that both the brain and
consciousness could reside in Platonia.


Wherever they reside, though, you have to explain how damaging the brain 
changes consciousness.  And if you can explain this relation in Platonia, 
why won't the same relation exist in Physicalia?


It could happen in both, but it is not evidence against a simulated reality to say that 
consciousness seems to be dependent on the apparently physical brain.


A reality is only simulated relative to some more real reality - so I'm not sure what 
the point of referring to a simulated reality is.  The question is whether the move to 
Platonia really solves the mind-body problem or just rephrases it as the body-mind problem.


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread meekerdb

On 5/25/2015 11:55 AM, Bruno Marchal wrote:


On 24 May 2015, at 23:51, meekerdb wrote:


On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:



On Monday, May 25, 2015, meekerdb meeke...@verizon.net wrote:


On 5/24/2015 1:52 AM, Bruno Marchal wrote:

Again, with comp, all incarnations are zombie, because bodies do not 
think. It
is the abstract person which thinks


But a few thumps on the body and the abstract person won't think either.  
So far as we have observed, *only* bodies think.  If comp implies the 
contrary, isn't that so much the worse for comp?


In a virtual environment, destroying the body destroys the consciousness, but both are 
actually due to the underlying computations.


How can those thumped know it's virtual.  A virtual environment with virtual people 
doing virtual actions seems to make virtual virtually meaningless.


It is the difference between life and second life. Reality, and relative dreams.


That's the question: can such a difference be meaningful if the world is defined by 
conscious experience? In the examples you give, the virtual is distinguished because it is 
not as rich, complete, and consistent as real life.


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-25 Thread Pierz


On Tuesday, May 26, 2015 at 6:49:51 AM UTC+10, Brent wrote:

  On 5/25/2015 5:16 AM, Pierz wrote:
  


 On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote: 

  On 5/24/2015 4:09 AM, Pierz wrote:
  


 On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote: 



 On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:



 On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote: 



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be 
 wrote:
  

   On 19 May 2015, at 15:53, Jason Resch wrote:

  

 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stat...@gmail.com wrote:

  On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com
  wrote: 

 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the 
 functional
   substitution. Of course functionally equivalent silicon and 
 functionally
   equivalent neurons can (under functionalism) both instantiate 
 the same
   consciousness. But a calculator computing 2+3 cannot substitute 
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a 
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to 
 you (it
   could
   fool all your friends and family in a Turing test scenario into 
 thinking
   it
   was intact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the 
 same way
  as me. But I suspect the Blockhead would be conscious; the 
 intuition
  that a lookup table can't be conscious is like the intuition that 
 an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup 
 table has a
  bounded and very low degree of computational complexity: all 
 answers to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information 
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is 
 immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.
  

 
  The lookup table is intelligent or at least offers the appearance 
 of intelligence, but it makes the maximum possible advantage of the 
 space-time trade off: 
 http://en.wikipedia.org/wiki/Space–time_tradeoff

  The tin-can Turing machine is unbounded in its potential 
 computational complexity, there's no reason to be a bio- or 
 silico-chauvinist against it. However, by definition, a lookup table has 
 near zero computational complexity, no retained state. 


But it is counterfactually correct on a large range spectrum. Of 
 course, it has to be infinite to be genuinely counterfactual-correct. 
  
 
  But the structure of the counterfactuals is identical regardless of 
 the inputs and outputs in its lookup table. If you replaced all of its 
 outputs with random strings, would that change its consciousness? What if 
 there existed a special decoding book, which was a one-time-pad that 
 could 
 decode its random answers? Would the existence of this book make it more 
 conscious than if this book did not exist? If there is zero information 
 content in the outputs returned by the lookup table it might as well 
 return 
 all X characters as its response to any query, but then would any 
 program 
 that just returns a string of X's be conscious?

 I really like this argument, even though I once came up with a 
 (bad) attempt to refute it. I wish it received more attention because it 
 does cast quite a penetrating light on the issue. What you're suggesting 
 is 
 effectively the cache pattern in computer programming, where we trade 
 memory resources for computational resources. Instead of repeating a 
 resource-intensive computation, we store the inputs and outputs for later 
 regurgitation. 
  

  How is this different from a movie recording of brain activity (which 
 most on the list seem to agree is not conscious)? The lookup table is just 
 a really long recording, only we use the input to determine to which 
 section of the recording to fast-forward/rewind to.

It isn't different to a recording. But here's the thing: when we ask 
 if the lookup machine is conscious, we are kind of implicitly asking: is it 
 having an experience *now*, while I ask the question and see a response. 
 But what does such a question actually even mean? If a computation is 
 underway in time when the machine responds, then I assume it is having a 
 co-temporal experience. But the lookup machine idea forces us to the 
 realization that 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Pierz


On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:



 On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com javascript: 
 wrote:



 On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be 
 wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stat...@gmail.com wrote:

 On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com
  wrote:

 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and 
 functionally
   equivalent neurons can (under functionalism) both instantiate the 
 same
   consciousness. But a calculator computing 2+3 cannot substitute 
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a 
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to 
 you (it
   could
   fool all your friends and family in a Turing test scenario into 
 thinking
   it
   was intact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same 
 way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table 
 has a
  bounded and very low degree of computational complexity: all answers 
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information 
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of 
 intelligence, but it makes the maximum possible advantage of the 
 space-time 
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational 
 complexity, there's no reason to be a bio- or silico-chauvinist against 
 it. 
 However, by definition, a lookup table has near zero computational 
 complexity, no retained state. 


 But it is counterfactually correct on a large range spectrum. Of 
 course, it has to be infinite to be genuinely counterfactual-correct. 


 But the structure of the counterfactuals is identical regardless of the 
 inputs and outputs in its lookup table. If you replaced all of its outputs 
 with random strings, would that change its consciousness? What if there 
 existed a special decoding book, which was a one-time-pad that could decode 
 its random answers? Would the existence of this book make it more conscious 
 than if this book did not exist? If there is zero information content in 
 the outputs returned by the lookup table it might as well return all X 
 characters as its response to any query, but then would any program that 
 just returns a string of X's be conscious?

 I really like this argument, even though I once came up with a (bad) 
 attempt to refute it. I wish it received more attention because it does 
 cast quite a penetrating light on the issue. What you're suggesting is 
 effectively the cache pattern in computer programming, where we trade 
 memory resources for computational resources. Instead of repeating a 
 resource-intensive computation, we store the inputs and outputs for later 
 regurgitation. 


 How is this different from a movie recording of brain activity (which most 
 on the list seem to agree is not conscious)? The lookup table is just a 
 really long recording, only we use the input to determine to which section 
 of the recording to fast-forward/rewind to.

 It isn't different to a recording. But here's the thing: when we ask if 
the lookup machine is conscious, we are kind of implicitly asking: is it 
having an experience *now*, while I ask the question and see a response. 
But what does such a question actually even mean? If a computation is 
underway in time when the machine responds, then I assume it is having a 
co-temporal experience. But the lookup machine idea forces us to the 
realization that different observers' subjective experiences (the pure 
qualia) can't be mapped to one another in objective time. The experiences 
themselves are pure abstractions and don't occur in time and space. How 
could we ever measure the time at which a quale occurs? Sure we 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread LizR
On 24 May 2015 at 17:40, Pierz pier...@gmail.com wrote:

 I really like this argument, even though I once came up with a (bad)
 attempt to refute it. I wish it received more attention because it does
 cast quite a penetrating light on the issue. What you're suggesting is
 effectively the cache pattern in computer programming, where we trade
 memory resources for computational resources. Instead of repeating a
 resource-intensive computation, we store the inputs and outputs for later
 regurgitation. The cached results 'store' intelligence in an analogous way
 to the storage of energy as potential energy.


Another valid comparison, in my opinion, is the storage of intelligence
in DNA. Instinctive behaviour coded in DNA is effectively substituting a
lookup table for work-it-out-on-the-fly type intelligence.


 We effectively flatten out time (the computational process) into the
 spatial dimension (memory). The cache pattern does not allow us to cheat
 the law that intelligent work must be done in order to produce intelligent
 results, it merely allows us to do that work at a time that suits us. The
 intelligence has been transferred into the spatial relationships built into
 the table, intelligent relationships we can only discover by doing the
 computations. The lookup table is useless without its index.


It's also akin to the MGA, where subsequent re-running of the original
computation fails to add anything to it (like more consciousness).


 So what your thought experiment points out is pretty fascinating: that
 intelligence can be manifested spatially as well as temporally, contrary to
 our common-sense intuition, and that the intelligence of a machine does not
 have to be in real time. That actually supports the MGA if anything -
 because computations are abstractions outside of time and space. We should
 not forget that the memory resources required to duplicate any kind of
 intelligent computer would be absolutely enormous, and the lookup table,
 although structurally simple, would embody just a vast amount of
 computational intelligence.

 I think you anticipated my comment above but I'm not 100% sure if we're
saying the same thing so I'll let it stand, just in case we aren't :-)



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Pierz


On Sunday, May 24, 2015 at 8:18:41 PM UTC+10, Liz R wrote:

 On 24 May 2015 at 17:40, Pierz pie...@gmail.com javascript: wrote:

 I really like this argument, even though I once came up with a (bad) 
 attempt to refute it. I wish it received more attention because it does 
 cast quite a penetrating light on the issue. What you're suggesting is 
 effectively the cache pattern in computer programming, where we trade 
 memory resources for computational resources. Instead of repeating a 
 resource-intensive computation, we store the inputs and outputs for later 
 regurgitation. The cached results 'store' intelligence in an analogous way 
 to the storage of energy as potential energy. 


 Another valid comparison, in my opinion, is the storage of intelligence 
 in DNA. Instinctive behaviour coded in DNA is effectively substituting a 
 lookup table for work-it-out-on-the-fly type intelligence.
  

Yes, I nearly said that myself - the intelligence encoded in the organism 
by evolution.
 

 We effectively flatten out time (the computational process) into the 
 spatial dimension (memory). The cache pattern does not allow us to cheat 
 the law that intelligent work must be done in order to produce intelligent 
 results, it merely allows us to do that work at a time that suits us. The 
 intelligence has been transferred into the spatial relationships built into 
 the table, intelligent relationships we can only discover by doing the 
 computations. The lookup table is useless without its index. 


 It's also akin to the MGA, where subsequent re-running of the original 
 computation fails to add anything to it (like more consciousness).
  

 So what your thought experiment points out is pretty fascinating: that 
 intelligence can be manifested spatially as well as temporally, contrary to 
 our common-sense intuition, and that the intelligence of a machine does not 
 have to be in real time. That actually supports the MGA if anything - 
 because computations are abstractions outside of time and space. We should 
 not forget that the memory resources required to duplicate any kind of 
 intelligent computer would be absolutely enormous, and the lookup table, 
 although structurally simple, would embody just a vast amount of 
 computational intelligence. 

 I think you anticipated my comment above but I'm not 100% sure if we're 
 saying the same thing so I'll let it stand, just in case we aren't :-)


We are :) 
 



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Bruno Marchal


On 23 May 2015, at 08:06, Pierz wrote:




On Saturday, May 23, 2015 at 2:14:07 AM UTC+10, Bruno Marchal wrote:

On 22 May 2015, at 10:34, Stathis Papaioannou wrote:




On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:

On 21 May 2015, at 01:53, Stathis Papaioannou wrote:




On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com  
wrote:

snip
Partial zombies are absurd because they make the concept of  
consciousness meaningless.


OK.


Random neurons, separated neurons and platonic computations  
sustaining consciousness are merely weird, not absurd.


Not OK. Random neurons, like the movie, simply do not compute.  
They only mimic the contingent (and logically unrelated) physical  
activity related to a special implementation of a computation. If  
you change the initial computer, that physical activity could mimic  
another computation. Or, as Maudlin showed, you can change the  
physical activity arbitrarily and still mimic the initial  
computation: so the relation between a computation and the physical  
activity of the computer running it is accidental,  
not logical.


Platonic computation, on the contrary, does compute (in the  
original sense of computation).


You're assuming not only that computationalism is true, but that  
it's exclusively true.


That is part of the definition, and that is why I often add that we  
have to say yes to the doctor in virtue of surviving qua  
computatio. I have often tried to explain that someone can believe  
in both the Church thesis and saying yes to the doctor, yet believe  
in this not for the reason that the artificial brain will run the  
relevant computation, but because he believes in the Virgin Mary,  
and he believes she is good and compassionate, so that if the  
artificial brain is good enough she will save your soul and  
reinstall it in the digital physical brain. That is *not*  
computationalism. It is computationalism + magic.





Go back several steps and consider why we think computationalism  
might be true in the first place. The usual start is that computers  
can behave intelligently and substitute for processes in the brain.


OK.



So if something else can behave intelligently and substitute for  
processes in the brain, it's not absurd to consider that it might  
be conscious. It's begging the question to say that it can't be  
conscious because it isn't a computation.



The movie and the lucky random brain are different in that respect.

The movie doesn't behave as if it were conscious. I can tell the  
movie that mustard is a mineral, or an animal, and the movie does not  
react. It fails the Turing test, and the zombie test. There are  
neither computations nor intelligent behaviors relevant to the  
consciousness associated with the boolean circuit.


The unimaginably lucky random brain, on the contrary, does behave  
in a way that makes a person act like a p-zombie or a conscious  
individual. We don't see any difference from a conscious being, by  
definition/construction.


Well, if a random event mimics by chance a computation, that means  
at the least that the computation exists (in arithmetic), and I  
suggest we associate consciousness with it.


I suspect you're wrong. In the case of the recording, the movie  
might still pass the Turing test if we invert the flukey coincidence  
and allow the possibility the questioner might ask questions that  
exactly correspond to the responses that the film happens to output.  
I remember watching a Blues Brothers midnight screening once, and  
all the cult fans who'd go every week would yell things out at  
certain points in the action and the actors would appear to respond  
to their shouted questions and interjections. In this case the  
illusion of conversation was constructed, but it could occur by  
chance. Would the recording then be conscious? In both the random  
and fixed response cases, there is no actual link other than  
coincidence between inputs and outputs, and this is the key. The  
random brain is not responding or processing inputs at all, any more  
than the film is. So the key to these types of thought experiments  
is whether intelligence and consciousness are functions of the  
responsive relationship between inputs and outputs, or merely the  
appearance of responsiveness. I think we have to say that actual  
responsiveness is required, and therefore fearless commit to the  
idea that a zombie is indeed 'possible', if the infinitely unlikely  
is possible! I think that arguments based on 'infinite  
improbability' (white rabbits) must surely be the weakest of all  
possible arguments in philosophy, and should really just be  
dismissed out of hand. Just as Deutsch argues that there are no  
worlds in the multiverse where magic works, only some worlds where  
it has worked and will never work again, we can admit the  
possibility of being fooled into believing that a randomly jerking  
zombie is conscious, or a 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Bruno Marchal


On 23 May 2015, at 17:07, Jason Resch wrote:




On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:

On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou stath...@gmail.com 


 wrote:

 On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:

  I think you're not taking into account the level of the  
functional
  substitution. Of course functionally equivalent silicon and  
functionally
  equivalent neurons can (under functionalism) both instantiate  
the same
  consciousness. But a calculator computing 2+3 cannot  
substitute for a

  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously be at a  
level

 sufficient to maintain the function of the whole brain. Sticking a
 calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent to  
you (it

  could
  fool all your friends and family in a Turing test scenario  
into thinking

  it
  was intact you) would be conscious in the same way as you?

 Not necessarily, just as an actor may not be conscious in the  
same way
 as me. But I suspect the Blockhead would be conscious; the  
intuition
 that a lookup table can't be conscious is like the intuition  
that an

 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A lookup  
table has a
 bounded and very low degree of computational complexity: all  
answers to all

 queries are answered in constant time.

 While the table itself may have an arbitrarily high information  
content,

 what in the software of the lookup table program is there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is  
immensely

large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent, or at least offers the appearance of intelligence, but it takes maximum possible advantage of the space-time trade off: http://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff


The tin-can Turing machine is unbounded in its potential computational complexity; there's no reason to be a bio- or silico-chauvinist against it. However, by definition, a lookup table has near zero computational complexity and no retained state.
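
As a minimal sketch of the contrast being described (Python, with purely illustrative names and inputs), the same input-output behaviour can be produced either by actually computing an answer or by indexing into a precomputed table in constant time; the table only covers the finite domain it was built for:

    # Space-time trade-off sketch: the same I/O behaviour realised by
    # computing vs. by looking up precomputed answers.

    def compute_reply(query: str) -> str:
        # Stand-in for a genuine computation over the input.
        return "echo:" + query[::-1]

    # Precompute every answer for a small, finite input domain.
    DOMAIN = ["hello", "is mustard a mineral?", "2+3"]
    LOOKUP_TABLE = {q: compute_reply(q) for q in DOMAIN}

    def table_reply(query: str) -> str:
        # No processing of the input beyond using it as an index:
        # constant-time retrieval, no retained state between calls.
        return LOOKUP_TABLE[query]

    for q in DOMAIN:
        assert compute_reply(q) == table_reply(q)   # outputs indistinguishable

    # Outside the precomputed domain the difference shows: the computation
    # still answers, the table cannot (it would have to be infinite to be
    # counterfactually correct on every possible query).
    compute_reply("a query nobody anticipated")
    try:
        table_reply("a query nobody anticipated")
    except KeyError:
        pass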


But it is counterfactually correct over a large spectrum of inputs. Of course, it has to be infinite to be genuinely counterfactually correct.



But the structure of the counterfactuals is identical regardless of  
the inputs and outputs in its lookup table. If you replaced all of  
its outputs with random strings, would that change its  
consciousness? What if there existed a special decoding book, which  
was a one-time-pad that could decode its random answers? Would the  
existence of this book make it more conscious than if this book did  
not exist? If there is zero information content in the outputs  
returned by the lookup table it might as well return all X  
characters as its response to any query, but then would any program  
that just returns a string of X's be conscious?
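
A small sketch of the one-time-pad point (Python, illustrative values only): for any random output string whatsoever, a decoding book can be constructed after the fact that maps it onto any desired answer, which is why the random outputs carry no information on their own:

    import os

    def make_pad(random_output: bytes, intended_answer: bytes) -> bytes:
        # The decoding book: pad = output XOR answer, built after the fact.
        assert len(random_output) == len(intended_answer)
        return bytes(o ^ a for o, a in zip(random_output, intended_answer))

    def decode(random_output: bytes, pad: bytes) -> bytes:
        return bytes(o ^ p for o, p in zip(random_output, pad))

    answer = b"mustard is a condiment"
    random_output = os.urandom(len(answer))   # the table's random-string reply
    pad = make_pad(random_output, answer)     # the special decoding book

    # Any random output decodes to any chosen answer, given the right book.
    assert decode(random_output, pad) == answer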


A lookup table might have some primitive consciousness, but I think any consciousness it has would be more or less the same regardless of the number of entries within that lookup table. With more entries, its information content grows, but its capacity to process, interpret, or understand that information remains constant.


You can emulate the brain of Einstein with a (ridiculously large) look-up table, assuming you are ridiculously patient---or we slow down your own brain so that you are as slow as this Einstein.

Is that incarnation a zombie?

Again, with comp, all incarnations are zombies, because bodies do not think. It is the abstract person which thinks, and in this case Einstein will still be defined by the simplest normal computations, which here, and only here, have taken the form of that implausible giant Einstein look-up table emulation at the right level.








Does an ant trained to perform the lookup table's operation become more aware when placed in a vast library than when placed on a small bookshelf, to perform the identical function?


Are you not making Searle's level confusion?

I see the close parallel, but I hope not. The input to the ant, when interpreted as a binary string, is a number that tells the ant how many pages to walk past to get to the page containing the answer; where the ant stops, the paper is read. I don't see how this system, consisting of the ant and the library, is conscious. The system is intelligent, in that it provides meaningful answers to queries, but it processes no information besides evaluating the magnitude of an 
input 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Pierz


On Sunday, May 24, 2015 at 7:15:41 PM UTC+10, Bruno Marchal wrote:


 On 23 May 2015, at 08:06, Pierz wrote:



 On Saturday, May 23, 2015 at 2:14:07 AM UTC+10, Bruno Marchal wrote:


 On 22 May 2015, at 10:34, Stathis Papaioannou wrote:



 On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 21 May 2015, at 01:53, Stathis Papaioannou wrote:



 On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
 snip
 Partial zombies are absurd because they make the concept of 
 consciousness meaningless. 


 OK. 


 Random neurons, separated neurons and platonic computations sustaining 
 consciousness are merely weird, not absurd.


 Not OK. Random neurone, like the movie, simply does not compute. They 
 only mimic the contingent (and logically unrelated) physical activity 
 related to a special implementation of a computation. If you change the 
 initial computer, that physical activity could mimic another computation. 
 Or, like Maudlin showed: you can change the physical activity arbitrarily, 
 and still mimic the initial computation: so the relation between the 
 computation and the physical activity of the computer running that 
 computation is accidental, not logical.

 Platonic computation, on the contrary, does compute (in the original 
 sense of computation).


 You're assuming not only that computationalism is true, but that it's 
 exclusively true. 


 That is part of the definition, and that is why I often add that we have 
 to say yes to the doctor, in virtue of surviving qua computatio. I 
 have often tried to explain that someone can believe in both the Church thesis 
 and saying yes to the doctor, but still believe in this not for the reason 
 that the artificial brain will run the relevant computation, but because he 
 believes in the Virgin Mary, and he believes she is good and compassionate, 
 so that if the artificial brain is good enough she will save your soul and 
 reinstall it in the digital physical brain. That is *not* computationalism. 
 It is computationalism + magic. 



 Go back several steps and consider why we think computationalism might be 
 true in the first place. The usual start is that computers can behave 
 intelligently and substitute for processes in the brain.


 OK.



 So if something else can behave intelligently and substitute 
 for processes in the brain, it's not absurd to consider that it might be 
 conscious. It's begging the question to say that it can't be conscious 
 because it isn't a computation.



 The movie and the lucky random brain are different in that respect.

 The movie doesn't behave as if it were conscious. I can tell the movie 
 that mustard is a mineral, or an animal, and the movie does not react. It fails 
 the Turing test, and the zombie test. There are neither computations 
 nor intelligent behaviors relevant to the consciousness associated with 
 the boolean circuit.

 The unimaginably lucky random brain, on the contrary, does behave in a 
 way that makes a person act like a p-zombie or a conscious individual. We 
 don't see the difference with a conscious being, by definition/construction.

 Well, if a random event mimics by chance a computation, that means at the 
 least that the computation exists (in arithmetic), and I suggest 
 associating consciousness to it. 


 I suspect you're wrong. In the case of the recording, the movie might 
 still pass the Turing test *if we invert the flukey coincidence* and 
 allow the possibility the questioner might ask questions that exactly 
 correspond to the responses that the film happens to output. I remember 
 watching a Blues Brothers midnight screening once, and all the cult fans 
 who'd go every week would yell things out at certain points in the action 
 and the actors would appear to respond to their shouted questions and 
 interjections. In this case the illusion of conversation was constructed, 
 but it could occur by chance. Would the recording then be conscious? In 
 both the random and fixed response cases, there is no actual link other 
 than coincidence between inputs and outputs, and this is the key. The 
 random brain is not responding or processing inputs at all, any more than 
 the film is. So the key to these types of thought experiments is whether 
 intelligence and consciousness are functions of the responsive relationship 
 between inputs and outputs, or merely the appearance of responsiveness. I 
 think we have to say that actual responsiveness is required, and therefore 
 fearlessly commit to the idea that a zombie is indeed 'possible', if the 
 infinitely unlikely is possible! I think that arguments based on 'infinite 
 improbability' (white rabbits) must surely be the weakest of all possible 
 arguments in philosophy, and should really just be dismissed out of hand. 
 Just as Deutsch argues that there are no worlds in the multiverse where 
 magic works, only some worlds where it has worked and will never work 
 again, we can admit the possibility of being 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Stathis Papaioannou
On Monday, May 25, 2015, meekerdb meeke...@verizon.net wrote:

  On 5/24/2015 1:52 AM, Bruno Marchal wrote:

 Again, with comp, all incarnations are zombie, because bodies do not
 think. It is the abstract person which thinks


 But a few thumps on the body and the abstract person won't think
 either.  So far as we have observed, *only* bodies think.  If comp implies
 the contrary isn't that so much the worse for comp.


In a virtual environment, destroying the body destroys the consciousness,
but both are actually due to the underlying computations.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread meekerdb

On 5/24/2015 4:09 AM, Pierz wrote:



On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:



On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com javascript: 
wrote:



On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be 
wrote:


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou
stat...@gmail.com wrote:

On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com 
wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
stat...@gmail.com
 wrote:


 On 19 May 2015 at 11:02, Jason Resch 
jason...@gmail.com wrote:

  I think you're not taking into account the level of 
the functional
  substitution. Of course functionally equivalent 
silicon and
functionally
  equivalent neurons can (under functionalism) both 
instantiate
the same
  consciousness. But a calculator computing 2+3 cannot
substitute for a
  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously 
be at a
level
 sufficient to maintain the function of the whole brain. 
Sticking a
 calculator in it won't work.

  Do you think a Blockhead that was functionally 
equivalent to
you (it
  could
  fool all your friends and family in a Turing test 
scenario
into thinking
  it
  was intact you) would be conscious in the same way as 
you?

 Not necessarily, just as an actor may not be conscious 
in the
same way
 as me. But I suspect the Blockhead would be conscious; 
the intuition
 that a lookup table can't be conscious is like the 
intuition that an
 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A 
lookup
table has a
 bounded and very low degree of computational complexity: 
all
answers to all
 queries are answered in constant time.

 While the table itself may have an arbitrarily high 
information
content,
 what in the software of the lookup table program is there 
to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table 
is immensely
large. It could be wrong, but I don't think it is obviously 
less
plausible than understanding emerging from a Turing machine 
made of
tin cans.



The lookup table is intelligent or at least offers the 
appearance of
intelligence, but it makes the maximum possible advantage of the
space-time trade off: 
http://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff

The tin-can Turing machine is unbounded in its potential 
computational
complexity, there's no reason to be a bio- or silico-chauvinist 
against
it. However, by definition, a lookup table has near zero 
computational
complexity, no retained state.


But it is counterfactually correct on a large range spectrum. 
Of course,
it has to be infinite to be genuinely counterfactual-correct.


But the structure of the counterfactuals is identical regardless of 
the
inputs and outputs in its lookup table. If you replaced all of its 
outputs
with random strings, would that change its consciousness? What if 
there
existed a special decoding book, which was a one-time-pad that 
could decode
its random answers? Would the existence of this book make it more 
conscious
than if this book did not exist? If there is zero information 
content in the
outputs returned by the lookup table it might as well return all X
characters as its response to any query, but then would any program 
that
just returns a string of X's be conscious?

I really like this 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Bruno Marchal


On 24 May 2015, at 10:21, Stathis Papaioannou wrote:




On Saturday, May 23, 2015, Bruno Marchal marc...@ulb.ac.be wrote:

On 22 May 2015, at 10:34, Stathis Papaioannou wrote:




On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:

On 21 May 2015, at 01:53, Stathis Papaioannou wrote:




On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com  
wrote:

snip
Partial zombies are absurd because they make the concept of  
consciousness meaningless.


OK.


Random neurons, separated neurons and platonic computations  
sustaining consciousness are merely weird, not absurd.


Not OK. Random neurone, like the movie, simply does not compute.  
They only mimic the contingent (and logically unrelated) physical  
activity related to a special implementation of a computation. If  
you change the initial computer, that physical activity could mimic  
another computation. Or, like Maudlin showed: you can change the  
physical activity arbitrarily, and still mimic the initial  
computation: so the relation between computation, and the physical  
activity of the computer running that computation is accidental,  
nor logical.


Platonic computation, on the contrary, does compute (in the  
original sense of computation).


You're assuming not only that computationalism is true, but that  
it's exclusively true.


That is part of the definition, and that is why I add often that we  
have to say yes to the doctor, in virtue of surviving qua  
computatio.  I have often try to explain that someone can believe  
in both Church thesis, and say yes to the doctor, but still believe  
in this not for the reason that the artficial brain will run the  
relevant computation, but because he believes in the Virgin Mary,  
and he believes she is good and compensionate, so that if the  
artificial brain is good enough she will save your soul, and  
reinstall it in the digital physical brain. That is *not*  
computationalism. It is computationalism + magic.


But it's not obvious that *only* computations can sustain  
consciousness. Maybe appropriate random behaviour can do so as well.


Maybe you need to add the Holy Water and the Pope's benediction.


Alternatively, perhaps appropriate random behaviour would at least  
not destroy the consciousness that was there to begin with, because  
the real source of consciousness was neither the brain's normal  
physical activity nor the random activity.


That is like the movie. It keeps the relevant information to reinstall some instantaneous description of the boolean graph which was filmed.


Whatever possibly makes the counterfactual correct in some sufficiently large spectrum makes the auditor of the entity connected to the real person, which is an abstraction in Platonia.





I can't really see an alternative other than Russel's suggestion  
that the random activity might perfectly sustain consciousness until  
a certain point, then all consciousness would abruptly stop.


That would lead to the nonsensical partial zombie: those who say "I don't feel any difference."


I think you illustrate my point. If we want to avoid partial zombies, and keep the invariance of consciousness for the digital substitution, we must recognize and understand that invoking a role of matter, or a god, as a computation selector appears as a magical explanation, as if we should not isolate that measure with the means of computer science, and then test it with empirical physics.


Bruno




Go back several steps and consider why we think computationalism  
might be true in the first place. The usual start is that computers  
can behave intelligently and substitute for processes in the brain.


OK.



So if something else can behave intelligently and substitute for  
processes in the brain, it's not absurd to consider that it might  
be conscious. It's begging the question to say that it can't be  
conscious because it isn't a computation.



The movie and the lucky random brain are different in that respect.

The movie doesn't behave like if it was conscious. I can tell the  
movie that mustard is a mineral, or an animal, the movie does not  
react. it fails at the Turing tests, and the zombie test. There is  
neither computations, nor intelligent behaviors, relevant with the  
consciousness associated' to the boolean circuit.


The inimagibly lucky random brain, on the contrary,  does behave  
in a way making a person acting like a p-zombie or a conscious  
individual. We don't see the difference with a conscious being, by  
definition/construction.


Well, if a random event mimics by chance a computation, that means  
at the least that the computation exists (in arithmetic), and I  
suggest to associate consciousness to it.
Then if I have the way to learn that from time t1 to time t2 the  
neuron fired randomly, but correctly, by chance, that would only add  
to my suspicion that the physical activity has some relationship  
with consciousness. It is just a relative 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread meekerdb

On 5/23/2015 11:47 PM, Jason Resch wrote:
There is a common programming technique called memoization. Essentially building 
automatic caches for functions within a program. I wonder: would adding memoization to 
the functions implementing an AI eventually result in it becoming a zombie recording 
rather than a program, if it were fed all the same inputs a second time?
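
A minimal sketch of the memoization technique described above (Python; functools.lru_cache is the standard-library cache decorator, and the function and inputs shown are purely illustrative):

    from functools import lru_cache

    call_count = 0

    @lru_cache(maxsize=None)           # cache every (input -> output) pair seen
    def respond(stimulus: str) -> str:
        global call_count
        call_count += 1                # counts how often the real computation runs
        return stimulus.upper()        # stand-in for an expensive computation

    respond("hello")
    respond("world")
    respond("hello")                   # same input a second time: served from cache
    assert call_count == 2             # the underlying computation ran only twice

On the second presentation of an input, the decorated function replays a stored output rather than recomputing it, which is exactly the recording-versus-program question raised above.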


Isn't that exactly what happens when you learn to ride a bicycle, hit a tennis ball, touch 
type,...  Stuff you had to think about when you were learning becomes automatic - and 
subconscious.


I suspect a lot of these conundrums arise from taking consciousness to be fundamental, 
rather than a language related add-on to intelligence.


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread meekerdb

On 5/24/2015 1:52 AM, Bruno Marchal wrote:
Again, with comp, all incarnations are zombie, because bodies do not think. It is the 
abstract person which thinks


But a few thumps on the body and the abstract person won't think either.  So far as we 
have observed, *only* bodies think.  If comp implies the contrary, isn't that so much the 
worse for comp?


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Stathis Papaioannou
On Monday, May 25, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 24 May 2015, at 10:21, Stathis Papaioannou wrote:



 On Saturday, May 23, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 22 May 2015, at 10:34, Stathis Papaioannou wrote:



 On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 21 May 2015, at 01:53, Stathis Papaioannou wrote:



 On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
 snip
 Partial zombies are absurd because they make the concept of
 consciousness meaningless.


 OK.


 Random neurons, separated neurons and platonic computations sustaining
 consciousness are merely weird, not absurd.


 Not OK. Random neurone, like the movie, simply does not compute. They
 only mimic the contingent (and logically unrelated) physical activity
 related to a special implementation of a computation. If you change the
 initial computer, that physical activity could mimic another computation.
 Or, like Maudlin showed: you can change the physical activity arbitrarily,
 and still mimic the initial computation: so the relation between
 computation, and the physical activity of the computer running that
 computation is accidental, nor logical.

 Platonic computation, on the contrary, does compute (in the original
 sense of computation).


 You're assuming not only that computationalism is true, but that it's
 exclusively true.


 That is part of the definition, and that is why I add often that we have
 to say yes to the doctor, in virtue of surviving qua computatio.  I
 have often try to explain that someone can believe in both Church thesis,
 and say yes to the doctor, but still believe in this not for the reason
 that the artficial brain will run the relevant computation, but because he
 believes in the Virgin Mary, and he believes she is good and compensionate,
 so that if the artificial brain is good enough she will save your soul, and
 reinstall it in the digital physical brain. That is *not* computationalism.
 It is computationalism + magic.


 But it's not obvious that *only* computations can sustain consciousness.
 Maybe appropriate random behaviour can do so as well.


 May be you need to add the Holy Water and the Pope benediction.


I still don't understand why you think it's absurd that randomness can lead
to consciousness; not just wrong, but absurd.

 Alternatively, perhaps appropriate random behaviour would at least not
 destroy the consciousness that was there to begin with, because the real
 source of consciousness was neither the brain's normal physical activity
 nor the random activity.


 That is like the movie. It keeps the relevant information to reinstal some
 instantaneous description on the boolean graph which was filmed.

 Whatever possible makes the counterfactual correct in some sufficiently
 large spectrum, makes the audittor of the entity connected to the real
 person, which is an abstraction in Platonia.



 I can't really see an alternative other than Russel's suggestion that the
 random activity might perfectly sustain consciousness until a certain
 point, then all consciousness would abruptly stop.


 That would lead to the non sensical partial zombie. Those who says I
 don't feel any difference.

 I think you illustrate my point.  If we want to avoid partial zombie, and
 keep the invariance of consciousness for the digital substitution, we must
 recognize and understand that invoking a rome of matter or god as a
 computation selector, appears as a magical explanation, like if we should
 not isolate that measure with the means of computer science, and then test
 it with the empirical physics.

 Bruno




 Go back several steps and consider why we think computationalism might be
 true in the first place. The usual start is that computers can behave
 intelligently and substitute for processes in the brain.


 OK.



 So if something else can behave intelligently and substitute
 for processes in the brain, it's not absurd to consider that it might be
 conscious. It's begging the question to say that it can't be conscious
 because it isn't a computation.



 The movie and the lucky random brain are different in that respect.

 The movie doesn't behave like if it was conscious. I can tell the movie
 that mustard is a mineral, or an animal, the movie does not react. it fails
 at the Turing tests, and the zombie test. There is neither computations,
 nor intelligent behaviors, relevant with the consciousness associated' to
 the boolean circuit.

 The inimagibly lucky random brain, on the contrary,  does behave in a
 way making a person acting like a p-zombie or a conscious individual. We
 don't see the difference with a conscious being, by definition/construction.

 Well, if a random event mimics by chance a computation, that means at the
 least that the computation exists (in arithmetic), and I suggest to
 associate consciousness to it.
 Then if I have the way to learn that from time t1 to time t2 the neuron
 fired randomly, but 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Russell Standish
On Sun, May 24, 2015 at 07:19:46PM +0200, Bruno Marchal wrote:
 
 On 24 May 2015, at 10:21, Stathis Papaioannou wrote:
 
 
 I can't really see an alternative other than Russel's suggestion
 that the random activity might perfectly sustain consciousness
 until a certain point, then all consciousness would abruptly stop.
 
 That would lead to the non sensical partial zombie. Those who says
 I don't feel any difference.
 

Not at all. My suggestion is that there wouldn't be any partial
zombies, just normally functioning consciousness, and full zombies,
with respect to Chalmers' fading qualia experiment, due to network effects.

Obviously, with functionalism (and computationalism), consciousness is
retained throughout, and no zombies appear. Chalmers was trying to
show an absurdity with non-functionalism, and I don't think it works,
except insofar as full zombies are absurd.


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Jason Resch
On Sun, May 24, 2015 at 12:36 PM, meekerdb meeke...@verizon.net wrote:

 On 5/23/2015 11:47 PM, Jason Resch wrote:

 There is a common programming technique called memoization. Essentially
 building automatic caches for functions within a program. I wonder: would
 adding memoization to the functions implementing an AI eventually result
 in it becoming a zombie recording rather than a program, if it were fed all
 the same inputs a second time?


 Isn't that exactly what happens when you learn to ride a bicycle, hit a
 tennis ball, touch type,...  Stuff you had to think about when you were
 learning becomes automatic - and subconscious.


Interesting. I wonder if there is a connection.

Jason



 I suspect a lot of these conundrums arise from taking consciousness to be
 fundamental, rather than a language related add-on to intelligence.

 Brent




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Stathis Papaioannou
On Monday, May 25, 2015, Russell Standish li...@hpcoders.com.au wrote:

 On Sun, May 24, 2015 at 07:19:46PM +0200, Bruno Marchal wrote:
 
  On 24 May 2015, at 10:21, Stathis Papaioannou wrote:
 
  
  I can't really see an alternative other than Russel's suggestion
  that the random activity might perfectly sustain consciousness
  until a certain point, then all consciousness would abruptly stop.
 
  That would lead to the non sensical partial zombie. Those who says
  I don't feel any difference.
 

 Not at all. My suggestion is that there wouldn't be any partial
 zombies, just normally functioning consciousness, and full zombies,
 with respect to Chalmers fading qualia experiment, due to network effects.

 Obviously, with functionalism (and computationalism), consciousness is
 retained throughout, and no zombies appear. Chalmers was trying to
 show an absurdity with non-functionalism, and I don't think it works,
 except insofar as full zombies are absurd.


It could work the way you say, but it would mean the replacement neurons
would support consciousness, since if the neurons were simply removed and
not replaced consciousness would fade. So it would be a partial proof of
computationalism, since the electronic neurons would not be inert with
respect to consciousness, but could not sustain it in large enough numbers.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread meekerdb

On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:



On Monday, May 25, 2015, meekerdb meeke...@verizon.net mailto:meeke...@verizon.net 
wrote:


On 5/24/2015 1:52 AM, Bruno Marchal wrote:

Again, with comp, all incarnations are zombie, because bodies do not 
think. It is
the abstract person which thinks


But a few thumps on the body and the abstract person won't think either. 
So far as
we have observed, *only* bodies think.  If comp implies the contrary isn't 
that so
much the worse for comp.


In a virtual environment, destroying the body destroys the consciousness, but both are 
actually due to the underlying computations.


How can those thumped know it's virtual?  A virtual environment with virtual people doing 
virtual actions seems to make virtual virtually meaningless.


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Jason Resch
On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 23 May 2015, at 17:07, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com
  wrote:

 On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stath...@gmail.com
  wrote:
 
  On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate the
 same
   consciousness. But a calculator computing 2+3 cannot substitute for
 a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to you
 (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was intact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table
 has a
  bounded and very low degree of computational complexity: all answers
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it makes the maximum possible advantage of the space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state.


 But it is counterfactually correct on a large range spectrum. Of course,
 it has to be infinite to be genuinely counterfactual-correct.


 But the structure of the counterfactuals is identical regardless of the
 inputs and outputs in its lookup table. If you replaced all of its outputs
 with random strings, would that change its consciousness? What if there
 existed a special decoding book, which was a one-time-pad that could decode
 its random answers? Would the existence of this book make it more conscious
 than if this book did not exist? If there is zero information content in
 the outputs returned by the lookup table it might as well return all X
 characters as its response to any query, but then would any program that
 just returns a string of X's be conscious?

 A lookup table might have some primitive conscious, but I think any
 consciousness it has would be more or less the same regardless of the
 number of entries within that lookup table. With more entries, its
 information content grows, but it's capacity to process, interpret, or
 understand that information remains constant.


 You can emulate the brain of Einstein with a (ridiculously  large) look-up
 table, assuming you are ridiculously patient---or we slow down your own
 brain so that you are as slow as einstein.
 Is that incarnation a zombie?

 Again, with comp, all incarnations are zombie, because bodies do not
 think. It is the abstract person which thinks, and in this case Einstein
 will still be defined by the simplest normal computations, which here,
 and only here, have taken the form of that unplausible giant Einstein
 look-up table emulation at the right level.


That last bit is the part I have difficulty with. How can a single call to a lookup table ever be at the right level? It seems that, with only one lookup, the table must always operate (by definition) at the highest level, which is probably not low enough to be the right level. On the other hand, if we are talking about using lookup tables to implement and, or, nand, not, etc., then I can see a CPU based on lookup tables for these low-level operations being used to implement a conscious program. The type of lookup table I have a problem ascribing intelligence to is the type Ned Block described as being capable of passing a Turing test, which received 
text inputs and 
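
To make the level distinction concrete, here is a small sketch (Python, purely illustrative) of the second case: tiny truth-table lookups for individual gates, composed so that the work lives in the wiring of many lookups rather than in one giant table keyed on the whole query:

    # Gate-level lookup tables: each gate is a tiny truth table, and the
    # "program" lives in how the lookups are composed.
    NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    def not_(a):    return NAND[(a, a)]
    def and_(a, b): return not_(NAND[(a, b)])
    def or_(a, b):  return NAND[(not_(a), not_(b))]
    def xor_(a, b): return or_(and_(a, not_(b)), and_(not_(a), b))

    def half_adder(a, b):
        # Many small lookups per input, rather than one lookup over a table
        # indexed by the entire query string.
        return xor_(a, b), and_(a, b)   # (sum, carry)

    assert half_adder(1, 1) == (0, 1)
    assert half_adder(1, 0) == (1, 0)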

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Stathis Papaioannou
On 25 May 2015 at 07:51, meekerdb meeke...@verizon.net wrote:
 On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:

  In a virtual environment, destroying the body destroys the consciousness,
  but both are actually due to the underlying computations.


 How can those thumped know it's virtual.  A virtual environment with virtual
 people doing virtual actions seems to make virtual virtually meaningless.

The people won't necessarily know, but they could know, as it could be
revealed by the programmers or deduced from some programming glitch
(as in the film The Thirteenth Floor). But I don't think it makes a
difference if they know or not. The answer to the obvious objection
that if you destroy the brain you destroy consciousness, so
consciousness can't reside in Platonia, is that both the brain and
consciousness could reside in Platonia.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Jason Resch
On Sun, May 24, 2015 at 6:09 AM, Pierz pier...@gmail.com wrote:



 On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:



 On Sun, May 24, 2015 at 12:40 AM, Pierz pie...@gmail.com wrote:



 On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be
 wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou 
 stat...@gmail.com wrote:

 On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com
  wrote:

 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the
 functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate
 the same
   consciousness. But a calculator computing 2+3 cannot substitute
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to
 you (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was intact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same
 way
  as me. But I suspect the Blockhead would be conscious; the
 intuition
  that a lookup table can't be conscious is like the intuition that
 an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table
 has a
  bounded and very low degree of computational complexity: all
 answers to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it makes the maximum possible advantage of the 
 space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against 
 it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state.


 But it is counterfactually correct on a large range spectrum. Of
 course, it has to be infinite to be genuinely counterfactual-correct.


 But the structure of the counterfactuals is identical regardless of the
 inputs and outputs in its lookup table. If you replaced all of its outputs
 with random strings, would that change its consciousness? What if there
 existed a special decoding book, which was a one-time-pad that could decode
 its random answers? Would the existence of this book make it more conscious
 than if this book did not exist? If there is zero information content in
 the outputs returned by the lookup table it might as well return all X
 characters as its response to any query, but then would any program that
 just returns a string of X's be conscious?

 I really like this argument, even though I once came up with a (bad)
 attempt to refute it. I wish it received more attention because it does
 cast quite a penetrating light on the issue. What you're suggesting is
 effectively the cache pattern in computer programming, where we trade
 memory resources for computational resources. Instead of repeating a
 resource-intensive computation, we store the inputs and outputs for later
 regurgitation.


 How is this different from a movie recording of brain activity (which
 most on the list seem to agree is not conscious)? The lookup table is just
 a really long recording, only we use the input to determine to which
 section of the recording to fast-forward/rewind to.

 It isn't different to a recording. But here's the thing: when we ask if
 the lookup machine is conscious, we are kind of implicitly asking: is it
 having an experience *now*, while I ask the question and see a response.
 But what does such a question actually even mean? If a computation is
 underway in time when the machine responds, then I assume it is having a
 co-temporal experience. But the lookup machine idea forces us to the
 realization that different observers' subjective experiences (the pure
 qualia) can't be mapped to one another in objective time.


Interesting observation. Kind of like how time is completely incomparable
between two universes, consciousness (existing as a platonic 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread meekerdb

On 5/24/2015 4:27 PM, Stathis Papaioannou wrote:

On 25 May 2015 at 07:51, meekerdb meeke...@verizon.net wrote:

On 5/24/2015 11:28 AM, Stathis Papaioannou wrote:


In a virtual environment, destroying the body destroys the consciousness,
but both are actually due to the underlying computations.


How can those thumped know it's virtual.  A virtual environment with virtual
people doing virtual actions seems to make virtual virtually meaningless.

The people won't necessarily know, but they could know, as it could be
revealed by the programmers or deduced from some programming glitch
(as in the film The Thirteenth Floor). But I don't think it makes a
difference if they know or not. The answer to the obvious objection
that if you destroy the brain you destroy consciousness, so
consciousness can't reside in Platonia, is that both the brain and
consciousness could reside in Platonia.


Wherever they reside, though, you have to explain how damaging the brain changes 
consciousness.  And if you can explain this relation in Platonia, why won't the same 
relation exist in Physicalia?


Brent








Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Jason Resch
On Sun, May 24, 2015 at 12:40 AM, Pierz pier...@gmail.com wrote:



 On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stat...@gmail.com
  wrote:

 On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stat...@gmail.com
  wrote:

 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate the
 same
   consciousness. But a calculator computing 2+3 cannot substitute
 for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a
 level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to you
 (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was intact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same
 way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table
 has a
  bounded and very low degree of computational complexity: all answers
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it makes the maximum possible advantage of the space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state.


 But it is counterfactually correct on a large range spectrum. Of course,
 it has to be infinite to be genuinely counterfactual-correct.


 But the structure of the counterfactuals is identical regardless of the
 inputs and outputs in its lookup table. If you replaced all of its outputs
 with random strings, would that change its consciousness? What if there
 existed a special decoding book, which was a one-time-pad that could decode
 its random answers? Would the existence of this book make it more conscious
 than if this book did not exist? If there is zero information content in
 the outputs returned by the lookup table it might as well return all X
 characters as its response to any query, but then would any program that
 just returns a string of X's be conscious?

 I really like this argument, even though I once came up with a (bad)
 attempt to refute it. I wish it received more attention because it does
 cast quite a penetrating light on the issue. What you're suggesting is
 effectively the cache pattern in computer programming, where we trade
 memory resources for computational resources. Instead of repeating a
 resource-intensive computation, we store the inputs and outputs for later
 regurgitation.


How is this different from a movie recording of brain activity (which most
on the list seem to agree is not conscious)? The lookup table is just a
really long recording, only we use the input to determine which section
of the recording to fast-forward or rewind to.



 The cached results 'store' intelligence in an analogous way to the storage
 of energy as potential energy. We effectively flatten out time (the
 computational process) into the spatial dimension (memory).


But what if intelligence were not used to create the lookup table? Does the
history/creation event, which may be arbitrarily far in the past, really
play any role in a present level of consciousness? Or the reverse: does the
future creation of a one-time-pad retroactively restore intelligence
(consciousness?) to what was previously always a dumb program?


 The cache pattern does not allow us to cheat the law that intelligent work
 must be done in order to produce intelligent results, it merely allows us
 to do that work at a time that suits us. The intelligence has been
 transferred into the spatial relationships built into the 

Reconciling Random Neuron Firings and Fading Qualia

2015-05-24 Thread Stathis Papaioannou
On Saturday, May 23, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 22 May 2015, at 10:34, Stathis Papaioannou wrote:



 On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 21 May 2015, at 01:53, Stathis Papaioannou wrote:



 On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
 snip
 Partial zombies are absurd because they make the concept of consciousness
 meaningless.


 OK.


 Random neurons, separated neurons and platonic computations sustaining
 consciousness are merely weird, not absurd.


 Not OK. Random neurone, like the movie, simply does not compute. They
 only mimic the contingent (and logically unrelated) physical activity
 related to a special implementation of a computation. If you change the
 initial computer, that physical activity could mimic another computation.
 Or, like Maudlin showed: you can change the physical activity arbitrarily,
 and still mimic the initial computation: so the relation between
 computation, and the physical activity of the computer running that
 computation is accidental, nor logical.

 Platonic computation, on the contrary, does compute (in the original
 sense of computation).


 You're assuming not only that computationalism is true, but that it's
 exclusively true.


 That is part of the definition, and that is why I add often that we have
 to say yes to the doctor, in virtue of surviving qua computatio.  I
 have often try to explain that someone can believe in both Church thesis,
 and say yes to the doctor, but still believe in this not for the reason
 that the artficial brain will run the relevant computation, but because he
 believes in the Virgin Mary, and he believes she is good and compensionate,
 so that if the artificial brain is good enough she will save your soul, and
 reinstall it in the digital physical brain. That is *not* computationalism.
 It is computationalism + magic.


But it's not obvious that *only* computations can sustain consciousness.
Maybe appropriate random behaviour can do so as well. Alternatively,
perhaps appropriate random behaviour would at least not destroy the
consciousness that was there to begin with, because the real source of
consciousness was neither the brain's normal physical activity nor the
random activity.

I can't really see an alternative other than Russel's suggestion that the
random activity might perfectly sustain consciousness until a certain
point, then all consciousness would abruptly stop.

 Go back several steps and consider why we think computationalism might be
 true in the first place. The usual start is that computers can behave
 intelligently and substitute for processes in the brain.


 OK.



 So if something else can behave intelligently and substitute for processes
 in the brain, it's not absurd to consider that it might be conscious. It's
 begging the question to say that it can't be conscious because it isn't a
 computation.



 The movie and the lucky random brain are different in that respect.

 The movie doesn't behave like if it was conscious. I can tell the movie
 that mustard is a mineral, or an animal, the movie does not react. it fails
 at the Turing tests, and the zombie test. There is neither computations,
 nor intelligent behaviors, relevant with the consciousness associated' to
 the boolean circuit.

 The inimagibly lucky random brain, on the contrary,  does behave in a
 way making a person acting like a p-zombie or a conscious individual. We
 don't see the difference with a conscious being, by definition/construction.

 Well, if a random event mimics by chance a computation, that means at the
 least that the computation exists (in arithmetic), and I suggest to
 associate consciousness to it.
 Then if I have the way to learn that from time t1 to time t2 the neuron
 fired randomly, but correctly, by chance, that would only add to my
 suspicion that the physical activity has some relationship with
 consciousness. It is just a relative implementation of the abstract
 computation. That one should have its normal measure guarantied by the
 statistical sum on all computations below its substitution level.

 Now, the movie was a constructive object. A brain which is random but
 lucky is equivalent with a white rabbit event, and using it in a thought
 experiment might not convey so much. In this case, it seems to make my
 point that we need very special event, infinite luck or Virgin Mary, to
 resist the consequence of the idea that our consciousness is invariant for
 Turing-equivalence. Matter becomes then the symptom that some numbers win
 some (self) measure theoretical game. Comp suggests we can explain the
 appearances and relative persistence of physical realities from a
 statistical bio or psycho or theo -logy. And that is confirmed by the
 interview of the Löbian machine (by the results of Gödel, Löb, Solovay,
 Visser, ...).

 Bruno













 --
 Stathis Papaioannou


Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-23 Thread Pierz


On Saturday, May 23, 2015 at 2:14:07 AM UTC+10, Bruno Marchal wrote:


 On 22 May 2015, at 10:34, Stathis Papaioannou wrote:



 On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 21 May 2015, at 01:53, Stathis Papaioannou wrote:



 On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
 snip
 Partial zombies are absurd because they make the concept of consciousness 
 meaningless. 


 OK. 


 Random neurons, separated neurons and platonic computations sustaining 
 consciousness are merely weird, not absurd.


 Not OK. Random neurone, like the movie, simply does not compute. They 
 only mimic the contingent (and logically unrelated) physical activity 
 related to a special implementation of a computation. If you change the 
 initial computer, that physical activity could mimic another computation. 
 Or, like Maudlin showed: you can change the physical activity arbitrarily, 
 and still mimic the initial computation: so the relation between 
 computation, and the physical activity of the computer running that 
 computation is accidental, nor logical.

 Platonic computation, on the contrary, does compute (in the original 
 sense of computation).


 You're assuming not only that computationalism is true, but that it's 
 exclusively true. 


 That is part of the definition, and that is why I add often that we have 
 to say yes to the doctor, in virtue of surviving qua computatio.  I 
 have often try to explain that someone can believe in both Church thesis, 
 and say yes to the doctor, but still believe in this not for the reason 
 that the artficial brain will run the relevant computation, but because he 
 believes in the Virgin Mary, and he believes she is good and compensionate, 
 so that if the artificial brain is good enough she will save your soul, and 
 reinstall it in the digital physical brain. That is *not* computationalism. 
 It is computationalism + magic. 



 Go back several steps and consider why we think computationalism might be 
 true in the first place. The usual start is that computers can behave 
 intelligently and substitute for processes in the brain.


 OK.



 So if something else can behave intelligently and substitute for processes 
 in the brain, it's not absurd to consider that it might be conscious. It's 
 begging the question to say that it can't be conscious because it isn't a 
 computation.



 The movie and the lucky random brain are different in that respect.

  The movie doesn't behave as if it were conscious. I can tell the movie
  that mustard is a mineral, or an animal, and the movie does not react. It
  fails the Turing test, and the zombie test. There are neither computations
  nor intelligent behaviors relevant to the consciousness associated with
  the boolean circuit.

  The unimaginably lucky random brain, on the contrary, does behave in a
  way that makes a person act like a p-zombie or a conscious individual. We
  don't see the difference with a conscious being, by definition/construction.

  Well, if a random event mimics a computation by chance, that means at the
  least that the computation exists (in arithmetic), and I suggest we
  associate consciousness to it.


I suspect you're wrong. In the case of the recording, the movie might still 
pass the Turing test *if we invert the flukey coincidence* and allow the 
possibility the questioner might ask questions that exactly correspond to 
the responses that the film happens to output. I remember watching a Blues 
Brothers midnight screening once, and all the cult fans who'd go every week 
would yell things out at certain points in the action and the actors would 
appear to respond to their shouted questions and interjections. In this 
case the illusion of conversation was constructed, but it could occur by 
chance. Would the recording then be conscious? In both the random and fixed 
response cases, there is no actual link other than coincidence between 
inputs and outputs, and this is the key. The random brain is not responding 
or processing inputs at all, any more than the film is. So the key to these 
types of thought experiments is whether intelligence and consciousness are 
functions of the responsive relationship between inputs and outputs, or 
merely the appearance of responsiveness. I think we have to say that actual 
responsiveness is required, and therefore fearlessly commit to the idea that 
a zombie is indeed 'possible', if the infinitely unlikely is possible! I 
think that arguments based on 'infinite improbability' (white rabbits) must 
surely be the weakest of all possible arguments in philosophy, and should 
really just be dismissed out of hand. Just as Deutsch argues that there are 
no worlds in the multiverse where magic works, only some worlds where it 
has worked and will never work again, we can admit the possibility of being 
fooled into believing that a randomly jerking zombie is conscious, or a 
typewriter-jabbering monkey is the new Shakespeare, but we only 

Re: Reconciling Random Neuron Firings and Fading Qualia errata

2015-05-23 Thread meekerdb

OOPS. I meant ...randomly means NOT in accordance...

On 5/22/2015 11:15 PM, meekerdb wrote:


And note that in this context randomly means in accordance with nomologically 
determined causal probabilities.  It doesn't necessarily mean deterministically.


Brent




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-23 Thread meekerdb

On 5/22/2015 11:06 PM, Pierz wrote:


I suspect you're wrong. In the case of the recording, the movie might still pass the 
Turing test /if we invert the flukey coincidence/ and allow the possibility the 
questioner might ask questions that exactly correspond to the responses that the film 
happens to output. I remember watching a Blues Brothers midnight screening once, and all 
the cult fans who'd go every week would yell things out at certain points in the action 
and the actors would appear to respond to their shouted questions and interjections. In 
this case the illusion of conversation was constructed, but it could occur by chance. 
Would the recording then be conscious? In both the random and fixed response cases, 
there is no actual link other than coincidence between inputs and outputs, and this is 
the key. The random brain is not responding or processing inputs at all, any more than 
the film is. So the key to these types of thought experiments is whether intelligence 
and consciousness are functions of the responsive relationship between inputs and 
outputs, or merely the appearance of responsiveness. I think we have to say that actual 
responsiveness is required, and therefore fearlessly commit to the idea that a zombie is 
indeed 'possible', if the infinitely unlikely is possible! I think that arguments based 
on 'infinite improbability' (white rabbits) must surely be the weakest of all possible 
arguments in philosophy, and should really just be dismissed out of hand.


I agree.

Just as Deutsch argues that there are no worlds in the multiverse where magic works, 
only some worlds where it has worked and will never work again, we can admit the 
possibility of being fooled into believing that a randomly jerking zombie is conscious, 
or a typewriter-jabbering monkey is the new Shakespeare, but we only need to wait 
another second to see that we were mistaken.


An objection I foresee is that the brain's neurons and their firing are its sole 
activity, and so if they fire randomly in a by-chance correct fashion, then the random 
brain's activity is indistinguishable from a real conscious brain's activity. This is a 
red herring. How could a randomly operating brain be identical to a healthy consciously 
functioning one? For the neurons to be firing randomly, something would need to be 
physically very different and wrong about that brain. If the brain was truly physically 
identical, then of course it would no longer be firing randomly


And note that in this context randomly means in accordance with nomologically determined 
causal probabilities.  It doesn't necessarily mean deterministically.


Brent

but would /actually/ be an organised, healthy brain, assembled by chance. Such a brain 
would pass the wait a second test because it would genuinely be responding in an 
organised manner.




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-23 Thread Jason Resch
On Sat, May 23, 2015 at 1:15 AM, meekerdb meeke...@verizon.net wrote:

  On 5/22/2015 11:06 PM, Pierz wrote:


  I suspect you're wrong. In the case of the recording, the movie might
 still pass the Turing test *if we invert the flukey coincidence* and
 allow the possibility the questioner might ask questions that exactly
 correspond to the responses that the film happens to output. I remember
 watching a Blues Brothers midnight screening once, and all the cult fans
 who'd go every week would yell things out at certain points in the action
 and the actors would appear to respond to their shouted questions and
 interjections. In this case the illusion of conversation was constructed,
 but it could occur by chance. Would the recording then be conscious? In
 both the random and fixed response cases, there is no actual link other
 than coincidence between inputs and outputs, and this is the key. The
 random brain is not responding or processing inputs at all, any more than
 the film is. So the key to these types of thought experiments is whether
 intelligence and consciousness are functions of the responsive relationship
 between inputs and outputs, or merely the appearance of responsiveness. I
 think we have to say that actual responsiveness is required, and therefore
 fearlessly commit to the idea that a zombie is indeed 'possible', if the
 infinitely unlikely is possible! I think that arguments based on 'infinite
 improbability' (white rabbits) must surely be the weakest of all possible
 arguments in philosophy, and should really just be dismissed out of hand.


 I agree.


I don't look at these rare scenarios as arguments, but as tools to refine
our understanding of computationalism. Look at what it has already done.
Bruno, Stathis, and myself -- all people who have argued for
computationalism, are now finding we disagree on issues raised by these
extreme scenarios: infinite luck, infinitely large lookup tables, etc.

Whatever else you might say of these extreme cases, I think they are
useful. They initiated this debate, which will hopefully lead to increased
clarity concerning computationalism.

Jason


  Just as Deutsch argues that there are no worlds in the multiverse where
 magic works, only some worlds where it has worked and will never work
 again, we can admit the possibility of being fooled into believing that a
 randomly jerking zombie is conscious, or a typewriter-jabbering monkey is
 the new Shakespeare, but we only need to wait another second to see that we
 were mistaken.

  An objection I foresee is that the brain's neurons and their firing are
 its sole activity, and so if they fire randomly in a by-chance correct
 fashion, then the random brain's activity is indistinguishable from a real
 conscious brain's activity. This is a red herring. How could a randomly
 operating brain be identical to a healthy consciously functioning one? For
 the neurons to be firing randomly, something would need to be physically
 very different and wrong about that brain. If the brain was truly
 physically identical, then of course it would no longer be firing randomly


 And note that in this context randomly means in accordance with
 nomologically determined causal probabilities.  It doesn't necessarily mean
 deterministically.

 Brent

  but would *actually* be an organised, healthy brain, assembled by
 chance. Such a brain would pass the wait a second test because it would
 genuinely be responding in an organised manner.






Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-23 Thread Jason Resch
On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com
 wrote:

 On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou 
 stath...@gmail.com
  wrote:
 
  On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate the
 same
   consciousness. But a calculator computing 2+3 cannot substitute for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to you
 (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was in fact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table has
 a
  bounded and very low degree of computational complexity: all answers to
 all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it makes the maximum possible advantage of the space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state.


 But it is counterfactually correct on a large range spectrum. Of course,
 it has to be infinite to be genuinely counterfactual-correct.


But the structure of the counterfactuals is identical regardless of the
inputs and outputs in its lookup table. If you replaced all of its outputs
with random strings, would that change its consciousness? What if there
existed a special decoding book, which was a one-time-pad that could decode
its random answers? Would the existence of this book make it more conscious
than if this book did not exist? If there is zero information content in
the outputs returned by the lookup table it might as well return all X
characters as its response to any query, but then would any program that
just returns a string of X's be conscious?
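
For concreteness, a minimal Python sketch of the one-time-pad point (the
byte strings and names below are only illustrative): for any fixed random
output r and any desired answer a, the pad k = r XOR a makes r decode to a,
so whatever meaning there is lives in the decoding book, not in the table.

import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # bytewise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

desired_answer = b"Mustard is a plant"                     # what we want to read
random_output = secrets.token_bytes(len(desired_answer))   # what the table emits
pad = xor(random_output, desired_answer)                   # the "decoding book"

# The random output carries no information by itself; the pad supplies it all.
assert xor(random_output, pad) == desired_answer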

A lookup table might have some primitive consciousness, but I think any
consciousness it has would be more or less the same regardless of the
number of entries within that lookup table. With more entries, its
information content grows, but its capacity to process, interpret, or
understand that information remains constant.



 Does an ant trained to perform the lookup table's operation become more
 aware when placed in a vast library than when placed on a small bookshelf,
 to perform the identical function?


 Are you not making Searle's level confusion?


I see the close parallel, but I hope not. The input to the ant, when
interpreted as a binary string, is a number that tells the ant how many
pages to walk past to get to the page containing the answer; where the ant
stops, the paper is read. I don't see how this system, consisting of the ant
and the library, is conscious. The system is intelligent, in that it
provides meaningful answers to queries, but it processes no information
besides evaluating the magnitude of an input (represented as a number) and
then jumping to that offset to read that memory location. Can there be
consciousness in a simple A implies B relation?
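
To make the ant-and-library system concrete, here is a rough Python sketch
(the canned answers and function names are hypothetical, purely for
illustration): the whole mechanism is a jump from the input's magnitude to a
stored page, in contrast with a system that derives its output from the input.

library = [
    "Mustard is a plant.",     # page 0
    "Paris.",                  # page 1
    "Yes, I can hear you.",    # page 2
]

def ant_lookup(query_number: int) -> str:
    # The only "processing": treat the input as an offset and read that page.
    return library[query_number]

def computed_answer(query_number: int) -> str:
    # By contrast, a computing system derives its output from the input.
    return f"{query_number} squared is {query_number ** 2}"

print(ant_lookup(1))         # Paris.
print(computed_answer(7))    # 7 squared is 49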


  The consciousness (if there is one) is the consciousness of the person,
 incarnated in the program. It is not the consciousness of the low level
 processor, no more than the physicality which supports the ant and the
 table.

 Again, with comp there is never any problem with all of this. The
 consciousness is an immaterial attribute of an immaterial program/machine's
 soul, which is defined exclusively by a class of true number relations.


While I can see certain very complex number relations leading to 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-23 Thread Jason Resch
On Wed, May 20, 2015 at 4:20 AM, Stathis Papaioannou stath...@gmail.com
wrote:



 On Tuesday, 19 May 2015, Jason Resch jasonre...@gmail.com wrote:

 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com
 wrote:



  Not necessarily, just as an actor may not be conscious in the same way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table
 has a
  bounded and very low degree of computational complexity: all answers
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it makes the maximum possible advantage of the space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state. Does an ant trained to perform the lookup
 table's operation become more aware when placed in a vast library than when
 placed on a small bookshelf, to perform the identical function?


 The ant is more aware than a neuron but it is not the ant's awareness that
 is at issue, it is the system of which the ant is a part.

 Step back and consider why we speculate that computationalism may be true.
 It is not because computers are complex like brains, or because brains
 carry out computations like computers. It is because animals with brains
 display intelligent behaviour, and computers also display intelligent
 behaviour, or at least might in the future. If Blockheads roamed the Earth
 answering all our questions, then surely we would debate whether they were
 conscious like us, whether they have feelings and whether they should be
 accorded human rights.



I would not torture a blockhead nor refuse to serve one in my restaurant,
but I might caution my daughter before marrying one that it might be a
zombie. I know I sound like Craig in saying this but I see a difference in
kind between the programs, even if they have an equivalence in inputs and
outputs at some high layer. There is, for instance, no society of mind
or modularity of mind as Minsky and Fodor spoke of. Here there is only a
top-level definition of high-level inputs and outputs.

Jason



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-23 Thread Pierz


On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



 On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 19 May 2015, at 15:53, Jason Resch wrote:



 On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stat...@gmail.com wrote:

 On 19 May 2015 at 14:45, Jason Resch jason...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou stat...@gmail.com wrote:
 
  On 19 May 2015 at 11:02, Jason Resch jason...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and 
 functionally
   equivalent neurons can (under functionalism) both instantiate the 
 same
   consciousness. But a calculator computing 2+3 cannot substitute for 
 a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to you 
 (it
   could
   fool all your friends and family in a Turing test scenario into 
 thinking
   it
  was in fact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table 
 has a
  bounded and very low degree of computational complexity: all answers 
 to all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information 
 content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of 
 intelligence, but it makes the maximum possible advantage of the space-time 
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational 
 complexity, there's no reason to be a bio- or silico-chauvinist against it. 
 However, by definition, a lookup table has near zero computational 
 complexity, no retained state. 


 But it is counterfactually correct on a large range spectrum. Of course, 
 it has to be infinite to be genuinely counterfactual-correct. 


 But the structure of the counterfactuals is identical regardless of the 
 inputs and outputs in its lookup table. If you replaced all of its outputs 
 with random strings, would that change its consciousness? What if there 
 existed a special decoding book, which was a one-time-pad that could decode 
 its random answers? Would the existence of this book make it more conscious 
 than if this book did not exist? If there is zero information content in 
 the outputs returned by the lookup table it might as well return all X 
 characters as its response to any query, but then would any program that 
 just returns a string of X's be conscious?

 I really like this argument, even though I once came up with a (bad) 
attempt to refute it. I wish it received more attention because it does 
cast quite a penetrating light on the issue. What you're suggesting is 
effectively the cache pattern in computer programming, where we trade 
memory resources for computational resources. Instead of repeating a 
resource-intensive computation, we store the inputs and outputs for later 
regurgitation. The cached results 'store' intelligence in an analogous way 
to the storage of energy as potential energy. We effectively flatten out 
time (the computational process) into the spatial dimension (memory). The 
cache pattern does not allow us to cheat the law that intelligent work must 
be done in order to produce intelligent results, it merely allows us to do 
that work at a time that suits us. The intelligence has been transferred 
into the spatial relationships built into the table, intelligent 
relationships we can only discover by doing the computations. The lookup 
table is useless without its index. So what your thought experiment points 
out is pretty fascinating: that intelligence can be manifested spatially as 
well as temporally, contrary to our common-sense intuition, and that the 
intelligence of a machine does not have to be in real time. That actually 
supports the MGA if anything - because computations are abstractions 
outside of time and space. We should not forget that the memory resources 
required to duplicate any kind of intelligent computer would be 
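
For what it's worth, a minimal Python sketch of the cache pattern described
above (slow_answer is just a stand-in for any expensive computation): the
intelligent work is done once, then flattened into a table and replayed in
constant time.

import time

def slow_answer(question: str) -> str:
    time.sleep(0.5)             # stand-in for doing the intelligent work
    return question.upper()     # stand-in for the computed reply

cache: dict[str, str] = {}

def cached_answer(question: str) -> str:
    if question not in cache:
        cache[question] = slow_answer(question)   # pay the computational cost once
    return cache[question]                        # afterwards: a pure lookup

cached_answer("is mustard a plant?")   # slow: the computation happens here
cached_answer("is mustard a plant?")   # fast: regurgitated from the table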

Reconciling Random Neuron Firings and Fading Qualia

2015-05-22 Thread Stathis Papaioannou
On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:


 On 21 May 2015, at 01:53, Stathis Papaioannou wrote:



 On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
 snip
 Partial zombies are absurd because they make the concept of consciousness
 meaningless.


 OK.


 Random neurons, separated neurons and platonic computations sustaining
 consciousness are merely weird, not absurd.


 Not OK. Random neurons, like the movie, simply do not compute. They only
 mimic the contingent (and logically unrelated) physical activity related to
 a special implementation of a computation. If you change the initial
 computer, that physical activity could mimic another computation. Or, as
 Maudlin showed: you can change the physical activity arbitrarily, and still
 mimic the initial computation: so the relation between computation, and the
 physical activity of the computer running that computation, is accidental,
 not logical.

 Platonic computation, on the contrary, does compute (in the original sense
 of computation).


You're assuming not only that computationalism is true, but that it's
exclusively true.

Go back several steps and consider why we think computationalism might be
true in the first place. The usual start is that computers can behave
intelligently and substitute for processes in the brain. So if something
else can behave intelligently and substitute for processes in the brain,
it's not absurd to consider that it might be conscious. It's begging the
question to say that it can't be conscious because it isn't a computation.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-22 Thread Bruno Marchal


On 22 May 2015, at 10:34, Stathis Papaioannou wrote:




On Friday, May 22, 2015, Bruno Marchal marc...@ulb.ac.be wrote:

On 21 May 2015, at 01:53, Stathis Papaioannou wrote:




On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
snip
Partial zombies are absurd because they make the concept of  
consciousness meaningless.


OK.


Random neurons, separated neurons and platonic computations  
sustaining consciousness are merely weird, not absurd.


Not OK. Random neurons, like the movie, simply do not compute.
They only mimic the contingent (and logically unrelated) physical
activity related to a special implementation of a computation. If
you change the initial computer, that physical activity could mimic
another computation. Or, as Maudlin showed: you can change the
physical activity arbitrarily, and still mimic the initial
computation: so the relation between computation, and the physical
activity of the computer running that computation, is accidental,
not logical.


Platonic computation, on the contrary, does compute (in the original  
sense of computation).


You're assuming not only that computationalism is true, but that  
it's exclusively true.


That is part of the definition, and that is why I often add that we
have to say yes to the doctor, in virtue of surviving qua
computatio. I have often tried to explain that someone can believe in
both the Church thesis and saying yes to the doctor, but believe this
not because the artificial brain will run the relevant
computation, but because he believes in the Virgin Mary, and he
believes she is good and compassionate, so that if the artificial
brain is good enough she will save his soul, and reinstall it in the
digital physical brain. That is *not* computationalism. It is
computationalism + magic.





Go back several steps and consider why we think computationalism  
might be true in the first place. The usual start is that computers  
can behave intelligently and substitute for processes in the brain.


OK.



So if something else can behave intelligently and substitute for  
processes in the brain, it's not absurd to consider that it might be  
conscious. It's begging the question to say that it can't be  
conscious because it isn't a computation.



The movie and the lucky random brain are different in that respect.

The movie doesn't behave as if it were conscious. I can tell the
movie that mustard is a mineral, or an animal, and the movie does not
react. It fails the Turing test, and the zombie test. There are
neither computations nor intelligent behaviors relevant to the
consciousness associated with the boolean circuit.


The unimaginably lucky random brain, on the contrary, does behave in
a way that makes a person act like a p-zombie or a conscious
individual. We don't see the difference with a conscious being, by
definition/construction.


Well, if a random event mimics a computation by chance, that means at
the least that the computation exists (in arithmetic), and I suggest
we associate consciousness to it.
Then if I had a way to learn that from time t1 to time t2 the
neuron fired randomly, but correctly, by chance, that would only add
to my suspicion that the physical activity has some relationship with
consciousness. It is just a relative implementation of the abstract
computation. That one should have its normal measure guaranteed by the
statistical sum on all computations below its substitution level.


Now, the movie was a constructive object. A brain which is random but
lucky is equivalent to a white rabbit event, and using it in a
thought experiment might not convey so much. In this case, it seems to
make my point that we need a very special event, infinite luck or the
Virgin Mary, to resist the consequence of the idea that our consciousness
is invariant for Turing-equivalence. Matter then becomes the symptom that
some numbers win some (self) measure theoretical game. Comp suggests
we can explain the appearances and relative persistence of physical
realities from a statistical bio or psycho or theo -logy. And that is
confirmed by the interview of the Löbian machine (by the results of
Gödel, Löb, Solovay, Visser, ...).


Bruno














--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-21 Thread Stathis Papaioannou
On 21 May 2015 at 14:27, Terren Suydam terren.suy...@gmail.com wrote:
 That's the point though, I do notice a change, from not being conscious of
 driving to becoming conscious of it again. And there's no denying the
 difference between the two.

But that's something that normally happens. There are many such
examples: a gradual change in your environment that you don't notice,
perhaps the fading of qualia over time as brains age and neurons die
that you don't notice. If you made a change to the brain and these
sorts of examples became more common or more pronounced, that would
involve a change in behaviour from the normal brain, as well as a
change in qualia. You would be driving with no awareness of driving,
and when asked about it you would say you were aware of driving, but
you would be either delusional or lying.


-- 
Stathis Papaioannou



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-21 Thread Terren Suydam
I take your point that the partial zombie in my example is a different
scenario from the neuron-replacement one... just trying to make sense of
what it could mean to notice being a zombie (after the fact), in a way
that's not absurd and doesn't suffer from the problem that the ability to
notice it invalidates the scenario.

Terren

On Thu, May 21, 2015 at 2:06 AM, Stathis Papaioannou stath...@gmail.com
wrote:

 On 21 May 2015 at 14:27, Terren Suydam terren.suy...@gmail.com wrote:
  That's the point though, I do notice a change, from not being conscious
 of
  driving to becoming conscious of it again. And there's no denying the
  difference between the two.

 But that's something that normally happens. There are many such
 examples: a gradual change in your environment that you don't notice,
 perhaps the fading of qualia over time as brains age and neurons die
 that you don't notice. If you made a change to the brain and these
 sorts of examples became more common or more pronounced, that would
 involve a change in behaviour from the normal brain, as well as a
 change in qualia. You would be driving with no awareness of driving,
 and when asked about it you would say you were aware of driving, but
 you would be either delusional or lying.


 --
 Stathis Papaioannou





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-21 Thread Bruno Marchal


On 21 May 2015, at 00:28, meekerdb wrote:


On 5/19/2015 10:44 AM, Bruno Marchal wrote:
The tin-can Turing machine is unbounded in its potential  
computational complexity, there's no reason to be a bio- or silico- 
chauvinist against it. However, by definition, a lookup table has  
near zero computational complexity, no retained state.


But it is counterfactually correct on a large range spectrum. Of  
course, it has to be infinite to be genuinely counterfactual-correct.


When programs are written that determine human safety (e.g. aircraft
flight controls) they are written in languages that can be proven
correct - in the sense of correctly implementing a specification and
not being able to prove a contradiction. Is there some theorem
that proves such provably correct programs cannot be intelligent?


They are intelligent, like a pebble, i.e. in the very large sense. You
will not hear such a program proving/claiming its own intelligence, or
making gossip about the pilot.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-21 Thread Bruno Marchal


On 21 May 2015, at 01:53, Stathis Papaioannou wrote:




On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:
snip
Partial zombies are absurd because they make the concept of  
consciousness meaningless.


OK.


Random neurons, separated neurons and platonic computations  
sustaining consciousness are merely weird, not absurd.


Not OK. Random neurons, like the movie, simply do not compute. They
only mimic the contingent (and logically unrelated) physical activity
related to a special implementation of a computation. If you change
the initial computer, that physical activity could mimic another
computation. Or, as Maudlin showed: you can change the physical
activity arbitrarily, and still mimic the initial computation: so the
relation between computation, and the physical activity of the
computer running that computation, is accidental, not logical.


Platonic computation, on the contrary, does compute (in the original  
sense of computation).


Bruno





--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Bruno Marchal


On 19 May 2015, at 04:21, Stathis Papaioannou wrote:


On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:


I think you're not taking into account the level of the functional
substitution. Of course functionally equivalent silicon and  
functionally
equivalent neurons can (under functionalism) both instantiate the  
same
consciousness. But a calculator computing 2+3 cannot substitute for  
a human

brain computing 2+3 and produce the same consciousness.


In a gradual replacement the substitution must obviously be at a level
sufficient to maintain the function of the whole brain. Sticking a
calculator in it won't work.

Do you think a Blockhead that was functionally equivalent to you  
(it could
fool all your friends and family in a Turing test scenario into  
thinking it

was in fact you) would be conscious in the same way as you?


Not necessarily, just as an actor may not be conscious in the same way
as me. But I suspect the Blockhead would be conscious; the intuition
that a lookup table can't be conscious is like the intuition that an
electric circuit can't be conscious.


Hmm, this is still a misleading way to talk.

Only a person can be conscious, and an electric circuit can support
the relative manifestation of that person, like a sufficiently big
look-up table.
Then it is obvious that movies and random neurons can't *support* a
person, even if at some stage they can be used to reinstantiate the
consciousness's ability to manifest. Nobody will say yes to a doctor
who proposes a random brain, or a movie.


Two cats in a room can represent the number two, but the number two
does not need the cats to make sense; it needs the two cats only to
manifest itself in that room. What lives in our brain is far more
complex than the number 2, but the association is similar, and we,
from our own perspective, are associated with many computations.


Bruno






--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Bruno Marchal


On 19 May 2015, at 15:53, Jason Resch wrote:




On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:

On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou stath...@gmail.com 


 wrote:

 On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:

  I think you're not taking into account the level of the  
functional
  substitution. Of course functionally equivalent silicon and  
functionally
  equivalent neurons can (under functionalism) both instantiate  
the same
  consciousness. But a calculator computing 2+3 cannot substitute  
for a

  human
  brain computing 2+3 and produce the same consciousness.

 In a gradual replacement the substitution must obviously be at a  
level

 sufficient to maintain the function of the whole brain. Sticking a
 calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent to  
you (it

  could
  fool all your friends and family in a Turing test scenario into  
thinking

  it
  was in fact you) would be conscious in the same way as you?

 Not necessarily, just as an actor may not be conscious in the  
same way
 as me. But I suspect the Blockhead would be conscious; the  
intuition
 that a lookup table can't be conscious is like the intuition that  
an

 electric circuit can't be conscious.


 I don't see an equivalency between those intuitions. A lookup  
table has a
 bounded and very low degree of computational complexity: all  
answers to all

 queries are answered in constant time.

 While the table itself may have an arbitrarily high information  
content,

 what in the software of the lookup table program is there to
 appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is immensely
large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent or at least offers the appearance of  
intelligence, but it makes the maximum possible advantage of the  
space-time trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff


The tin-can Turing machine is unbounded in its potential  
computational complexity, there's no reason to be a bio- or silico- 
chauvinist against it. However, by definition, a lookup table has  
near zero computational complexity, no retained state.


But it is counterfactually correct on a large range spectrum. Of  
course, it has to be infinite to be genuinely counterfactual-correct.



Does an ant trained to perform the lookup table's operation become
more aware when placed in a vast library than when placed on a small  
bookshelf, to perform the identical function?


Are you not making Searle's level confusion?  The consciousness (if
there is one) is the consciousness of the person, incarnated in the  
program. It is not the consciousness of the low level processor, no  
more than the physicality which supports the ant and the table.


Again, with comp there is never any problem with all of this. The  
consciousness is an immaterial attribute of an immaterial program/ 
machine's soul, which is defined exclusively by a class of true number  
relations.


The task of a 3p machine consists only in associating that  
consciousness to your local reality, but the body of the machine, or  
whatever 3p you can associate to the machine, is not conscious, and,  
to be sure, does not even exist as such.


I am aware it is hard to swallow, but there is no contradiction (so
far). And to keep comp, and avoid attributing a mind, or worse a
partial mind, to people without a brain, or to a movie (which handles
only very simple computations (projections)), I don't see any other
option (but fake magic).


It is perhaps helpful to see that this reversal makes a theory like  
Robinson arithmetic  into a TOE, and to start directly with it.


In that case everything we deal with is defined in terms of arithmetical
formulas, that is, in terms of 0, s, + and *.
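
For reference, one standard axiomatisation of Robinson arithmetic over 0, s,
+ and * runs as follows (the usual textbook formulation):

1. s(x) ≠ 0
2. s(x) = s(y) → x = y
3. x ≠ 0 → ∃y (x = s(y))
4. x + 0 = x
5. x + s(y) = s(x + y)
6. x * 0 = 0
7. x * s(y) = (x * y) + x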


The handling of the difference between objects and their descriptions is
made explicit, through the coding, or Gödel-numbering, or programming,
of the objects concerned.


For example, in the combinators, the number 0 is sometimes defined by
the combinator SKK; the expression SKK can be represented by a Gödel
number (in many different ways), attributing to 0 a rather big number
representing its definition, and this distinguishes well between 0 and its
representation (which will be a rather big number). Proceeding in this
way, we avoid the easy confusions between the object level and the
metalevel, and we can even mix them in a clean way, as the metalevel
embeds itself at the object level (which is what made Gödel and Löb's
arithmetical self-reference possible to start with).
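
As a toy illustration of that object-level/metalevel bookkeeping, here is
one possible coding in Python (the pairing scheme and the small atom codes
are an arbitrary illustrative choice, not any particular numbering from the
literature):

def pair(a: int, b: int) -> int:
    # Cantor pairing: codes a pair of numbers as a single number.
    return (a + b) * (a + b + 1) // 2 + b

def code(term) -> int:
    # Atoms get small codes; an application (f x) is coded as a shifted pair.
    if term == "K":
        return 1
    if term == "S":
        return 2
    f, x = term
    return 3 + pair(code(f), code(x))

skk = (("S", "K"), "K")    # the term SKK, read as ((S K) K)
print(code(skk))           # the number representing the expression SKK
print(code(skk) == 0)      # False: the code of SKK is not the number 0 it defines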


I would have believed this almost refutes comp, if there were not that
quantum-Everett confirmation of that admittedly shocking

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Bruno Marchal


On 19 May 2015, at 23:09, meekerdb wrote:


On 5/19/2015 11:47 AM, Terren Suydam wrote:
While I applaud IIT because it seems to be the first theory of  
consciousness that takes information architecture seriously (and  
thus situating theoretical considerations in a holistic rather than  
reductionist context) and to make predictions based on that, I  
agree with Aaronson's criticisms of it - namely, that IIT predicts  
that certain classes of computational systems that we intuitively  
would fail to see as conscious get measures of consciousness  
potentially higher than for human brains.


One key feature of consciousness as we know it is ongoing  
subjective experience. So a question I keep coming back to in my  
own thinking is, what kind of information architecture lends itself  
to a flow of data, such that if we assume that consciousness is  
how data feels as it's processed, we might imagine it could  
correspond to ongoing subjective experience?  It seems to me that  
such an architecture would have, at a bare minimum, its current  
state recursively fed back into itself to be processed in the next  
iteration. This happens in a trivial way in any processor chip (or  
lookup table AI for that matter). As such, there may be a very  
trivial sort of consciousness associated with a processor   
or lookup table, but this does not get us anywhere near  
understanding the richness of human consciousness.
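
A toy Python sketch of that bare-minimum architecture (the names and the
dictionary state are illustrative only): the state produced by each step is
fed back in as input to the next step, together with whatever new data
arrives.

def step(state: dict, observation: str) -> dict:
    # Stand-in for the system's real processing: fold the new observation
    # into the state produced by the previous iteration.
    history = state.get("history", [])
    return {"history": history + [observation], "last": observation}

state: dict = {}
for obs in ["red", "round", "on the table"]:
    state = step(state, obs)      # the current state is recursively fed back in

print(state["last"], len(state["history"]))   # on the table 3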


I think you need to consider what would be the benefit of this
recursion. How could it be naturally selected? Jeff Hawkins' idea
is that the brain continually tries to anticipate, at the perceptual
level and even in lower layers of the cerebral cortex.


I think that is the case, and it is the case for the []p & t, which
is a form of bet/anticipation. That idea is the basis of Helmholtz's
theory of perception, and many experiments in psychology confirm it.




  Then signals that don't match the prediction get broadcast more
widely at the next higher level, where they may have been anticipated
by other neurons. At the highest level (he says there are six in
the cortex as I recall) signals spread to language and visual
modules and one becomes aware of them or they spring to mind.
This would have the advantage of directing computational resources
to that which is novel, while leaving familiar things to learned
responses. To this I would add that the novel/conscious experience
is given some value, e.g. emotional weight, which makes it more or
less strongly remembered. And of course it isn't remembered like a
recording; it's synopsized in terms of its connection to other
remembered events. This memory is needed for learning from experience.
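
A toy numerical sketch of that anticipation scheme in Python (purely
illustrative, not a model of cortical detail): each layer predicts its
input, adapts, and passes upward only the part it failed to anticipate, so
it is the novelty that reaches the top.

class Layer:
    def __init__(self):
        self.prediction = 0.0

    def step(self, signal: float) -> float:
        error = signal - self.prediction     # the surprise at this level
        self.prediction += 0.5 * error       # learn to anticipate next time
        return error                         # only the unpredicted part goes up

layers = [Layer(), Layer(), Layer()]
for signal in [1.0, 1.0, 1.0, 5.0]:          # familiar input, then a novelty
    x = signal
    for layer in layers:
        x = layer.step(x)
    print(f"input {signal:.1f} reaches the top as {x:.2f}")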


OK.
Of course the loop Terren alluded to is built into the [], as is the
usual self-reference brought in by the use of the second recursion
theorem, or the Dx = xx trick. Such loops are the technical basis of all
the modal logics of self-reference. The second recursion theorem is
hidden in the proof of Solovay's theorem.
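
A small Python rendering of the Dx = xx trick (a toy version of the diagonal
construction, nothing more): D applies a piece of code to its own quotation,
and applying D to the text of D yields an expression that reproduces itself
when evaluated.

def D(x: str) -> str:
    # Dx = x applied to the quotation of x
    return f"({x})({x!r})"

d_source = 'lambda x: f"({x})({x!r})"'
e = D(d_source)            # "DD": D applied to its own text
print(eval(e) == e)        # True: evaluating the expression reproduces it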


Bruno




Brent



An architecture that supports that richness - the subjective  
experience, IOW, of an embodied sensing agent - would involve that  
recursion but at a holistic level. The entire system, potentially,  
including the system's informational representations of sensory  
data (whatever form that took) would be involved in that feedback  
loop. So the phi of IIT has a role here, as the processor/lookup  
table architecture has a low phi.


What is missing from phi is a measure of recursion - how the  
modules of a system feed back in such a way as to create a systemic,
recursive processing loop. My hunch is that this would address  
Aaronson's objections, as brains would score high on this measure  
but the systems that Aaronson complains about, such as systems  
that do nothing but apply a low-density parity-check code, or other  
simple transformations of their input data would score low due to  
lack of recursion.


Terren

On Tue, May 19, 2015 at 12:23 PM, meekerdb meeke...@verizon.net  
wrote:

On 5/19/2015 6:47 AM, Jason Resch wrote:



On Mon, May 18, 2015 at 11:54 PM, meekerdb meeke...@verizon.net  
wrote:

On 5/18/2015 9:45 PM, Jason Resch wrote:
Not necessarily, just as an actor may not be conscious in the  
same way
as me. But I suspect the Blockhead would be conscious; the  
intuition
that a lookup table can't be conscious is like the intuition that  
an

electric circuit can't be conscious.


I don't see an equivalency between those intuitions. A lookup  
table has a bounded and very low degree of computational  
complexity: all answers to all queries are answered in constant  
time.


While the table itself may have an arbitrarily high information  
content, what in the software of the lookup table program is  
there to appreciate/understand/know that information?


What is there is there in a neural network?


A computational state containing significant information 

Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Stathis Papaioannou
On Tuesday, 19 May 2015, Jason Resch jasonre...@gmail.com wrote:

On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com
 wrote:



  Not necessarily, just as an actor may not be conscious in the same way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table has
 a
  bounded and very low degree of computational complexity: all answers to
 all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



 The lookup table is intelligent or at least offers the appearance of
 intelligence, but it makes the maximum possible advantage of the space-time
 trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

 The tin-can Turing machine is unbounded in its potential computational
 complexity, there's no reason to be a bio- or silico-chauvinist against it.
 However, by definition, a lookup table has near zero computational
 complexity, no retained state. Does an ant trained to perform the lookup
 table's operation become more aware when placed in a vast library than when
 placed on a small bookshelf, to perform the identical function?


The ant is more aware than a neuron but it is not the ant's awareness that
is at issue, it is the system of which the ant is a part.

Step back and consider why we speculate that computationalism may be true.
It is not because computers are complex like brains, or because brains
carry out computations like computers. It is because animals with brains
display intelligent behaviour, and computers also display intelligent
behaviour, or at least might in the future. If Blockheads roamed the Earth
answering all our questions, then surely we would debate whether they were
conscious like us, whether they have feelings and whether they should be
accorded human rights.



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Stathis Papaioannou
On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:



 On Tue, May 19, 2015 at 12:54 AM, Stathis Papaioannou stath...@gmail.com
 javascript:_e(%7B%7D,'cvml','stath...@gmail.com'); wrote:

 On 19 May 2015 at 11:05, Jason Resch jasonre...@gmail.com
 javascript:_e(%7B%7D,'cvml','jasonre...@gmail.com'); wrote:
 
 
  On Mon, May 18, 2015 at 10:05 AM, Stathis Papaioannou 
 stath...@gmail.com javascript:_e(%7B%7D,'cvml','stath...@gmail.com');
  wrote:
 
 
 
  On Tuesday, May 19, 2015, Bruno Marchal marc...@ulb.ac.be
 javascript:_e(%7B%7D,'cvml','marc...@ulb.ac.be'); wrote:
 
 
  On 16 May 2015, at 07:10, Stathis Papaioannou wrote:
 
 
 
 
 
  On 13 May 2015, at 11:59 am, Jason Resch jasonre...@gmail.com
 javascript:_e(%7B%7D,'cvml','jasonre...@gmail.com'); wrote:
 
  Chalmers' fading qualia argument shows that if replacing a biological
  neuron with a functionally equivalent silicon neuron changed conscious
  perception, then it would lead to an absurdity, either:
  1. quaila fade/change as silicon neurons gradually replace the
 biological
  ones, leading to a case where the quaila are being completely out of
 touch
  with the functional state of the brain.
  or
  2. the replacement eventually leads to a sudden and complete loss of
 all
  quaila, but this suggests a single neuron, or even a few molecules of
 that
  neuron, when substituted, somehow completely determine the presence of
  quaila
 
  His argument is convincing, but what happens when we replace neurons
 not
  with functionally identical ones, but with neurons that fire
 according to a
  RNG. In all but 1 case, the random firings of the neurons will result
 in
  completely different behaviors, but what about that 1 (immensely
 rare) case
  where the random neuron firings (by chance) equal the firing patterns
 of the
  substituted neurons.
 
  In this case, behavior as observed from the outside is identical.
 Brain
  patterns and activity are similar, but according to computationalism
 the
  consciousness is different, or perhaps a zombie (if all neurons are
 replaced
  with random firing neurons). Presume that the activity of neurons in
 the
  visual cortex is required for visual quaila, and that all neurons in
 the
  visual cortex are replaced with random firing neurons, which by
 chance,
  mimic the behavior of neurons when viewing an apple.
 
  Is this not an example of fading quaila, or quaila desynchronized from
  the brain state? Would this person feel that they are blind, or lack
 visual
  quaila, all the while not being able to express their deficiency? I
 used to
  think when Searle argued this exact same thing would occur when
 substituted
  functionally identical biological neurons with artificial neurons
 that it
  was completely ridiculous, for there would be no room in the
 functionally
  equivalent brain to support thoughts such as help! I can't see, I am
  blind! for the information content in the brain is identical when the
  neurons are functionally identical.
 
  But then how does this reconcile with fading quaila as the result of
  substituting randomly firing neurons? The computations are not the
 same, so
  presumably the consciousness is not the same. But also, the
 information
  content does not support knowing/believing/expressing/thinking
 something is
  wrong. If anything, the information content of this random brain is
 much
  less, but it seems the result is something where the quaila is out of
 sync
  with the global state of the brain. Can anyone else where shed some
 clarity
  on what they think happens, and how to explain it in the rare case of
  luckily working randomly firing neurons, when only partial
 substitutions of
  the neurons in a brain is performed?
 
 
  So Jason, are you still convinced that the random neurons would not be
  conscious? If you are, you are putting the cart before the horse. The
  fading qualia argument makes the case that any process preserving function
  also preserves consciousness. Any process; that computations are one such
  process is fortuitous.
 
 
  But the random neurons do not preserve function, nor does the movie.
  OK?
 
 
  I don't see why you're so sure about this. Function is preserved while the
  randomness corresponds to normal activity; then it all falls apart. If by
  some miracle it continued, then the random brain is as good as a normal
  brain, and I'd say yes to the doctor offering me such a brain. If you
  don't think that counts as computation, OK - but it would still be
  conscious.
 
 
 
  My third-person function would indeed be preserved by such a Miracle
  Brain, but I would strongly doubt it would preserve my first-person. Why
  do you think that the random firing neurons preserve consciousness? Do you
  think they would still preserve consciousness if they became physically
  separated from each other yet maintained the same firing patterns?

 I think the random neurons would preserve consciousness because
 otherwise you could make a 
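
A back-of-the-envelope illustration of just how rare the "lucky" random brain
discussed above would be (my own toy calculation with made-up numbers, not
figures from the thread): if N neurons each either fire or stay silent at
each of T discrete time steps, an RNG reproduces one particular firing
pattern with probability 2^(-N*T).

# Rough odds that independent coin-flip "neurons" reproduce a given firing
# pattern. N and T below are illustrative choices, not physiological data.
from math import log10

N = 1_000_000                  # hypothetical number of replaced neurons
T = 1_000                      # hypothetical number of discrete time steps
log10_p = -N * T * log10(2)    # log10 of 2**(-N * T)
print(f"P(match) ~ 10^({log10_p:.3e})")   # about 10^(-3e8): vanishingly small

The thought experiment turns precisely on that one case: however improbable,
it is logically possible, so the question of what is experienced in it still
has to be answered.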

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread meekerdb

On 5/19/2015 10:44 AM, Bruno Marchal wrote:
The tin-can Turing machine is unbounded in its potential computational complexity; 
there's no reason to be a bio- or silico-chauvinist against it. However, by definition, 
a lookup table has near zero computational complexity and no retained state.


But it is counterfactually correct over a large range of possible inputs. Of course, 
it would have to be infinite to be genuinely counterfactually correct.
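
One way to read "counterfactually correct on a range", sketched in Python
(my own illustration, not Bruno's formalism): the finite table would give the
intended answer for every input in the range it was built for, whether or not
that input actually occurs, but it fails outside that range, which is why a
genuinely counterfactually correct table would have to be infinite.

# Checking a finite lookup table for counterfactual correctness against an
# intended (computable) specification. The function and ranges are illustrative.

def intended(n):
    return n * n                                # the behaviour the table should mimic

table = {n: intended(n) for n in range(100)}    # finite table, built for 0..99

def counterfactually_correct_on(inputs):
    # Correct on `inputs` only if the table would answer as intended for
    # every one of them, actual or merely possible.
    return all(table.get(n) == intended(n) for n in inputs)

print(counterfactually_correct_on(range(100)))   # True: the covered range
print(counterfactually_correct_on(range(200)))   # False: fails beyond it

No finite table passes this test over an unbounded input space.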


When programs are written that determine human safety (e.g. aircraft flight controls), they 
are written in languages that can be proven correct - in the sense of correctly implementing 
a specification and not being able to prove a contradiction. Is there some theorem that 
proves such provably correct programs cannot be intelligent?


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Terren Suydam
Mostly I agree with the absurdity of partial zombies... but couldn't you
say I'm a partial zombie when I'm driving, get lost in thought, and realize
minutes later that I had no awareness or memory of anything I drove past,
yet somehow navigated curves and other cars?

Terren
On May 20, 2015 7:53 PM, Stathis Papaioannou stath...@gmail.com wrote:



 On Wednesday, May 20, 2015, Jason Resch jasonre...@gmail.com wrote:



 On Tue, May 19, 2015 at 12:54 AM, Stathis Papaioannou stath...@gmail.com
  wrote:

 On 19 May 2015 at 11:05, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 10:05 AM, Stathis Papaioannou 
 stath...@gmail.com
  wrote:
 
 
 
  On Tuesday, May 19, 2015, Bruno Marchal marc...@ulb.ac.be wrote:
 
 
  On 16 May 2015, at 07:10, Stathis Papaioannou wrote:
 
 
 
 
 
  On 13 May 2015, at 11:59 am, Jason Resch jasonre...@gmail.com wrote:

  Chalmers' fading qualia argument shows that if replacing a biological
  neuron with a functionally equivalent silicon neuron changed conscious
  perception, then it would lead to an absurdity, either:
  1. qualia fade/change as silicon neurons gradually replace the biological
  ones, leading to a case where the qualia end up completely out of touch
  with the functional state of the brain.
  or
  2. the replacement eventually leads to a sudden and complete loss of all
  qualia, but this suggests a single neuron, or even a few molecules of that
  neuron, when substituted, somehow completely determine the presence of
  qualia.

  His argument is convincing, but what happens when we replace neurons not
  with functionally identical ones, but with neurons that fire according to
  an RNG? In all but one case, the random firings of the neurons will result
  in completely different behaviors, but what about that one (immensely
  rare) case where the random neuron firings (by chance) equal the firing
  patterns of the substituted neurons?

  In this case, behavior as observed from the outside is identical. Brain
  patterns and activity are similar, but according to computationalism the
  consciousness is different, or perhaps a zombie (if all neurons are
  replaced with random firing neurons). Presume that the activity of neurons
  in the visual cortex is required for visual qualia, and that all neurons
  in the visual cortex are replaced with random firing neurons which, by
  chance, mimic the behavior of neurons when viewing an apple.

  Is this not an example of fading qualia, or qualia desynchronized from
  the brain state? Would this person feel that they are blind, or lack
  visual qualia, all the while not being able to express their deficiency?
  I used to think it was completely ridiculous when Searle argued this exact
  same thing would occur when biological neurons were substituted with
  functionally identical artificial neurons, for there would be no room in
  the functionally equivalent brain to support thoughts such as "help! I
  can't see, I am blind!", since the information content in the brain is
  identical when the neurons are functionally identical.

  But then how does this reconcile with fading qualia as the result of
  substituting randomly firing neurons? The computations are not the same,
  so presumably the consciousness is not the same. But also, the information
  content does not support knowing/believing/expressing/thinking that
  something is wrong. If anything, the information content of this random
  brain is much less, but it seems the result is something where the qualia
  are out of sync with the global state of the brain. Can anyone here shed
  some clarity on what they think happens, and how to explain it in the rare
  case of luckily working randomly firing neurons, when only a partial
  substitution of the neurons in a brain is performed?
 
 
  So Jason, are you still convinced that the random neurons would not be
  conscious? If you are, you are putting the cart before the horse. The
  fading qualia argument makes the case that any process preserving function
  also preserves consciousness. Any process; that computations are one such
  process is fortuitous.
 
 
  But the random neurons do not preserve function, nor does the movie.
  OK?
 
 
  I don't see why you're so sure about this. Function is preserved while the
  randomness corresponds to normal activity; then it all falls apart. If by
  some miracle it continued, then the random brain is as good as a normal
  brain, and I'd say yes to the doctor offering me such a brain. If you
  don't think that counts as computation, OK - but it would still be
  conscious.
 
 
 
  My third-person function would indeed be preserved by such a Miracle
  Brain, but I would strongly doubt it would preserve my first-person. Why
  do you think that the random firing neurons preserve consciousness? Do you
  think they would still preserve consciousness if they became physically
  separated from each other yet maintained the same firing 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-20 Thread Terren Suydam
That's the point, though: I do notice a change, from not being conscious of
driving to becoming conscious of it again. And there's no denying the
difference between the two.

Terren
On May 20, 2015 11:26 PM, Stathis Papaioannou stath...@gmail.com wrote:



 On Thursday, May 21, 2015, Terren Suydam terren.suy...@gmail.com wrote:

  Mostly I agree with the absurdity of partial zombies... but couldn't you
  say I'm a partial zombie when I'm driving, get lost in thought, and realize
  minutes later that I had no awareness or memory of anything I drove past,
  yet somehow navigated curves and other cars?

  There are cases like the one you describe, and others such as cortical
  blindness where the patient believes he can see. But the situation to
  consider is one where the subject is lucid and would normally notice a
  change in his experiences, in order to show that there is a difference
  between the two.


 --
 Stathis Papaioannou




