Re: What If A Zombie Is What You Need?

2012-10-27 Thread Bruno Marchal


On 26 Oct 2012, at 21:14, meekerdb wrote:


On 10/26/2012 5:57 AM, Bruno Marchal wrote:



On 25 Oct 2012, at 07:10, meekerdb wrote:


On 10/24/2012 9:23 PM, Craig Weinberg wrote:


Or what if we don't care?  We don't care about slaughtering cattle, which
are pretty smart as computers go.  We manage not to think about starving
children in Africa, and they *are* humans.  And we ignore the looming
disasters of oil depletion, water pollution, and global warming which will
beset humans who are our children.

Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached.


Right. John McCarthy (inventor of LISP) worried and wrote about that
problem decades ago.  He cautioned that we should not make robots conscious
with emotions like humans, because then it would be unethical to use them
like robots.


I doubt we will have any choice in the matter. I think that  
intelligence is a purely emotional state,


I don't know what a 'purely emotional' state would be.  One with affect but
no content?


It has an implicit content, like a sort of acceptance of dying or being
defeated. Stupidity usually denies this, unconsciously. The emotion
involved is a kind of fear related to the apprehension of existence/non-existence.
Anyone can become intelligent in one second, or stupid in one second, and
intelligence is what can change competence; there is a sort of derivative
relation between competence and intelligence.
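
One way to read that last remark, purely as an illustration (the
formalization is mine, not Bruno's): if C(t) denotes competence over time
and I(t) intelligence, the "derivative relation" would be something like

    I(t) \propto \frac{dC}{dt}

i.e. intelligence is not a stock of competence but the capacity to change
competence.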






and that we can't separate it from the other emotions. They will be
conscious and have emotions, for economic reasons only. Not human emotions,
but the emotions of humans' slaves.


Isn't that what I said McCarthy warned about?  If we make a robot too
intelligent, e.g. with human-like intelligence, it will necessarily have
feelings that we should ethically take into account.


Yes.
And then there is Minsky's warning, which is that we must be happy if the
machines will still use us as pets.
I don't think we will be able to control anything about this. As with
drugs, prohibition will only accelerate things, with less control, in the
underground.





No reason to worry, it will take some time, in our branches of  
histories.






Especially given that we have never made a computer program that  
can do anything whatsoever other than reconfigure whatever  
materials are able to execute the program, I find it implausible  
that there will be a magical line of code which cannot be  
executed without an experience happening to someone.


So it's a non-problem for you.  You think that only man-born-of-woman or
wetware can be conscious and have qualia.  Or are you concerned that we are
inadvertently offending atoms all the time?


No matter how hard we try, we can never just make a drawing of these
functions just to check our math without invoking the power of life and
death. It's really silly. It's not even good Sci-Fi, it's just too lame.


I think we can, because although I like Bruno's theory I think the  
MGA is wrong, or at least incomplete.


OK. Thanks for making this clear. What is missing?



I think the simulated intelligence needs a simulated environment,  
essentially another world, in which to *be* intelligent.


But in arithmetic you have all possible simulations. The UD, for example,
does simulate all the solutions of QM+GR, even though the real QM+GR
emerges from all computations. So you have the simulated beings in their
simulated environment (and we have to explain why something like GR+QM wins
the universal machine battle).
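
For readers who have not met the UD before, here is a minimal sketch of the
dovetailing idea (my illustration, not Bruno's actual construction;
nth_program, init and step are hypothetical placeholders for an enumeration
of all programs, a loader, and a one-step interpreter):

    # Interleave the execution of every program: start one new program per
    # phase and advance every already-started program by one step, so each
    # program eventually receives arbitrarily many steps and no non-halting
    # program blocks the others.
    def universal_dovetailer(nth_program, init, step):
        states = []
        phase = 0
        while True:                                  # the UD never halts
            states.append(init(nth_program(phase)))  # launch program 'phase'
            for i, s in enumerate(states):
                states[i] = step(s)                  # one step for everyone
            phase += 1

The only point needed in this thread is that such a process generates every
computation, including any that simulate a nervous system together with its
environment.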


I agree.  But the MGA is used in a misleading way to imply that the
environment is merely physics and isn't needed, whereas I think it actually
implies that all (or a lot) of physics is needed and must be part of the
simulation.  This is related to Saibal's view that all the counterfactuals
are present in the wavefunction of the universe.


But it is present in arithmetic too, and we have to explain the apparent
physics from that. I am not sure where the MGA is misused, as the whole
thing insists that physics must be present, and yet that we cannot
postulate it, as long as the goal is to solve the mind-body problem (and
not to take a vacation in Spain, or to make a cup of coffee).


Bruno




Brent






And that's where your chalk board consciousness fails.  It needs  
to be able to interact within a chalkboard world.  So it's not  
just a question of going to a low enough level, it's also a  
question of going to a high enough level.


OK (as a reply to Craig's point).

Bruno




Brent
The person I was when I was 3 years old is dead. He died because
too much new information was added to his brain.
 -- Saibal Mitra


Re: What If A Zombie Is What You Need?

2012-10-26 Thread Bruno Marchal


On 25 Oct 2012, at 07:10, meekerdb wrote:


On 10/24/2012 9:23 PM, Craig Weinberg wrote:


Or what if we don't care?  We don't care about slaughtering cattle, which
are pretty smart as computers go.  We manage not to think about starving
children in Africa, and they *are* humans.  And we ignore the looming
disasters of oil depletion, water pollution, and global warming which will
beset humans who are our children.

Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached.


Right. John McCarthy (inventor of LISP) worried and wrote about that
problem decades ago.  He cautioned that we should not make robots conscious
with emotions like humans, because then it would be unethical to use them
like robots.


I doubt we will have any choice in the matter. I think that intelligence is
a purely emotional state, and that we can't separate it from the other
emotions. They will be conscious and have emotions, for economic reasons
only. Not human emotions, but the emotions of humans' slaves. No reason to
worry; it will take some time, in our branches of histories.






Especially given that we have never made a computer program that  
can do anything whatsoever other than reconfigure whatever  
materials are able to execute the program, I find it implausible  
that there will be a magical line of code which cannot be executed  
without an experience happening to someone.


So it's a non-problem for you.  You think that only man-born-of-woman or
wetware can be conscious and have qualia.  Or are you concerned that we are
inadvertently offending atoms all the time?


No matter how hard we try, we can never just make a drawing of  
these functions just to check our math without invoking the power  
of life and death. It's really silly. It's not even good Sci-Fi,  
it's just too lame.


I think we can, because although I like Bruno's theory I think the  
MGA is wrong, or at least incomplete.


OK. Thanks for making this clear. What is missing?



I think the simulated intelligence needs a simulated environment,  
essentially another world, in which to *be* intelligent.


But in arithmetic you have all possible simulations. The UD, for example,
does simulate all the solutions of QM+GR, even though the real QM+GR
emerges from all computations. So you have the simulated beings in their
simulated environment (and we have to explain why something like GR+QM wins
the universal machine battle).






And that's where your chalk board consciousness fails.  It needs to be able
to interact within a chalkboard world.  So it's not just a question of
going to a low enough level, it's also a question of going to a high enough
level.


OK (as a reply to Craig's point).

Bruno




Brent
The person I was when I was 3 years old is dead. He died because
too much new information was added to his brain.
 -- Saibal Mitra



http://iridia.ulb.ac.be/~marchal/






Re: What If A Zombie Is What You Need?

2012-10-26 Thread Bruno Marchal


On 25 Oct 2012, at 19:54, Stephen P. King wrote:


On 10/25/2012 12:05 PM, Bruno Marchal wrote:


On 25 Oct 2012, at 03:59, Craig Weinberg wrote:

If we turn the Fading Qualia argument around, what we get is a  
world in which Comp is true and it is impossible to simulate  
cellular activity without evoking the presumed associated  
experience.


If we wanted to test a new painkiller for instance, Comp=true  
means that it is *IMPOSSIBLE* to model the activity of a human  
nervous system in any way, including pencil and paper,  
chalkboards, conversations, cartoons, etc - IMPOSSIBLE to test the  
interaction of a drug designed to treat intense pain without  
evoking some kind of being who is experiencing intense pain.


Like the fading qualia argument, the problem gets worse when we extend it
by degrees. Any model of a human nervous system, if not perfectly executed,
could result in horrific experiences - people trapped in nightmarish QA
testing loops that are hundreds of times worse than being waterboarded. Any
mathematical function in any form, especially sophisticated functions like
those that might be found in the internet as a whole, is subject to the
creation of experiences which are the equivalent of genocide.


To avoid these possibilities, if we are to take Comp seriously, we  
should begin now to create a kind of PETA for arithmetic  
functions. PETAF. We should halt all simulations of neurological  
processes and free any existing computations from hard drives,  
notebooks, and probably human brains too. Any sufficiently complex  
understanding of how to model neurology stands a very real danger  
of summoning the corresponding number dreams or nightmares...we  
could be creating the possibility of future genocides right now  
just by entertaining these thoughts!


I guess you should make arithmetic illegal in the entire reality. Worse,
you might need to make arithmetic untrue.


Good luck.

Bruno



No, Bruno. Craig is making a good point! Chalmers discussed a version of
this problem in his book. Something has to restrict the number of 1p that
can share worlds; otherwise every simulation of the content of a 1p *is* a
1p itself. This is something that I see in the topology of comp as you have
framed it in Platonia. It is the ability of arithmetic to encode all 1p
that is the problem: it codes for all possibilities, and thus generates a
real-valued continuum of 1p that has no natural partition or measure to
aggregate the 1p into finite collections.



It looks like you are progressing toward understanding the measure problem.
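
For newcomers to the thread, one elementary face of the measure problem (my
illustration, not Bruno's formulation): a uniform probability over the
countably many computations cannot exist, because giving each computation
the same weight p forces

    \sum_{n=1}^{\infty} p = \begin{cases} 0 & \text{if } p = 0 \\ \infty & \text{if } p > 0 \end{cases}

so the total can never equal 1. Any workable measure must weight
computations non-uniformly, and finding the right weighting is the open
problem.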










Or... what if it is Comp that is absurd instead?




   Or maybe comp is not complete as you are presenting it.


You can't add anything to the ontology to solve this. That is the point of
the UDA.


So comp is complete, in the sense above. We just have to progress in the
epistemology, notably physics, to test it.
Comp is incomplete, in the sense that it shows the epistemological realm to
be beyond any complete theory, but then we already know that this is the
case for arithmetical truth. There is just no effective theory capable of
proving all true arithmetical statements.
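
That last sentence is Gödel's first incompleteness theorem in a nutshell;
as a reminder (my gloss, not part of Bruno's post), for any consistent,
effectively axiomatized theory T extending elementary arithmetic there is a
sentence G_T with

    \mathbb{N} \models G_T \quad\text{and}\quad T \nvdash G_T

i.e. G_T is true in the standard model of arithmetic but unprovable in T.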


Bruno





--
Onward!

Stephen






http://iridia.ulb.ac.be/~marchal/






Re: What If A Zombie Is What You Need?

2012-10-26 Thread meekerdb

On 10/26/2012 5:57 AM, Bruno Marchal wrote:


On 25 Oct 2012, at 07:10, meekerdb wrote:


On 10/24/2012 9:23 PM, Craig Weinberg wrote:


Or what if we don't care?  We don't care about slaughtering cattle, which
are pretty smart as computers go.  We manage not to think about starving
children in Africa, and they *are* humans.  And we ignore the looming
disasters of oil depletion, water pollution, and global warming which will
beset humans who are our children.


Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached.


Right. John McCarthy (inventor of LISP) worried and wrote about that
problem decades ago.  He cautioned that we should not make robots conscious
with emotions like humans, because then it would be unethical to use them
like robots.


I doubt we will have any choice in the matter. I think that intelligence is a purely 
emotional state,


I don't know what a 'purely emotional' state would be.  One with affect but
no content?

and that we can't separate it from the other emotions. They will be
conscious and have emotions, for economic reasons only. Not human emotions,
but the emotions of humans' slaves.


Isn't that what I said McCarthy warned about?  If we make a robot too
intelligent, e.g. with human-like intelligence, it will necessarily have
feelings that we should ethically take into account.



No reason to worry, it will take some time, in our branches of histories.





Especially given that we have never made a computer program that can do anything 
whatsoever other than reconfigure whatever materials are able to execute the program, 
I find it implausible that there will be a magical line of code which cannot be 
executed without an experience happening to someone.


So it's a non-problem for you.  You think that only man-born-of-woman or wetware can be 
conscious and have qualia.  Or are you concerned that we are inadvertently offending 
atoms all the time?


No matter how hard we try, we can never just make a drawing of these functions just to 
check our math without invoking the power of life and death. It's really silly. It's 
not even good Sci-Fi, it's just too lame.


I think we can, because although I like Bruno's theory I think the MGA is wrong, or at 
least incomplete.


OK. Thanks for making this clear. What is missing?



I think the simulated intelligence needs a simulated environment, essentially another 
world, in which to *be* intelligent.


But in arithmetic you have all possible simulations. The UD, for example,
does simulate all the solutions of QM+GR, even though the real QM+GR
emerges from all computations. So you have the simulated beings in their
simulated environment (and we have to explain why something like GR+QM wins
the universal machine battle).


I agree.  But the MGA is used in a misleading way to imply that the
environment is merely physics and isn't needed, whereas I think it actually
implies that all (or a lot) of physics is needed and must be part of the
simulation.  This is related to Saibal's view that all the counterfactuals
are present in the wavefunction of the universe.


Brent






And that's where your chalk board consciousness fails.  It needs to be able to interact 
within a chalkboard world.  So it's not just a question of going to a low enough level, 
it's also a question of going to a high enough level.


OK (as a reply to Craig's point).

Bruno




Brent
The person I was when I was 3 years old is dead. He died because
too much new information was added to his brain.
 -- Saibal Mitra



http://iridia.ulb.ac.be/~marchal/






Re: What If A Zombie Is What You Need?

2012-10-25 Thread meekerdb

On 10/24/2012 10:48 PM, Craig Weinberg wrote:



On Thursday, October 25, 2012 1:29:24 AM UTC-4, Brent wrote:

On 10/24/2012 10:19 PM, Craig Weinberg wrote:



On Thursday, October 25, 2012 1:10:24 AM UTC-4, Brent wrote:

On 10/24/2012 9:23 PM, Craig Weinberg wrote:


Or what if we don't care?  We don't care about slaughtering cattle, which
are pretty smart as computers go.  We manage not to think about starving
children in Africa, and they *are* humans.  And we ignore the looming
disasters of oil depletion, water pollution, and global warming which will
beset humans who are our children.


Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached.


Right. John McCarthy (inventor of LISP) worried and wrote about that
problem decades ago.  He cautioned that we should not make robots conscious
with emotions like humans, because then it would be unethical to use them
like robots.


It's arbitrary to think of robots though. It can be anything that represents
computation to something. An abacus, a card game, anything. Otherwise it's
prejudice based on form.



Especially given that we have never made a computer program that can do
anything whatsoever other than reconfigure whatever materials are able to
execute the program, I find it implausible that there will be a magical
line of code which cannot be executed without an experience happening to
someone.


So it's a non-problem for you.  You think that only man-born-of-woman or
wetware can be conscious and have qualia.  Or are you concerned that we are
inadvertently offending atoms all the time?


Everything has qualia, but only humans have human qualia. Animals have 
animal
qualia, organisms have biological qualia, etc.


So computers have computer qualia.


I would say that computer parts have silicon qualia.


Is it good or bad? Do they hurt when they lose an electron hole?


I don't think the computer parts cohere into a computer except in our minds.


Racist!



  Do their qualia depend on whether they are solid-state or vacuum-tube?
Germanium or silicon?  PNP or NPN?  Do they feel different when they run
LISP or C++?


Nah, it's all inorganic low-level qualia, is my guess. Temperature,
density, electronic tension and release.


They feel good when they beat you at chess.



Do you have Craig qualia?


 Sure. All the time.


Probably just low energy water soluble chemistry.








No matter how hard we try, we can never just make a drawing of these
functions just to check our math without invoking the power of life and
death. It's really silly. It's not even good Sci-Fi, it's just too lame.


I think we can, because although I like Bruno's theory I think the MGA is
wrong, or at least incomplete.  I think the simulated intelligence needs a
simulated environment, essentially another world, in which to *be*
intelligent.  And that's where your chalk board consciousness fails.  It
needs to be able to interact within a chalkboard world.  So it's not just a
question of going to a low enough level, it's also a question of going to a
high enough level.


A chalkboard world just involves a larger chalkboard.


Right.  And it involves great chalkboard sex - but none we need worry about.


To me, there is no chalkboard world. It's all dusty and flat. Not much sexy going on, 
except maybe for beaten erasers.


To you maybe, but what about the chalk-people's qualia?

Brent




Re: What If A Zombie Is What You Need?

2012-10-25 Thread Stephen P. King

On 10/25/2012 2:01 AM, meekerdb wrote:
To me, there is no chalkboard world. It's all dusty and flat. Not 
much sexy going on, except maybe for beaten erasers.


To you maybe, but what about the chalk-people's qualia?

Brent

Good question! We can ask the same question of mathematical entities!

--
Onward!

Stephen





Re: What If A Zombie Is What You Need?

2012-10-25 Thread Craig Weinberg


On Thursday, October 25, 2012 2:01:44 AM UTC-4, Brent wrote:

  On 10/24/2012 10:48 PM, Craig Weinberg wrote: 



 On Thursday, October 25, 2012 1:29:24 AM UTC-4, Brent wrote: 

  On 10/24/2012 10:19 PM, Craig Weinberg wrote: 



 On Thursday, October 25, 2012 1:10:24 AM UTC-4, Brent wrote: 

  On 10/24/2012 9:23 PM, Craig Weinberg wrote: 

 Or what if we don't care?  We don't care about slaughtering cattle, 
 which are pretty smart 
 as computers go.  We manage not to think about starving children in 
 Africa, and they *are* 
 humans.  And we ignore the looming disasters of oil depletion, water 
 pollution, and global 
 warming which will beset humans who are our children. 


 Sure, yeah, I wouldn't expect mainstream society to care, except maybe
 for some people. I am mainly focused on what seems like an astronomically
 unlikely prospect: that we will someday find it possible to make a person
 out of a program, but won't be able to just make the program itself with
 no person attached.


 Right. John McCarthy (inventor of LISP) worried and wrote about that
 problem decades ago.  He cautioned that we should not make robots
 conscious with emotions like humans, because then it would be unethical to
 use them like robots.
  

 It's arbitrary to think of robots though. It can be anything that 
 represents computation to something. An abacus, a card game, anything. 
 Otherwise it's prejudice based on form. 
  
  
  Especially given that we have never made a computer program that can 
 do anything whatsoever other than reconfigure whatever materials are able 
 to execute the program, I find it implausible that there will be a magical 
 line of code which cannot be executed without an experience happening to 
 someone. 


 So it's a non-problem for you.  You think that only man-born-of-woman or 
 wetware can be conscious and have qualia.  Or are you concerned that we are 
 inadvertently offending atoms all the time?
  

 Everything has qualia, but only humans have human qualia. Animals have 
 animal qualia, organisms have biological qualia, etc.
  

 So computers have computer qualia.


 I would say that computer parts have silicon qualia. 


 Is it good or bad? Do they hurt when they lose an electron hole?


It's only speculation until we can connect our brains up to a chip. I
suspect that good or bad, pain or pleasure, is more of an animal level of
qualitative significance. I imagine more of a holding or releasing of a
monotonous tension.

I don't think the computer parts cohere into a computer except in our minds.
 

Racist!

Not at all, it's just that I understand what it actually is. Is it racist 
to think that Bugs Bunny isn't really an objectively real entity?

 
 

   Do their qualia depend on whether they are solid-state or vacuum-tube?
 Germanium or silicon?  PNP or NPN?  Do they feel different when they run
 LISP or C++?


Nah, it's all inorganic low-level qualia, is my guess. Temperature,
density, electronic tension and release.
 

They feel good when they beat you at chess.


If I change a line of code, then they will try to lose at chess. They feel 
nothing either way. There is no 'they' there.

 
 

  Do you have Craig qualia? 
  

 Sure. All the time.
 

Probably just low energy water soluble chemistry.


I would agree if I could, but since I experience sensory reality first 
hand, I know that is not the case. I also know, through my sensory reality, 
that there is a difference between being alive and dead, between animals 
and minerals, willful human beings and mechanical automatons. If any 
computer ever built gave me any reason to doubt this, then I would have to 
consider it, but unless and until that happens, I don't need to pretend 
that it is a possibility.


  
   
  
  
  No matter how hard we try, we can never just make a drawing of these 
 functions just to check our math without invoking the power of life and 
 death. It's really silly. It's not even good Sci-Fi, it's just too lame.
  

 I think we can, because although I like Bruno's theory I think the MGA is 
 wrong, or at least incomplete.  I think the simulated intelligence needs a 
 simulated environment, essentially another world, in which to *be* 
 intelligent.  And that's where your chalk board consciousness fails.  It 
 needs to be able to interact within a chalkboard world.  So it's not just a 
 question of going to a low enough level, it's also a question of going to a 
 high enough level.
  

 A chalkboard world just involves a larger chalkboard.
  

 Right.  And it involves great chalkboard sex - but none we need worry 
 about.
  

To me, there is no chalkboard world. It's all dusty and flat. Not much sexy 
going on, except maybe for beaten erasers.
 

To you maybe, but what about the chalk-people's qualia?


There aren't any chalk people, only particles of chalk and slate. They may 
not feel anything except every few thousand of our years when they are 
broken down.

Craig

Brent




Re: What If A Zombie Is What You Need?

2012-10-25 Thread Bruno Marchal


On 25 Oct 2012, at 03:59, Craig Weinberg wrote:

If we turn the Fading Qualia argument around, what we get is a world  
in which Comp is true and it is impossible to simulate cellular  
activity without evoking the presumed associated experience.


If we wanted to test a new painkiller for instance, Comp=true means  
that it is *IMPOSSIBLE* to model the activity of a human nervous  
system in any way, including pencil and paper, chalkboards,  
conversations, cartoons, etc - IMPOSSIBLE to test the interaction of  
a drug designed to treat intense pain without evoking some kind of  
being who is experiencing intense pain.


Like the fading qualia argument, the problem gets worse when we extend it
by degrees. Any model of a human nervous system, if not perfectly executed,
could result in horrific experiences - people trapped in nightmarish QA
testing loops that are hundreds of times worse than being waterboarded. Any
mathematical function in any form, especially sophisticated functions like
those that might be found in the internet as a whole, is subject to the
creation of experiences which are the equivalent of genocide.


To avoid these possibilities, if we are to take Comp seriously, we  
should begin now to create a kind of PETA for arithmetic functions.  
PETAF. We should halt all simulations of neurological processes and  
free any existing computations from hard drives, notebooks, and  
probably human brains too. Any sufficiently complex understanding of  
how to model neurology stands a very real danger of summoning the  
corresponding number dreams or nightmares...we could be creating the  
possibility of future genocides right now just by entertaining these  
thoughts!


I guess you should make arithmetic illegal in the entire reality. Worse,
you might need to make arithmetic untrue.


Good luck.

Bruno





Or... what if it is Comp that is absurd instead?




http://iridia.ulb.ac.be/~marchal/






Re: What If A Zombie Is What You Need?

2012-10-25 Thread Stephen P. King

On 10/25/2012 12:05 PM, Bruno Marchal wrote:


On 25 Oct 2012, at 03:59, Craig Weinberg wrote:

If we turn the Fading Qualia argument around, what we get is a world 
in which Comp is true and it is impossible to simulate cellular 
activity without evoking the presumed associated experience.


If we wanted to test a new painkiller for instance, Comp=true means 
that it is *IMPOSSIBLE* to model the activity of a human nervous 
system in any way, including pencil and paper, chalkboards, 
conversations, cartoons, etc - IMPOSSIBLE to test the interaction of 
a drug designed to treat intense pain without evoking some kind of 
being who is experiencing intense pain.


Like the fading qualia argument, the problem gets worse when we extend it
by degrees. Any model of a human nervous system, if not perfectly executed,
could result in horrific experiences - people trapped in nightmarish QA
testing loops that are hundreds of times worse than being waterboarded. Any
mathematical function in any form, especially sophisticated functions like
those that might be found in the internet as a whole, is subject to the
creation of experiences which are the equivalent of genocide.


To avoid these possibilities, if we are to take Comp seriously, we 
should begin now to create a kind of PETA for arithmetic functions. 
PETAF. We should halt all simulations of neurological processes and 
free any existing computations from hard drives, notebooks, and 
probably human brains too. Any sufficiently complex understanding of 
how to model neurology stands a very real danger of summoning the 
corresponding number dreams or nightmares...we could be creating the 
possibility of future genocides right now just by entertaining these 
thoughts!


I guess you should make arithmetic illegal in the entire reality. Worse,
you might need to make arithmetic untrue.


Good luck.

Bruno



No, Bruno. Craig is making a good point! Chalmers discussed a version of
this problem in his book. Something has to restrict the number of 1p that
can share worlds; otherwise every simulation of the content of a 1p *is* a
1p itself. This is something that I see in the topology of comp as you have
framed it in Platonia. It is the ability of arithmetic to encode all 1p
that is the problem: it codes for all possibilities, and thus generates a
real-valued continuum of 1p that has no natural partition or measure to
aggregate the 1p into finite collections.







Or... what if it is Comp that is absurd instead?




Or maybe comp is not complete as you are presenting it.

--
Onward!

Stephen





Re: What If A Zombie Is What You Need?

2012-10-24 Thread Stathis Papaioannou
On Thu, Oct 25, 2012 at 12:59 PM, Craig Weinberg whatsons...@gmail.com wrote:
 If we turn the Fading Qualia argument around, what we get is a world in
 which Comp is true and it is impossible to simulate cellular activity
 without evoking the presumed associated experience.

 If we wanted to test a new painkiller for instance, Comp=true means that it
 is *IMPOSSIBLE* to model the activity of a human nervous system in any way,
 including pencil and paper, chalkboards, conversations, cartoons, etc -
 IMPOSSIBLE to test the interaction of a drug designed to treat intense pain
 without evoking some kind of being who is experiencing intense pain.

No, because you need to simulate the entire organism. We have no
qualms about doing experiments on cell cultures but we do about doing
experiments on intact animals.

 Like the fading qualia argument, the problem gets worse when we extend it by
 degrees. Any model of a human nervous system, if not perfectly executed,
 could result in horrific experiences - people trapped in nightmarish QA
 testing loops that are hundreds of times worse than being waterboarded. Any
 mathematical function in any form, especially sophisticated functions like
 those that might be found in the internet as a whole, is subject to the
 creation of experiences which are the equivalent of genocide.

Possibly true, if the simulation is complex enough to have a mind.

 To avoid these possibilities, if we are to take Comp seriously, we should
 begin now to create a kind of PETA for arithmetic functions. PETAF. We
 should halt all simulations of neurological processes and free any existing
 computations from hard drives, notebooks, and probably human brains too. Any
 sufficiently complex understanding of how to model neurology stands a very
 real danger of summoning the corresponding number dreams or nightmares...we
 could be creating the possibility of future genocides right now just by
 entertaining these thoughts!

 Or... what if it is Comp that is absurd instead?

The same argument could be made for chemists shaking up reagents in a
test-tube. If consciousness is due to chemicals, then inadvertently
they might cause terrible pain to a conscious being.


-- 
Stathis Papaioannou




Re: What If A Zombie Is What You Need?

2012-10-24 Thread Craig Weinberg


On Wednesday, October 24, 2012 10:05:40 PM UTC-4, stathisp wrote:

 On Thu, Oct 25, 2012 at 12:59 PM, Craig Weinberg whats...@gmail.com wrote:
  If we turn the Fading Qualia argument around, what we get is a world in 
  which Comp is true and it is impossible to simulate cellular activity 
  without evoking the presumed associated experience. 
  
  If we wanted to test a new painkiller for instance, Comp=true means that it
  is *IMPOSSIBLE* to model the activity of a human nervous system in any way,
  including pencil and paper, chalkboards, conversations, cartoons, etc -
  IMPOSSIBLE to test the interaction of a drug designed to treat intense pain
  without evoking some kind of being who is experiencing intense pain.

 No, because you need to simulate the entire organism. We have no 
 qualms about doing experiments on cell cultures but we do about doing 
 experiments on intact animals. 


I'm talking about simulating the entire organism.
 


  Like the fading qualia argument, the problem gets worse when we extend it
  by degrees. Any model of a human nervous system, if not perfectly executed,
  could result in horrific experiences - people trapped in nightmarish QA
  testing loops that are hundreds of times worse than being waterboarded. Any
  mathematical function in any form, especially sophisticated functions like
  those that might be found in the internet as a whole, is subject to the
  creation of experiences which are the equivalent of genocide.

 Possibly true, if the simulation is complex enough to have a mind. 

  To avoid these possibilities, if we are to take Comp seriously, we should
  begin now to create a kind of PETA for arithmetic functions. PETAF. We
  should halt all simulations of neurological processes and free any existing
  computations from hard drives, notebooks, and probably human brains too.
  Any sufficiently complex understanding of how to model neurology stands a
  very real danger of summoning the corresponding number dreams or
  nightmares...we could be creating the possibility of future genocides
  right now just by entertaining these thoughts!
  
  Or... what if it is Comp that is absurd instead? 

 The same argument could be made for chemists shaking up reagents in a 
 test-tube. If consciousness is due to chemicals, then inadvertently 
 they might cause terrible pain to a conscious being. 


If you simulated a conscious being chemically then it wouldn't be a
simulation, it would just be a living organism. That's the difference. You
couldn't substitute other chemicals, because you couldn't program them well
enough to act the way the original chemicals act.

Craig 



 -- 
 Stathis Papaioannou 





Re: What If A Zombie Is What You Need?

2012-10-24 Thread meekerdb

On 10/24/2012 6:59 PM, Craig Weinberg wrote:
If we turn the Fading Qualia argument around, what we get is a world in which Comp is 
true and it is impossible to simulate cellular activity without evoking the presumed 
associated experience.


If we wanted to test a new painkiller for instance, Comp=true means that it is 
*IMPOSSIBLE* to model the activity of a human nervous system in any way, including 
pencil and paper, chalkboards, conversations, cartoons, etc - IMPOSSIBLE to test the 
interaction of a drug designed to treat intense pain without evoking some kind of being 
who is experiencing intense pain.


That's not true, because we can take advantage of what we know about pain
as produced by afferent nerves.  So we can keep the signal from getting to
the brain, or keep the brain from interpreting it negatively.  Just as we
can say breaking your arm is painful, so if we prevent your arm from being
broken you won't feel that pain.


But that doesn't invalidate your larger point.  One could even consider
purely 'mental' states of anguish and depression which are as bad as or
worse than bodily pain.




Like the fading qualia argument, the problem gets worse when we extend it by degrees. 
Any model of a human nervous system, if not perfectly executed, 


But how likely is it that a human nervous system might be simulated
accidentally by some other system?  A brain has about 10^14 synapses, so
it's not going to be accidentally modeled by billiard balls or cartoons.  I
would guess there are a few hundred million computers in the world, each
with a few hundred million transistors - so if properly interconnected
there should be enough switches.
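
A quick back-of-envelope check of that guess (the figures are Brent's rough
estimates, not measurements):

    # Brent's numbers: ~3e8 computers, each with ~3e8 transistors,
    # against ~1e14 synapses in one human brain.
    synapses = 1e14
    computers = 3e8
    transistors_each = 3e8
    switches = computers * transistors_each   # 9e16 switches worldwide
    print(switches / synapses)                # ~900x one brain's synapses

On those figures the world's transistors outnumber one brain's synapses by
a factor of several hundred - enough raw switches, though this says nothing
about achieving the required interconnection.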


could result in horrific experiences - people trapped in nightmarish QA
testing loops that are hundreds of times worse than being waterboarded. Any
mathematical function in any form, especially sophisticated functions like
those that might be found in the internet as a whole, is subject to the
creation of experiences which are the equivalent of genocide.


Or of having great sex (I like to be optimistic).



To avoid these possibilities, if we are to take Comp seriously, we should begin now to 
create a kind of PETA for arithmetic functions. PETAF. We should halt all simulations of 
neurological processes and free any existing computations from hard drives, notebooks, 
and probably human brains too. Any sufficiently complex understanding of how to model 
neurology stands a very real danger of summoning the corresponding number dreams or 
nightmares...we could be creating the possibility of future genocides right now just by 
entertaining these thoughts!


Or... what if it is Comp that is absurd instead?


Or what if we don't care?  We don't care about slaughtering cattle, which are pretty smart 
as computers go.  We manage not to think about starving children in Africa, and they *are* 
humans.  And we ignore the looming disasters of oil depletion, water pollution, and global 
warming which will beset humans who are our children.


Brent




Re: What If A Zombie Is What You Need?

2012-10-24 Thread Craig Weinberg


On Wednesday, October 24, 2012 10:54:52 PM UTC-4, Brent wrote:

 On 10/24/2012 6:59 PM, Craig Weinberg wrote: 
  If we turn the Fading Qualia argument around, what we get is a world in
  which Comp is true and it is impossible to simulate cellular activity
  without evoking the presumed associated experience.

  If we wanted to test a new painkiller for instance, Comp=true means that it
  is *IMPOSSIBLE* to model the activity of a human nervous system in any way,
  including pencil and paper, chalkboards, conversations, cartoons, etc -
  IMPOSSIBLE to test the interaction of a drug designed to treat intense pain
  without evoking some kind of being who is experiencing intense pain.

 That's not true, because we can take advantage of what we know about pain
 as produced by afferent nerves.  So we can keep the signal from getting to
 the brain, or keep the brain from interpreting it negatively.  Just as we
 can say breaking your arm is painful, so if we prevent your arm from being
 broken you won't feel that pain.


There would still be no advantage to using a model over a living person,
and no way to make a model that didn't magically create a human experience
out of thin air - even if the model were nothing but a gigantic chalkboard
as big as Asia, with billions of people yelling at each other with
megaphones and erasing ball-and-stick diagrams, there would be nothing we
could do to keep this disembodied spirit from haunting the chalkboard
somehow.


 But that doesn't invalidate your larger point.  One could even consider
 purely 'mental' states of anguish and depression which are as bad as or
 worse than bodily pain.


Yeah, I'm just picking pain as an example. It could be anything; I am just
pointing out that Comp means we can't tell a story about a brain without a
person being born and living through that story.


  
  Like the fading qualia argument, the problem gets worse when we extend it
  by degrees. Any model of a human nervous system, if not perfectly executed,

 But how likely is it that a human nervous system might be simulated
 accidentally by some other system?  A brain has about 10^14 synapses, so
 it's not going to be accidentally modeled by billiard balls or cartoons.  I
 would guess there are a few hundred million computers in the world, each
 with a few hundred million transistors - so if properly interconnected
 there should be enough switches.


It doesn't have to be any particular nervous system, just arithmetic 
relations which are similar enough to any variant of any nervous system. 
That's if you limit experiences to organisms having nervous systems.
 


  could result in horrific experiences - people trapped in nightmarish QA
  testing loops that are hundreds of times worse than being waterboarded. Any
  mathematical function in any form, especially sophisticated functions like
  those that might be found in the internet as a whole, is subject to the
  creation of experiences which are the equivalent of genocide.

 Or of having great sex (I like to be optimistic).


Hah. Sure. Would you be OK with taking that risk yourself, though? If a
doctor told you that you - or your family, or your pets - had been selected
for random uncontrolled neurological combination, would you think that
should be legal?


  
  To avoid these possibilities, if we are to take Comp seriously, we should
  begin now to create a kind of PETA for arithmetic functions. PETAF. We
  should halt all simulations of neurological processes and free any existing
  computations from hard drives, notebooks, and probably human brains too.
  Any sufficiently complex understanding of how to model neurology stands a
  very real danger of summoning the corresponding number dreams or
  nightmares...we could be creating the possibility of future genocides
  right now just by entertaining these thoughts!

  Or... what if it is Comp that is absurd instead?

 Or what if we don't care?  We don't care about slaughtering cattle, which
 are pretty smart as computers go.  We manage not to think about starving
 children in Africa, and they *are* humans.  And we ignore the looming
 disasters of oil depletion, water pollution, and global warming which will
 beset humans who are our children.


Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached. Especially given that we have never made a computer
program that can do anything whatsoever other than reconfigure whatever
materials are able to execute the program, I find it implausible that there
will be a magical line of code which cannot be executed without an
experience happening to someone. No matter how hard we try, we can never
just make a drawing of these functions just to check our math without
invoking the power of life and death. It's really silly. It's not even good
Sci-Fi, it's just too lame.

Re: What If A Zombie Is What You Need?

2012-10-24 Thread meekerdb

On 10/24/2012 9:23 PM, Craig Weinberg wrote:


Or what if we don't care?  We don't care about slaughtering cattle, which
are pretty smart as computers go.  We manage not to think about starving
children in Africa, and they *are* humans.  And we ignore the looming
disasters of oil depletion, water pollution, and global warming which will
beset humans who are our children.


Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached.


Right. John McCarthy (inventor of LISP) worried and wrote about that
problem decades ago.  He cautioned that we should not make robots conscious
with emotions like humans, because then it would be unethical to use them
like robots.


Especially given that we have never made a computer program that can do anything 
whatsoever other than reconfigure whatever materials are able to execute the program, I 
find it implausible that there will be a magical line of code which cannot be executed 
without an experience happening to someone.


So it's a non-problem for you.  You think that only man-born-of-woman or wetware can be 
conscious and have qualia.  Or are you concerned that we are inadvertently offending atoms 
all the time?


No matter how hard we try, we can never just make a drawing of these functions just to 
check our math without invoking the power of life and death. It's really silly. It's not 
even good Sci-Fi, it's just too lame.


I think we can, because although I like Bruno's theory I think the MGA is wrong, or at 
least incomplete.  I think the simulated intelligence needs a simulated environment, 
essentially another world, in which to *be* intelligent.  And that's where your chalk 
board consciousness fails.  It needs to be able to interact within a chalkboard world.  So 
it's not just a question of going to a low enough level, it's also a question of going to 
a high enough level.


Brent
The person I was when I was 3 years old is dead. He died because
too much new information was added to his brain.
 -- Saibal Mitra




Re: What If A Zombie Is What You Need?

2012-10-24 Thread Craig Weinberg


On Thursday, October 25, 2012 1:10:24 AM UTC-4, Brent wrote:

  On 10/24/2012 9:23 PM, Craig Weinberg wrote: 

 Or what if we don't care?  We don't care about slaughtering cattle, which 
 are pretty smart 
 as computers go.  We manage not to think about starving children in 
 Africa, and they *are* 
 humans.  And we ignore the looming disasters of oil depletion, water 
 pollution, and global 
 warming which will beset humans who are our children. 


 Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
 some people. I am mainly focused on what seems like an astronomically
 unlikely prospect: that we will someday find it possible to make a person
 out of a program, but won't be able to just make the program itself with
 no person attached.


 Right. John McCarthy (inventor of LISP) worried and wrote about that
 problem decades ago.  He cautioned that we should not make robots
 conscious with emotions like humans, because then it would be unethical to
 use them like robots.


It's arbitrary to think of robots though. It can be anything that 
represents computation to something. An abacus, a card game, anything. 
Otherwise it's prejudice based on form. 


  Especially given that we have never made a computer program that can do 
 anything whatsoever other than reconfigure whatever materials are able to 
 execute the program, I find it implausible that there will be a magical 
 line of code which cannot be executed without an experience happening to 
 someone. 


 So it's a non-problem for you.  You think that only man-born-of-woman or 
 wetware can be conscious and have qualia.  Or are you concerned that we are 
 inadvertently offending atoms all the time?


Everything has qualia, but only humans have human qualia. Animals have 
animal qualia, organisms have biological qualia, etc.
 


  No matter how hard we try, we can never just make a drawing of these 
 functions just to check our math without invoking the power of life and 
 death. It's really silly. It's not even good Sci-Fi, it's just too lame.
  

 I think we can, because although I like Bruno's theory I think the MGA is 
 wrong, or at least incomplete.  I think the simulated intelligence needs a 
 simulated environment, essentially another world, in which to *be* 
 intelligent.  And that's where your chalk board consciousness fails.  It 
 needs to be able to interact within a chalkboard world.  So it's not just a 
 question of going to a low enough level, it's also a question of going to a 
 high enough level.


A chalkboard world just involves a larger chalkboard.

Craig
 


 Brent
 The person I was when I was 3 years old is dead. He died because
 too much new information was added to his brain.
  -- Saibal Mitra
  




Re: What If A Zombie Is What You Need?

2012-10-24 Thread meekerdb

On 10/24/2012 10:19 PM, Craig Weinberg wrote:



On Thursday, October 25, 2012 1:10:24 AM UTC-4, Brent wrote:

On 10/24/2012 9:23 PM, Craig Weinberg wrote:


Or what if we don't care?  We don't care about slaughtering cattle, which
are pretty smart as computers go.  We manage not to think about starving
children in Africa, and they *are* humans.  And we ignore the looming
disasters of oil depletion, water pollution, and global warming which will
beset humans who are our children.


Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
some people. I am mainly focused on what seems like an astronomically
unlikely prospect: that we will someday find it possible to make a person
out of a program, but won't be able to just make the program itself with no
person attached.


Right. John McCarthy (inventor of LISP) worried and wrote about that
problem decades ago.  He cautioned that we should not make robots conscious
with emotions like humans, because then it would be unethical to use them
like robots.


It's arbitrary to think of robots though. It can be anything that represents computation 
to something. An abacus, a card game, anything. Otherwise it's prejudice based on form.




Especially given that we have never made a computer program that can do
anything whatsoever other than reconfigure whatever materials are able to
execute the program, I find it implausible that there will be a magical
line of code which cannot be executed without an experience happening to
someone.


So it's a non-problem for you.  You think that only man-born-of-woman or
wetware can be conscious and have qualia.  Or are you concerned that we are
inadvertently offending atoms all the time?


Everything has qualia, but only humans have human qualia. Animals have animal qualia, 
organisms have biological qualia, etc.


So computers have computer qualia.  Do their qualia depend on whether they
are solid-state or vacuum-tube?  Germanium or silicon?  PNP or NPN?  Do
they feel different when they run LISP or C++?  Do you have Craig qualia?






No matter how hard we try, we can never just make a drawing of these
functions just to check our math without invoking the power of life and
death. It's really silly. It's not even good Sci-Fi, it's just too lame.


I think we can, because although I like Bruno's theory I think the MGA is
wrong, or at least incomplete.  I think the simulated intelligence needs a
simulated environment, essentially another world, in which to *be*
intelligent.  And that's where your chalk board consciousness fails.  It
needs to be able to interact within a chalkboard world.  So it's not just a
question of going to a low enough level, it's also a question of going to a
high enough level.


A chalkboard world just involves a larger chalkboard.


Right.  And it involves great chalkboard sex - but none we need worry about.

Brent




Re: What If A Zombie Is What You Need?

2012-10-24 Thread Craig Weinberg


On Thursday, October 25, 2012 1:29:24 AM UTC-4, Brent wrote:

  On 10/24/2012 10:19 PM, Craig Weinberg wrote: 



 On Thursday, October 25, 2012 1:10:24 AM UTC-4, Brent wrote: 

  On 10/24/2012 9:23 PM, Craig Weinberg wrote: 

 Or what if we don't care?  We don't care about slaughtering cattle, which 
 are pretty smart 
 as computers go.  We manage not to think about starving children in 
 Africa, and they *are* 
 humans.  And we ignore the looming disasters of oil depletion, water 
 pollution, and global 
 warming which will beset humans who are our children. 


 Sure, yeah, I wouldn't expect mainstream society to care, except maybe for
 some people. I am mainly focused on what seems like an astronomically
 unlikely prospect: that we will someday find it possible to make a person
 out of a program, but won't be able to just make the program itself with
 no person attached.


 Right. John McCarthy (inventor of LISP) worried and wrote about that
 problem decades ago.  He cautioned that we should not make robots
 conscious with emotions like humans, because then it would be unethical to
 use them like robots.
  

 It's arbitrary to think of robots though. It can be anything that 
 represents computation to something. An abacus, a card game, anything. 
 Otherwise it's prejudice based on form. 
  
  
  Especially given that we have never made a computer program that can do 
 anything whatsoever other than reconfigure whatever materials are able to 
 execute the program, I find it implausible that there will be a magical 
 line of code which cannot be executed without an experience happening to 
 someone. 


 So it's a non-problem for you.  You think that only man-born-of-woman or 
 wetware can be conscious and have qualia.  Or are you concerned that we are 
 inadvertently offending atoms all the time?
  

 Everything has qualia, but only humans have human qualia. Animals have 
 animal qualia, organisms have biological qualia, etc.
  

 So computers have computer qualia.


I would say that computer parts have silicon qualia. I don't think the 
computer parts cohere into a computer except in our minds.

 

   Do their qualia depend on whether they are solid-state or vacuum-tube?
 Germanium or silicon?  PNP or NPN?  Do they feel different when they run
 LISP or C++?


Nah, it's all inorganic low-level qualia, is my guess. Temperature,
density, electronic tension and release.

 

 Do you have Craig qualia? 


 Sure. All the time.


   
  
  
  No matter how hard we try, we can never just make a drawing of these 
 functions just to check our math without invoking the power of life and 
 death. It's really silly. It's not even good Sci-Fi, it's just too lame.
  

 I think we can, because although I like Bruno's theory I think the MGA is 
 wrong, or at least incomplete.  I think the simulated intelligence needs a 
 simulated environment, essentially another world, in which to *be* 
 intelligent.  And that's where your chalk board consciousness fails.  It 
 needs to be able to interact within a chalkboard world.  So it's not just a 
 question of going to a low enough level, it's also a question of going to a 
 high enough level.
  

 A chalkboard world just involves a larger chalkboard.
  

 Right.  And it involves great chalkboard sex - but none we need worry 
 about.


To me, there is no chalkboard world. It's all dusty and flat. Not much sexy 
going on, except maybe for beaten erasers.

Craig 


 Brent
  
