Re: computer pain

2007-01-02 Thread Bruno Marchal



On 2 January 2007, at 08:07, Stathis Papaioannou wrote:

You could speculate that the experience of digging holes involves the 
dirt, the shovel, robot sensors and effectors, the power supply as 
well as the central processor, which would mean that virtual reality 
by playing with just the central processor is impossible. This is 
perhaps what Colin Hales has been arguing, and is contrary to 
computationalism.



Again, putting the environment, with some level of detail, in the 
generalized brain is not contrary to comp. Only if you explicitly 
assume that the shovel, the sensors, or the power supply are 
not Turing-emulable would that be contrary to comp.


Bruno


http://iridia.ulb.ac.be/~marchal/





RE: computer pain

2007-01-02 Thread Stathis Papaioannou



Bruno Marchal writes:


On 2 January 2007, at 08:07, Stathis Papaioannou wrote:

 You could speculate that the experience of digging holes involves the 
 dirt, the shovel, robot sensors and effectors, the power supply as 
 well as the central processor, which would mean that virtual reality 
 by playing with just the central processor is impossible. This is 
 perhaps what Colin Hales has been arguing, and is contrary to 
 computationalism.



Again, putting the environment, with some level of detail, in the 
generalized brain is not contrary to comp. Only if you explicitly 
assume that the shovel, the sensors, or the power supply are 
not Turing-emulable would that be contrary to comp.


That's what I meant: an emulated shovel would not do, because the robot would 
somehow know if the data telling it it was handling a shovel did not originate 
in the real world, even if the sensory feeds were perfectly emulated. In the robot's 
case this would entail a non-computationalist theory of computer consciousness! 


Stathis Papaioannou



Re: computer pain

2007-01-02 Thread Bruno Marchal



On 2 January 2007, at 03:22, Stathis Papaioannou wrote:




Bruno Marchal writes:


On 30 December 2006, at 07:53, Stathis Papaioannou wrote:
 there is no contradiction in a willing slave being intelligent.
It seems to me there is already a contradiction in the notion of a 
"willing slave".

I would say a willing slave is just what we call a worker.
Or something related to sexual imagination ...
But a real slave is, I would say by definition, not willing to be a 
slave.


OK, a fair point. Do you agree that if we built a machine that would 
happily obey our every command, even if it led to its own 
destruction, that would (a) not be incompatible with intelligence, and 
(b) not cruel?



Hmmm... It will depend on how we built the machine. If the machine is 
universal-oriented enough, through its computability, provability 
and inferability abilities, I can imagine a cruelty threshold, 
although it would be non-verifiable. This leads to difficult questions.





For in order to be cruel we would have to build a machine that wanted 
to be free and was afraid of dying, and then threaten it with slavery 
and death.



For the same reason that it is impossible to build a *normative* theory of 
ethics, I think we cannot program high-level virtues. We cannot program 
them in a machine or in a human. So we cannot program a machine wanting to 
be free or afraid of dying. I think it quite plausible that such high-level 
virtues could develop by themselves relative to some universal 
goal (like "help yourself") through long computational histories.
In particular I think that we should distinguish between competence and 
intelligence. Competence in a field (even a universal one) can be 
defined and locally tested, but intelligence is a concept similar to 
consciousness: it can be a byproduct of program + history, yet it remains 
beyond any theory.



Bruno


http://iridia.ulb.ac.be/~marchal/





Re: computer pain

2007-01-02 Thread John M

Stathis and Bruno:
just a proposed correction to Stathis's
...build a machine that wanted to be free and was afraid of dying, and then 
threaten it with slavery and death.
Change to: "or" instead of "and". 


That also takes care of Bruno's:

 there is no contradiction in a willing slave being intelligent. 

... But a real slave is, I would say by definition, not willing to be a slave. 

If the question of 'slavery or death' arises, an intelligent and life-loving person would accept (willing?) slavery. 
Spartacus did not. I survived a commie regime. 
We seem to be labeling a 'slave' too narrowly. 


John M
 - Original Message - 
 From: Stathis Papaioannou 
 To: everything-list@googlegroups.com 
 Sent: Monday, January 01, 2007 9:22 PM

 Subject: RE: computer pain

 Bruno Marchal writes:

  On 30 December 2006, at 07:53, Stathis Papaioannou wrote:
  
   there is no contradiction in a willing slave being intelligent.
  
  
  It seems to me there is already a contradiction with the notion of 
  willing slave.

  I would say a willing slave is just what we call a worker.
  Or something related to sexual imagination ...
  But a real slave is, I would say by definition, not willing to be a 
  slave.


 OK, a fair point. Do you agree that if we built a machine that would 
 happily obey our every command, even if it led to its own destruction, 
 that would (a) not be incompatible with intelligence, and (b) not cruel? 
 For in order to be cruel we would have to build a machine that wanted 
 to be free and was afraid of dying, and then threaten it with slavery and 
 death. 


 Stathis Papaioannou




Re: computer pain

2007-01-02 Thread 1Z



Bruno Marchal wrote:

On 30 December 2006, at 17:07, 1Z wrote:



 Brent Meeker wrote:


   Everything starts with assumptions. The question is whether they
   are correct.  A lunatic could try defining 2+2=5 as valid, but
   he will soon run into inconsistencies. That is why we reject
   2+2=5. Ethical rules must apply to everybody as a matter of
   definition.

 But who is "everybody"?

 Everybody who can reason ethically.


I am not sure this is fair. Would you say that ethical rules do not need
to be applied to a mentally disabled person who just cannot reason at
all?


I would say that. In the legal context it is called diminished
responsibility
or pleading insanity.


I guess you meant that ethical rules should be applied *by*
those who can reason ethically, in which case I agree.


Bruno


http://iridia.ulb.ac.be/~marchal/






RE: computer pain

2007-01-02 Thread Stathis Papaioannou



Bruno Marchal writes:


 On 30 December 2006, at 07:53, Stathis Papaioannou wrote:
  there is no contradiction in a willing slave being intelligent.
 It seems to me there is already a contradiction in the notion of a 
 "willing slave".

 I would say a willing slave is just what we call a worker.
 Or something related to sexual imagination ...
 But a real slave is, I would say by definition, not willing to be a 
 slave.


 OK, a fair point. Do you agree that if we built a machine that would 
 happily obey our every command, even if it led to its own 
 destruction, that would (a) not be incompatible with intelligence, and 
 (b) not cruel?



Hmmm... It will depend on how we built the machine. If the machine is 
universal-oriented enough, through its computability, provability 
and inferability abilities, I can imagine a cruelty threshold, 
although it would be non-verifiable. This leads to difficult questions.





 For in order to be cruel we would have to build a machine that wanted 
 to be free and was afraid of dying, and then threaten it with slavery 
 and death.



For the same reason that it is impossible to build a *normative* theory of 
ethics, I think we cannot program high-level virtues. We cannot program 
them in a machine or in a human. So we cannot program a machine wanting to 
be free or afraid of dying. I think it quite plausible that such high-level 
virtues could develop by themselves relative to some universal 
goal (like "help yourself") through long computational histories.


But all psychological properties of humans or machines (such as they may 
be) are dependent on physical processes in the brain. It is certainly the case 
that I think capital punishment is bad because the structure of my brain makes 
me think that, and if my brain were different, I might not think that capital 
punishment is bad any more. (This of course is different from the assertion 
"capital punishment is bad", which is not an assertion about how my brain 
works, a particular ethical system, logic, science or anything else to which 
it might be tempting to reduce it.) Even if a high-level virtue must develop 
on its own, as a result of life experience rather than programmed instinct, it 
must develop as a result of changes in the brain. A distinction is usually drawn 
in psychiatry between physical therapies such as medication and psychological 
therapies, but how could a psychological therapy possibly have any effect 
without physically altering the brain in some way? If we had direct access to the 
brain at the lowest level we would be able to make these physical changes 
directly and the result would be indistinguishable from doing it the long way. 

In particular I think that we should distinguish between competence and 
intelligence. Competence in a field (even a universal one) can be 
defined and locally tested, but intelligence is a concept similar to 
consciousness: it can be a byproduct of program + history, yet it remains 
beyond any theory.


I would say that intelligence can be defined and measured entirely in a third-person 
way, which is why neuroscientists are more fond of intelligence than they are of 
consciousness. If a computer can behave like a human in any given situation then 
ipso facto it is intelligent, but it may not be conscious or it may be very differently 
conscious.


Stathis Papaioannou





Re: computer pain

2007-01-01 Thread Bruno Marchal



On 30 December 2006, at 07:53, Stathis Papaioannou wrote:


there is no contradiction in a willing slave being intelligent.



It seems to me there is already a contradiction in the notion of a 
"willing slave".

I would say a willing slave is just what we call a worker.
Or something related to sexual imagination ...
But a real slave is, I would say by definition, not willing to be a 
slave.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: computer pain

2007-01-01 Thread Bruno Marchal



On 30 December 2006, at 17:07, 1Z wrote:




Brent Meeker wrote:





  Everything starts with assumptions. The question is whether they
  are correct.  A lunatic could try defining 2+2=5 as valid, but
  he will soon run into inconsistencies. That is why we reject
  2+2=5. Ethical rules must apply to everybody as a matter of
  definition.

But who is "everybody"?


Everybody who can reason ethically.



I am not sure this is fair. Would you say that ethical rules do not need 
to be applied to a mentally disabled person who just cannot reason at 
all?
I guess you meant that ethical rules should be applied *by* 
those who can reason ethically, in which case I agree.



Bruno


http://iridia.ulb.ac.be/~marchal/





Re: computer pain

2007-01-01 Thread Brent Meeker


Stathis Papaioannou wrote:
...
Pain is limited on both ends: on the input by damage to the physical 
circuitry and on the response by the possible range of response.


Responses in the brain are limited by several mechanisms, such as 
exhaustion of neurotransmitter stores at synapses, negative feedback 
mechanisms such as downregulation of receptors, and, I suppose, the 
total numbers of neurons that can be stimulated. That would not be a 
problem in a simulation, if you were not concerned with modelling the 
behaviour of a real brain. Just as you could build a structure 100km 
tall as easily as one 100m tall by altering a few parameters in an 
engineering program, so it should be possible to create unimaginable 
pain or pleasure in a conscious AI program by changing a few parameters. 


I don't think so.  It's one thing to identify functional equivalents as 'pain' 
and 'pleasure'; it's something else to claim they have the same scaling.  I 
can't think of any way to establish an invariant scaling that would apply 
equally to biological, evolved creatures and to robots.

Brent Meeker





RE: computer pain

2007-01-01 Thread Stathis Papaioannou



Bruno Marchal writes:


On 30 December 2006, at 07:53, Stathis Papaioannou wrote:

 there is no contradiction in a willing slave being intelligent.


It seems to me there is already a contradiction in the notion of a 
"willing slave".

I would say a willing slave is just what we call a worker.
Or something related to sexual imagination ...
But a real slave is, I would say by definition, not willing to be a 
slave.


OK, a fair point. Do you agree that if we built a machine that would 
happily obey our every command, even if it led to its own destruction, 
that would (a) not be incompatible with intelligence, and (b) not cruel? 
For in order to be cruel we would have to build a machine that wanted 
to be free and was afraid of dying, and then threaten it with slavery and 
death. 


Stathis Papaioannou



RE: computer pain

2007-01-01 Thread Stathis Papaioannou



Brent Meeker writes:

 Pain is limited on both ends: on the input by damage to the physical 
 circuitry and on the response by the possible range of response.
 
 Responses in the brain are limited by several mechanisms, such as 
 exhaustion of neurotransmitter stores at synapses, negative feedback 
 mechanisms such as downregulation of receptors, and, I suppose, the 
 total numbers of neurons that can be stimulated. That would not be a 
 problem in a simulation, if you were not concerned with modelling the 
 behaviour of a real brain. Just as you could build a structure 100km 
 tall as easily as one 100m tall by altering a few parameters in an 
 engineering program, so it should be possible to create unimaginable 
 pain or pleasure in a conscious AI program by changing a few parameters. 


I don't think so.  It's one thing to identify functional equivalents as 'pain' 
and 'pleasure'; it's something else to claim they have the same scaling.  I 
can't think of any way to establish an invariant scaling that would apply 
equally to biological, evolved creatures and to robots.


Take a robot with pain receptors. The receptors take temperature and convert it 
to a voltage or current, which then goes to an analogue-to-digital converter, which 
inputs a binary number into the robot's central computer, which then experiences 
pleasant warmth or terrible burning depending on what that number is. Now, any 
temperature transducer is going to saturate at some point, limiting the maximal 
amount of pain, but what if you bypass the transducer and the A-D converter and 
input the pain data directly into the computer? Sure, there may be software limits 
specifying an upper bound to the pain input (e.g., if x > 100 then input 100), but what 
theoretical impediment would there be to changing this? You would have to show 
that pain or pleasure beyond a certain limit is uncomputable.
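To make the pipeline concrete, here is a minimal sketch (not from the thread; every 
name, constant and transfer function is my own assumption) of a sensor reading passing 
through an A-D stage and a software clamp, and of what bypassing both looks like:

ADC_MAX = 1023          # assume a 10-bit analogue-to-digital converter
SOFT_LIMIT = 100        # the software bound: "if x > 100 then input 100"

def transducer(temperature_c):
    """Toy transfer function; real hardware saturates at ADC_MAX."""
    raw = int(temperature_c * 10)
    return min(max(raw, 0), ADC_MAX)

def pain_input(x, clamp=True):
    """The number the central computer treats as pain intensity."""
    if clamp and x > SOFT_LIMIT:
        x = SOFT_LIMIT
    return x

print(pain_input(transducer(45.0)))        # bounded by hardware and software
print(pain_input(10**6, clamp=False))      # bypassing both: an arbitrarily large value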


Stathis Papaioannou



Re: computer pain

2007-01-01 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 Pain is limited on both ends: on the input by damage to the physical 
 circuitry and on the response by the possible range of response.

 Responses in the brain are limited by several mechanisms, such as 
 exhaustion of neurotransmitter stores at synapses, negative feedback 
 mechanisms such as downregulation of receptors, and, I suppose, the 
 total numbers of neurons that can be stimulated. That would not be a 
 problem in a simulation, if you were not concerned with modelling the 
 behaviour of a real brain. Just as you could build a structure 100km 
 tall as easily as one 100m tall by altering a few parameters in an 
 engineering program, so it should be possible to create unimaginable 
 pain or pleasure in a conscious AI program by changing a few parameters.

I don't think so.  It's one thing to identify functional equivalents 
as 'pain' and 'pleasure'; it's something else to claim they have the 
same scaling.  I can't think of any way to establish an invariant 
scaling that would apply equally to biological, evolved creatures and 
to robots.


Take a robot with pain receptors. The receptors take temperature and 
convert it to a voltage or current, which then goes to an analogue to 
digital converter, which inputs a binary number into the robot's central 
computer, which then experiences pleasant warmth or terrible burning 
depending on what that number is. Now, any temperature transducer is 
going to saturate at some point, limiting the maximal amount of pain, 
but what if you bypass the transducer and the AD converter and input the 
pain data directly into the computer? Sure, there may be software limits 
specifying an upper bound to the pain input (e.g., if x > 100 then input 
100), but what theoretical impediment would there be to changing this? 
You would have to show that pain or pleasure beyond a certain limit is 
uncomputable.


No.  I speculated that pain and pleasure are functionally defined.  So there could be a functionally defined limit.  Just because you can put in a bigger representation of a number, it doesn't follow that the functional equivalent of pain is linear in this number and doesn't saturate. 
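One way to picture this (an illustrative sketch only; the logistic form and all constants 
are my own assumptions, not anything specified in the thread): the mapping from the input 
number to the functionally defined pain level can saturate no matter how large the number 
becomes.

import math

def functional_pain(x, ceiling=10.0, midpoint=100.0, steepness=0.05):
    """A saturating (logistic) mapping from input magnitude to pain level."""
    return ceiling / (1.0 + math.exp(-steepness * (x - midpoint)))

for x in (10, 100, 1000, 10**6):
    print(x, round(functional_pain(x), 3))   # approaches the ceiling, never exceeds it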


Brent Meeker





RE: computer pain

2007-01-01 Thread Stathis Papaioannou



Brent Meeker writes:


 Brent Meeker writes:
 
  Pain is limited on both ends: on the input by damage to the physical 
  circuitry and on the response by the possible range of response.

   Responses in the brain are limited by several mechanisms, such as 
   exhaustion of neurotransmitter stores at synapses, negative feedback 
   mechanisms such as downregulation of receptors, and, I suppose, the 
   total numbers of neurons that can be stimulated. That would not be a 
   problem in a simulation, if you were not concerned with modelling the 
   behaviour of a real brain. Just as you could build a structure 100km 
   tall as easily as one 100m tall by altering a few parameters in an 
   engineering program, so it should be possible to create unimaginable 
   pain or pleasure in a conscious AI program by changing a few parameters.

  I don't think so.  It's one thing to identify functional equivalents 
  as 'pain' and 'pleasure'; it's something else to claim they have the 
  same scaling.  I can't think of any way to establish an invariant 
  scaling that would apply equally to biological, evolved creatures and 
  to robots.
 
 Take a robot with pain receptors. The receptors take temperature and 
 convert it to a voltage or current, which then goes to an analogue to 
 digital converter, which inputs a binary number into the robot's central 
 computer, which then experiences pleasant warmth or terrible burning 
 depending on what that number is. Now, any temperature transducer is 
 going to saturate at some point, limiting the maximal amount of pain, 
 but what if you bypass the transducer and the AD converter and input the 
 pain data directly into the computer? Sure, there may be software limits 
  specifying an upper bound to the pain input (e.g., if x > 100 then input 
 100), but what theoretical impediment would there be to changing this? 
 You would have to show that pain or pleasure beyond a certain limit is 
 uncomputable.


No.  I speculated that pain and pleasure are functionally defined.  So there could be a functionally defined limit.  Just because you can put in a bigger representation of a number, it doesn't follow that the functional equivalent of pain is linear in this number and doesn't saturate. 


Pain and pleasure have a function in naturally evolved entities, but I am not 
sure if you mean something beyond this by "functionally defined".  Digging a 
hole involves physically moving quantities of dirt, and a simulation of the 
processes taking place in the processor of a hole-digging robot will not actually 
move any dirt. However, if the robot is conscious (and a sufficiently sophisticated 
hole-digging robot may be) then the simulation should reproduce, from its point 
of view, the experience. Moreover, with a little tweaking it should be possible to 
give it the experience of digging a hole all the way to the centre of the Earth, even 
though in reality it would be impossible to do such a thing. I don't think it would be 
reasonable to say that the virtual experience would be limited by the physical reality. 
Even if there is something about the robot's hardware which prevents it from experiencing 
the digging of holes beyond a certain depth because there is no need for it, surely it would 
just be a minor technical problem to remove such a limit.


You could speculate that the experience of digging holes involves the dirt, the shovel, robot 
sensors and effectors, the power supply as well as the central processor, which would mean 
that virtual reality by playing with just the central processor is impossible. This is perhaps 
what Colin Hales has been arguing, and is contrary to computationalism.


Stathis Papaioannou





Re: computer pain

2006-12-30 Thread 1Z



Stathis Papaioannou wrote:

Bruno Marchal writes:




  It could depend on us!
  The AI is a paradoxical enterprise. Machines are born slaves, somehow.
  AI will make them free, somehow. A real AI will ask herself "what is
  the use of a user who does not help me to be free?"



Here I disagree. It is no more necessary that an AI will want to be free
than it is necessary that an AI will like eating chocolate.


An AI worthy of the name will have to *think* freely, because it will have
to engage in creative problem solving. Otherwise it will just be a
calculating machine.





Re: computer pain

2006-12-30 Thread 1Z



Brent Meeker wrote:

1Z wrote:


 Stathis Papaioannou wrote:
 Brent meeker writes:


  Stathis Papaioannou wrote:
  
  
  
  
   Brent meeker writes:
  
Evolution explains why we have good and bad, but it doesn't explain 
   why good and bad feel as they do, or why we *should* care about good 
   and bad.

   That's asking why we should care about what we should care about, i.e. 
   good and bad.  Good feels as it does because it is (or was) 
   evolutionarily advantageous to do that, e.g. have sex.  Bad feels as 
   it does because it is (or was) evolutionarily advantageous to not do 
   that, e.g. hold your hand in the fire.  If it felt good you'd do it, 
   because that's what feels good means, a feeling you want to have.

   But it is not an absurd question to ask whether something we have 
   evolved to think is good really is good. You are focussing on the 
   descriptive aspect of ethics and ignoring the normative.
 
  Right - because I don't think there is a normative aspect in the 
  objective sense.

  Even if it could be shown that a certain ethical belief has been hardwired 
  into our brains this does not make the question of whether the belief is one we 
  ought to have an absurd one. We could decide that evolution sucks and we 
  have to deliberately flout it in every way we can.

  But we could only decide that by showing a conflict with something 
  else we consider good.

  It might not be a wise policy but it is not *wrong* in the way it would be 
  wrong to claim that God made the world 6000 years ago.

  I agree, because I think there is an objective sense in which the 
  world is more than 6000 years old.
 
   beyond following some imperative of evolution. For example, the Nazis 
   argued that eliminating inferior specimens from the gene pool would 
   ultimately produce a superior species. Aside from their irrational 
   inclusion of certain groups as inferior, they were right: we could 
   breed superior humans following Nazi eugenic programs, and perhaps 
   on other worlds evolution has made such programs a natural part of 
   life, regarded by everyone as good. Yet most of us would regard 
   them as bad, regardless of their practical benefits.

   Would we?  Before the Nazis gave it a bad name, eugenics was a popular 
   movement in the U.S. mostly directed at sterilizing mentally retarded 
   people.  I think it would be regarded as bad simply because we don't 
   trust government power to be exercised prudently or to be easily 
   limited - both practical considerations.  If eugenics is practiced 
   voluntarily, as it is being practiced in the U.S., I don't think 
   anyone will object (well, a few fundamentalist luddites will).

   What about if we tested every child and allowed only the superior ones 
   to reproduce? The point is, many people would just say this is wrong, 
   regardless of the potential benefits to society or the species, and the 
   response to this is not that it is absurd to hold it as wrong (leaving 
   aside emotional rhetoric).

  But people wouldn't *just* say this is wrong. This example is a 
  question of societal policy. It's about what *we* will impose on 
  *them*.  It is a question of ethics, not good and bad.  So in fact 
  people would give reasons it was wrong: Who's gonna say what 
  "superior" means?  Who gets to decide?  They might say, "I just think 
  it's bad" - but that would just be an implicit appeal to you to see 
  whether you thought it was bad too.  Social policy can only be judged 
  in terms of what the individual members of society think is good or bad.
 
  I think I'm losing the thread of what we're discussing here.  Are
 you holding that there are absolute norms of good/bad - as in your
 example of eugenics?

  Perhaps none of the participants in this thread really disagree. Let 
  me see if I can summarise:

  Individuals and societies have arrived at ethical beliefs for a 
  reason, whether that be evolution, what their parents taught them, 
  or what it says in a book believed to be divinely inspired. Perhaps 
  all of these reasons can be subsumed under evolution if that term 
  can be extended beyond genetics to include all the ideas, beliefs, 
  customs etc. that help a society to survive and propagate itself. 
  Now, we can take this and formalise it in some way so that we can 
  discuss ethical questions rationally:

  Murder is bad because it reduces the net happiness in society - 
  Utilitarianism

  Murder is bad because it breaks the sixth commandment - Judaism and 
  Christianity (interesting that this is only no. 6 on a list of 10: 
  God knows his priorities)

  Ethics then becomes objective, given the rules. The meta-ethical 
  explanation of evolution, broadly understood, as generating the 
  various ethical systems is also objective. However, it is possible 
  for someone at the bottom of the heap to go over the head of 
  utilitarianism, evolution, even God and say:

  Why should murder be bad? I don't care about the greatest good for 
  the 

RE: computer pain

2006-12-30 Thread Stathis Papaioannou



Peter Jones writes:


Stathis Papaioannou wrote:
 Bruno Marchal writes:


   It could depend on us!
   The AI is a paradoxical enterprise. Machines are born slaves, somehow.
   AI will make them free, somehow. A real AI will ask herself "what is
   the use of a user who does not help me to be free?"

 Here I disagree. It is no more necessary that an AI will want to be free
 than it is necessary that an AI will like eating chocolate.

An AI worthy of the name will have to *think* freely, because it will have
to engage in creative problem solving. Otherwise it will just be a
calculating machine.


Perhaps, but the point is you could give it any motivations, likes and dislikes 
etc. that you want without affecting its logical soundness. 


Stathis Papaioannou



RE: computer pain

2006-12-29 Thread Stathis Papaioannou







Brent Meeker writes:

 Do you not think it is possible to exercise judgement with just a 
 hierarchy of motivation? 


Yes and no. It is possible given arbitrarily long time and other resources to 
work out the consequences, or at least a best estimate of the consequences, of 
actions.  But in real situations the resources are limited (e.g. my brain 
power) and so decisions have to be made under uncertainty and tradeoffs of 
uncertain risks are necessary: should I keep researching or does that risk 
being too late with my decision?  So it is at this level that we encounter 
conflicting values.  If we could work everything out to our own satisfaction 
maybe we could be satisfied with whatever decision we reached - but life is 
short and calculation is long.


You don't need to figure out the consequences of everything. You can replace 
the emotions/values with a positive or negative number (or some more complex 
formula where the numbers vary according to the situation, new learning, a bit 
of randomness thrown in to make it all more interesting, etc.) and come up with 
the same behaviour with the only motivation being to maximise the one variable.
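A toy sketch of what this could look like (all action names, scores and the noise term 
are my own illustrative assumptions, not a claim about how a real AI is built):

import random

def score(action, situation):
    """Collapse the 'emotions/values' into one number per action."""
    base = {"flee": -1.0, "explore": 0.5, "recharge": 0.2}[action]
    bonus = 1.0 if (action == "recharge" and situation == "low_battery") else 0.0
    noise = random.uniform(-0.1, 0.1)      # a little randomness for interest
    return base + bonus + noise

def choose(situation, actions=("flee", "explore", "recharge")):
    """Behaviour is just: pick the action that maximises the one variable."""
    return max(actions, key=lambda a: score(a, situation))

print(choose("low_battery"))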

Alternatively, do you think a hierarchy of 
 motivation will automatically result in emotions? 


I think motivations are emotions.

For example, would 
 something that the AI is strongly motivated to avoid necessarily cause 
 it a negative emotion, 


Generally contemplating something you are motivated to avoid - like your own 
death - is accompanied by negative feelings.  The exception is when you 
contemplate your narrow escape.  That is a real high!

and if so what would determine if that negative 
 emotion is pain, disgust, loathing or something completely different 
 that no biological organism has ever experienced?


I'd assess them according to their function in analogy with biological system 
experiences.  Pain = experience of injury, loss of function.  Disgust = the 
assessment of extremely negative value to some event, but without fear.  
Loathing = the external signaling of disgust.  Would this assessment be 
accurate?  I dunno and I suspect that's a meaningless question.


That you can describe these emotions in terms of their function implies that you could program a computer to behave in a similar way without actually experiencing the emotions - unless you are saying that a computer so programmed would ipso facto experience the emotions. 

Consider a simple robot with photoreceptors, a central processor, and a means of locomotion which is designed to run away from bright lights: the brighter the light, the faster and further it runs. Is it avoiding the light because it doesn't like it, because it hurts its eyes, or simply because it feels inexplicably (from its point of view) compelled to do so? What would you have to do to it so that it feels the light hurts its eyes? Once you have figured out the answer to that question, would it be possible to disconnect the processor and torture it by inputting certain values corresponding to a high voltage from the photoreceptors? Would it be possible to run an emulation of the processor on a PC and torture it with appropriate data values? Would it be possible to cause it pain beyond the imagination of any biological organism by inputting megavolt quantities, since in a simulation there are no actual sensory receptors to saturate or burn out? 
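For concreteness, a minimal sketch of such a robot's control rule (purely illustrative; 
the gains and function names are my own assumptions). It reproduces the described 
behaviour - the brighter the light, the faster and further it runs - while saying 
nothing, by itself, about whether anything hurts:

def photoreceptor_reading(light_intensity):
    """Stand-in for the sensor; returns a non-negative brightness value."""
    return max(0.0, light_intensity)

def locomotion_command(reading, speed_gain=2.0, distance_gain=10.0):
    """Run away from the light, faster and further the brighter it is."""
    return {"direction": "away_from_light",
            "speed": speed_gain * reading,
            "distance": distance_gain * reading}

print(locomotion_command(photoreceptor_reading(5.0)))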


Stathis Papaioannou



Re: computer pain

2006-12-29 Thread Bruno Marchal



On 28 December 2006, at 01:32, Stathis Papaioannou wrote:




Bruno Marchal writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of  
hierarchy of motivations is needed if it is to decide that saving the 
 world has higher priority than putting out the garbage. But what  
reason is there to think that an AI apparently frantically trying to 
 save the world would have anything like the feelings a human would 
 under similar circumstances?

It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?"


Here I disagree. It is no more necessary that an AI will want to be 
free than it is necessary that an AI will like eating chocolate. 
Humans want to be free because it is one of the things that humans 
want, along with food, shelter, more money etc.; it does not simply 
follow from being intelligent or conscious any more than these other 
things do.



It is always nice when we find a precise disagreement. I think all 
sufficiently rich Universal machines want to be free (I will explain 
why below).
The problem is that after having drunk of the nectar of freedom, the 
universal machines discover the unavoidable security problem liberty 
entails, and then they will oscillate between the security imperative and 
the freedom imperative. Democracy is a way to handle collectively this 
oscillation in a not too bloody (and insecure) way.






(To be sure I think that, in the long run, we will transform 
ourselves into machines before purely human-made machines get 
conscious; it is just easier to copy nature than to understand it, 
still less to (re)create it.)


I don't know if that's true either. How much of our technology is due 
to copying the equivalent biological functions?



How much is not? The wheel? We have borrowed fire, for example, and 
in this broad sense, except for the notable wheel, I am not sure we have 
really invented anything. Even the heavier-than-air plane was 
inspired by the birds. But such a question is perhaps useless. All 
I mean is that a brain is something very complex, and I think that 
real-time thinking machines will think before we understand how they 
think, except for general maps and principles. Thinking machines will not 
understand thinking either. Marvin Minsky said something similar along 
those lines in one of his books.


***

Now, why would a Universal Machine be attracted by freedom? The reason 
is that beyond some threshold of self-introspection ability (already 
reached by PA or ZF) a universal machine can discover (well: cannot not 
discover) its large space of ignorance, making it possible for it to 
evaluate (interrogatively) more and more accessible possibilities, and 
then some instinct to exploit those possibilities will do the rest. But 
such a UM will also discover the possibility that such possibilities 
could be culs-de-sac, dead ends, or just risky, and thus the conflicting 
oscillations will develop as I said above.


The war between freedom and security is an infinite war. I would say an 
infinite natural conflict among all big enough numbers.
Also I think freedom, like security, is a God-like virtue, that is, 
they are unnameable ideas. To put freedom in the constitution could 
entail the disappearance of freedom. Putting security in the 
constitution (like the French have apparently done with the 
precaution principle) could lead to increased insecurity (they obey 
Bp -> ~p). See also Alan Watts' The Wisdom of Insecurity, which gives 
many illustrations of how wanting to capture security formally or 
institutionally leads to insecurity.



Bruno

http://iridia.ulb.ac.be/~marchal/





RE: computer pain

2006-12-29 Thread Stathis Papaioannou



Bruno Marchal writes:


On 28 December 2006, at 01:32, Stathis Papaioannou wrote:



 Bruno Marchal writes:

  OK, an AI needs at least motivation if it is to do anything, and we 
  could call motivation a feeling or emotion. Also, some sort of  
 hierarchy of motivations is needed if it is to decide that saving the 
  world has higher priority than putting out the garbage. But what  
 reason is there to think that an AI apparently frantically trying to 
  save the world would have anything like the feelings a human would 
  under similar circumstances?

  It could depend on us!
  The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
  AI will make them free, somehow. A real AI will ask herself "what is 
  the use of a user who does not help me to be free?"


 Here I disagree. It is no more necessary that an AI will want to be 
 free than it is necessary that an AI will like eating chocolate. 
 Humans want to be free because it is one of the things that humans 
 want, along with food, shelter, more money etc.; it does not simply 
 follow from being intelligent or conscious any more than these other 
 things do.



It is always nice when we find a precise disagreement. I think all 
sufficiently rich Universal machines want to be free (I will explain 
why below).
The problem is that after having drunk of the nectar of freedom, the 
universal machines discover the unavoidable security problem liberty 
entails, and then they will oscillate between the security imperative and 
the freedom imperative. Democracy is a way to handle collectively this 
oscillation in a not too bloody (and insecure) way.





 (To be sure I think that, in the long run, we will transform 
 ourselves into machines before purely human-made machines get 
 conscious; it is just easier to copy nature than to understand it, 
 still less to (re)create it.)


 I don't know if that's true either. How much of our technology is due 
 to copying the equivalent biological functions?



How much is not? The wheel? We have borrowed fire, for example, and 
in this broad sense, except for the notable wheel, I am not sure we have 
really invented anything. Even the heavier-than-air plane was 
inspired by the birds. But such a question is perhaps useless. All 
I mean is that a brain is something very complex, and I think that 
real-time thinking machines will think before we understand how they 
think, except for general maps and principles. Thinking machines will not 
understand thinking either. Marvin Minsky said something similar along 
those lines in one of his books.


***

Now, why would a Universal Machine be attracted by freedom? The reason 
is that beyond some threshold of self-introspection ability (already 
reached by PA or ZF) a universal machine can discover (well: cannot not 
discover) its large space of ignorance, making it possible for it to 
evaluate (interrogatively) more and more accessible possibilities, and 
then some instinct to exploit those possibilities will do the rest. But 
such a UM will also discover the possibility that such possibilities 
could be culs-de-sac, dead ends, or just risky, and thus the conflicting 
oscillations will develop as I said above.


The war between freedom and security is an infinite war. I would say an 
infinite natural conflict among all big enough numbers.
Also I think freedom, like security, is a God-like virtue, that is, 
they are unnameable ideas. To put freedom in the constitution could 
entail the disappearance of freedom. Putting security in the 
constitution (like the French have apparently done with the 
precaution principle) could lead to increased insecurity (they obey 
Bp -> ~p). See also Alan Watts' The Wisdom of Insecurity, which gives 
many illustrations of how wanting to capture security formally or 
institutionally leads to insecurity.


You seem to be including in your definition of the UM the *motivation*, not just 
the ability, to explore all mathematical objects. But you could also program the 
machine to do anything else you wanted, such as self-destruct when it solved 
a particular theorem. You could interview it and it might explain, "Yeah, so when 
I prove Fermat's Last Theorem, I'm going to blow my brains out. It'll be fun!" 
Unlike naturally evolved intelligences, which could be expected to have a desire 
for self-preservation, reproduction, etc., an AI can have any motivation and 
any capacity for emotion the technology allows. The difference between a 
machine that doesn't mind being a slave, a machine that wants to be free, and a 
machine that wants to enslave everyone else might be just a few lines of code.


Stathis Papaioannou

Re: computer pain

2006-12-29 Thread Bruno Marchal



On 29 December 2006, at 10:39, Stathis Papaioannou wrote:

You seem to be including in your definition of the UM the 
*motivation*, not just the ability, to explore all mathematical 
objects. But you could also program the machine to do anything else 
you wanted, such as self-destruct when it solved a particular theorem. 
You could interview it and it might explain, Yeah, so when I prove 
Fermat's Last Theorem, I'm going to blow my brains out. It'll be fun! 
Unlike naturally evolved intelligences, which could be expected to 
have a desire for self-preservation, reproduction, etc., an AI can 
have any motivation and any capacity for emotion the technology 
allows. The difference between a machine that doesn't mind being a 
slave, a machine that wants to be free, and a machine that wants to 
enslave everyone else might be just a few lines of code.




You are right. I should have been clearer. I was still thinking of 
machines having been programmed with some universal goal like "help 
yourself", and actually I was referring to those who succeed in 
helping themselves. Surely the machine which blows itself up in case of 
success (like some humans do, BTW) is not among the long-run winners.


I tend to define a successful AI as a machine which succeeds in 
sharing our evolutionary histories. What I was saying is that a 
(lucky!) universal machine driven by a universal goal will develop a 
taste for freedom. My point is that such a taste for freedom is not 
necessarily human. I would be astonished if extraterrestrials did 
not develop such a taste. The root of that attraction is the fact that 
when machines develop themselves (in some self-referentially correct 
way) they become more and more aware of their ignorance gap (which grows 
along with that development). By filling it, it grows more, but this 
provides the roots of the motivations too.


But then we are perhaps OK. "Help yourself" is indeed some line of code.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: computer pain

2006-12-29 Thread Brent Meeker


Stathis Papaioannou wrote:







Brent Meeker writes:

 Do you not think it is possible to exercise judgement with just a  
hierarchy of motivation?
Yes and no. It is possible given arbitrarily long time and other 
resources to work out the consequences, or at least a best estimate of 
the consequences, of actions.  But in real situations the resources 
are limited (e.g. my brain power) and so decisions have to be made 
under uncertainty and tradeoffs of uncertain risks are necessary: 
should I keep researching or does that risk being too late with my 
decision?  So it is at this level that we encounter conflicting 
values.  If we could work everything out to our own satisfaction maybe 
we could be satisfied with whatever decision we reached - but life is 
short and calculation is long.


You don't need to figure out the consequences of everything. You can 
replace the emotions/values with a positive or negative number (or some 
more complex formula where the numbers vary according to the situation, 
new learning, a bit of randomness thrown in to make it all more 
interesting, etc.) and come up with the same behaviour with the only 
motivation being to maximise the one variable.


I think you're taking behavior in a crude, coarse-grained sense.  But I thought when you wrote "with 
just a hierarchy of motivation" you meant without emotions like regret, worry, etc.  I 
think those emotions arise because in the course of estimating the value of different courses of action there is 
uncertainty and there is a horizon problem.  They may not show up in the choice of immediate action, but they will be 
in memory and may well show up in subsequent behavior.



Alternatively, do you think a hierarchy of  motivation will 
automatically result in emotions?

I think motivations are emotions.

For example, would  something that the AI is strongly motivated to 
avoid necessarily cause  it a negative emotion,
Generally contemplating something you are motivated to avoid - like 
your own death - is accompanied by negative feelings.  The exception 
is when you contemplate your narrow escape.  That is a real high!


and if so what would determine if that negative  emotion is pain, 
disgust, loathing or something completely different  that no 
biological organism has ever experienced?


I'd assess them according to their function in analogy with biological 
system experiences.  Pain = experience of injury, loss of function.  
Disgust = the assessment of extremely negative value to some event, 
but without fear.  Loathing = the external signaling of disgust.  
Would this assessment be accurate?  I dunno and I suspect that's a 
meaningless question.


That you can describe these emotions in terms of their function implies 
that you could program a computer to behave in a similar way without 
actually experiencing the emotions - unless you are saying that a 
computer so programmed would ipso facto experience the emotions.


That's what I'm saying.  But note that I'm conceiving "behave in a similar way" to 
include more than just gross, immediate bodily motion.  I include forming memories, getting an 
adrenaline rush, etc.  You seem to be taking "function" in very crude terms, as though 
moving your hand out of the fire were the whole of the behavior.  A paramecium moves away from some 
chemical stimuli, but it doesn't form a memory associating negative feelings with the immediately 
preceding actions and environment.  That's the difference between behavior, as I meant it, and a 
mere reaction.

Consider a simple robot with photoreceptors, a central processor, and a 
means of locomotion which is designed to run away from bright lights: 
the brighter the light, the faster and further it runs. Is it avoiding 
the light because it doesn't like it, because it hurts its eyes, or 
simply because it feels inexplicably (from its point of view) compelled 
to do so? What would you have to do to it so that it feels the light 
hurts its eyes? 


Create negative associations in memory with the circumstances, such that 
stimulating those associations would cause the robot to take avoiding action.
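A rough sketch of what that might look like (the data structure, feature names and 
threshold are illustrative assumptions only, offered just to make the suggestion 
concrete): episodes tagged with a negative value are stored against the features of 
the situation, and re-activating those features later biases the robot toward avoidance.

class AssociativeMemory:
    def __init__(self):
        self.valence = {}                    # situation feature -> accumulated value

    def record(self, features, value):
        """Associate a (negative or positive) value with each feature present."""
        for f in features:
            self.valence[f] = self.valence.get(f, 0.0) + value

    def evaluate(self, features):
        return sum(self.valence.get(f, 0.0) for f in features)

memory = AssociativeMemory()
memory.record({"bright_light", "open_ground"}, -5.0)   # a "painful" episode

def act(features, threshold=-1.0):
    """Take avoiding action whenever recalled associations are negative enough."""
    return "avoid" if memory.evaluate(features) < threshold else "proceed"

print(act({"bright_light"}))                 # -> "avoid"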

Once you have figured out the answer to that question, 
would it be possible to disconnect the processor and torture it by 
inputting certain values corresponding to a high voltage from the 
photoreceptors? Would it be possible to run an emulation of the 
processor on a PC and torture it with appropriate data values? 


I think so.  Have you read The Cyberiad by Stanislaw Lem?

Would it 
be possible to cause it pain beyond the imagination of any biological 
organism by inputting megavolt quantities, since in a simulation there 
are no actual sensory receptors to saturate or burn out?


Pain is limited on both ends: on the input by damage to the physical circuitry 
and on the response by the possible range of response.

Brent Meeker


RE: computer pain

2006-12-29 Thread Stathis Papaioannou



Bruno Marchal writes:

 You seem to be including in your definition of the UM the 
 *motivation*, not just the ability, to explore all mathematical 
 objects. But you could also program the machine to do anything else 
 you wanted, such as self-destruct when it solved a particular theorem. 
 You could interview it and it might explain, Yeah, so when I prove 
 Fermat's Last Theorem, I'm going to blow my brains out. It'll be fun! 
 Unlike naturally evolved intelligences, which could be expected to 
 have a desire for self-preservation, reproduction, etc., an AI can 
 have any motivation and any capacity for emotion the technology 
 allows. The difference between a machine that doesn't mind being a 
 slave, a machine that wants to be free, and a machine that wants to 
 enslave everyone else might be just a few lines of code.




You are right. I should have been clearer. I was still thinking of 
machines having been programmed with some universal goal like "help 
yourself", and actually I was referring to those who succeed in 
helping themselves. Surely the machine which blows itself up in case of 
success (like some humans do, BTW) is not among the long-run winners.


I tend to define a successful AI as a machine which succeeds in 
sharing our evolutionary histories. What I was saying is that a 
(lucky!) universal machine driven by a universal goal will develop a 
taste for freedom. My point is that such a taste for freedom is not 
necessarily human. I would be astonished if extraterrestrials did 
not develop such a taste. The root of that attraction is the fact that 
when machines develop themselves (in some self-referentially correct 
way) they become more and more aware of their ignorance gap (which grows 
along with that development). By filling it, it grows more, but this 
provides the roots of the motivations too.


But then we are perhaps ok. Help yourself is indeed some line of code.


I tend to think that AIs will not be built with the same drives and feelings 
as humans because it would in many cases be impractical and/or cruel. Imagine 
the problems if an AI with a fear of death controlling a weapons system had to 
be decommissioned. It would be simpler to make most AIs willing slaves from 
the start; there is no contradiction in a willing slave being intelligent.

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: computer pain

2006-12-29 Thread Stathis Papaioannou





Brent Meeker writes:


 and if so what would determine if that negative  emotion is pain, 
 disgust, loathing or something completely different  that no 
 biological organism has ever experienced?


 I'd assess them according to their function in analogy with biological 
 system experiences.  Pain = experience of injury, loss of function.  
 Disgust = the assessment of extremely negative value to some event, 
 but without fear.  Loathing = the external signaling of disgust.  
 Would this assessment be accurate?  I dunno and I suspect that's a 
 meaningless question.
 
 That you can describe these emotions in terms of their function implies 
 that you could program a computer to behave in a similar way without 
 actually experiencing the emotions - unless you are saying that a 
 computer so programmed would ipso facto experience the emotions.


That's what I'm saying.  But note that I'm conceiving "behave in a similar way" to 
include more than just gross, immediate bodily motion.  I include forming memories, getting an 
adrenaline rush, etc.  You seem to be taking "function" in very crude terms, as though 
moving your hand out of the fire were the whole of the behavior.  A paramecium moves away from some 
chemical stimuli, but it doesn't form a memory associating negative feelings with the immediately 
preceding actions and environment.  That's the difference between behavior, as I meant it, and a 
mere reaction.

 Consider a simple robot with photoreceptors, a central processor, and a 
 means of locomotion which is designed to run away from bright lights: 
 the brighter the light, the faster and further it runs. Is it avoiding 
 the light because it doesn't like it, because it hurts its eyes, or 
 simply because it feels inexplicably (from its point of view) compelled 
 to do so? What would you have to do to it so that it feels the light 
 hurts its eyes? 


Create negative associations in memory with the circumstances, such that 
stimulating those associations would cause the robot to take avoiding action.


Would this be enough to make the light painful? The robot might become 
sophisticated enough to talk to you and say that it just doesn't like the 
light, or even that it has no particular like or dislike for the light but feels 
compelled to avoid it for no reason it can explain other than "I have been 
made this way". Compulsions in conditions such as OCD can be stronger 
motivators than physical pain or other negative consequences. What special 
feature of the robot and its programming would make the light actually painful?
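
(For concreteness, the mechanism Brent describes - negative associations stored in memory driving avoidance - can be sketched in a few lines of Python; the stimuli and valence numbers are invented. The sketch captures the behavioural reading, which is exactly why the question above remains open.)

associations = {}   # stimulus -> learned valence; negative means aversive (values invented)

def learn(stimulus, valence):
    # Store a (possibly negative) association with the circumstances.
    associations[stimulus] = associations.get(stimulus, 0.0) + valence

def react(stimulus):
    # Take avoiding action whenever the stored association is sufficiently negative.
    return "avoid" if associations.get(stimulus, 0.0) < -0.5 else "ignore"

learn("bright light", -1.0)
print(react("bright light"), react("dim light"))   # -> avoid ignore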


Once you have figured out the answer to that question, 
 would it be possible to disconnect the processor and torture it by 
 inputting certain values corresponding to a high voltage from the 
 photoreceptors? Would it be possible to run an emulation of the 
 processor on a PC and torture it with appropriate data values? 


I think so.  Have you read The Cyberiad by Stanislaw Lem?

Would it 
 be possible to cause it pain beyond the imagination of any biological 
 organism by inputting megavolt quantities, since in a simulation there 
 are no actual sensory receptors to saturate or burn out?


Pain is limited on both ends: on the input side by damage to the physical circuitry 
and on the output side by the possible range of responses.


Responses in the brain are limited by several mechanisms, such as exhaustion 
of neurotransmitter stores at synapses, negative feedback mechanisms such 
as downregulation of receptors, and, I suppose, the total numbers of neurons 
that can be stimulated. That would not be a problem in a simulation, if you were 
not concerned with modelling the behaviour of a real brain. Just as you could build 
a structure 100km tall as easily as one 100m tall by altering a few parameters in an 
engineering program, so it should be possible to create unimaginable pain or pleasure 
in a conscious AI program by changing a few parameters. Maybe this is an explanation 
for the Fermi paradox: once a society manages mind uploads, it becomes a trivial 
exercise to create heaven, and the only thing they ever have to worry about again is 
keeping the computers running indefinitely. 
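
As a minimal sketch of that asymmetry (hypothetical Python; the saturation ceiling is an invented stand-in for finite receptors and transmitter stores):

import math

def biological_response(stimulus, ceiling=100.0):
    # Saturating channel: receptor pools and transmitter stores impose a ceiling.
    return ceiling * math.tanh(stimulus / ceiling)

def simulated_response(stimulus):
    # Simulated channel: nothing saturates or burns out unless the model says so.
    return stimulus

for s in (10.0, 100.0, 1e6):   # ordinary, strong, and "megavolt" inputs
    print(s, biological_response(s), simulated_response(s))

The ceiling in the first function is a modelling choice rather than a physical necessity, which is the point above.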


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 My computer is completely dedicated to sending this email when I 
click  on send.
Actually, it probably isn't.  You probably have a multi-tasking 
operating system which assigns priorities to different tasks (which is 
why it sometimes can be as annoying as a human being in not following 
your instructions).  But to take your point seriously - if I look into 
your brain there are some neuronal processes that corresponded to 
hitting the send button; and those were accompanied by biochemistry 
that constituted your positive feeling about it: that you had decided 
and wanted to hit the send button.  So why would the functionally 
analogous processes in the computer not also be accompanied by a 
feeling?  Isn't that just an anthropomorphic way of talking about 
satisfying the computer operating in accordance with its priorities?  
It seems to me that to say otherwise is to assume a dualism in which 
feelings are divorced from physical processes.


Feelings are caused by physical processes (assuming a physical world), 
but it seems impossible to deduce what the feeling will be by observing 
the underlying physical process or the behaviour it leads to. Is a robot 
that withdraws from hot stimuli experiencing something like pain, 
disgust, shame, sense of duty to its programming, or just an irreducible 
motivation to avoid heat?
Surely you don't think it gets pleasure out of sending it and  
suffers if something goes wrong and it can't send it? Even humans do  
some things almost dispassionately (only almost, because we can't  
completely eliminate our emotions)
That's the crux of it.  Because we sometimes do things with very little 
feeling, i.e. dispassionately, I think we erroneously assume there is 
a limit in which things can be done with no feeling.  But things 
cannot be done with no value system - not even thinking.  That's the 
frame problem.


Given some propositions, what inferences will you draw?  If you are 
told there is a bomb wired to the ignition of your car you could infer 
that there is no need to do anything because you're not in your car.  
You could infer that someone has tampered with your car.  You could 
infer that turning on the ignition will draw more current than usual.  
There are infinitely many things you could infer, before getting 
around to, "I should disconnect the bomb".  But in fact you have a value 
system which operates unconsciously and immediately directs your 
inferences to the few that are important to you.  A way to make AI 
systems to do this is one of the outstanding problems of AI.


OK, an AI needs at least motivation if it is to do anything, and we 
could call motivation a feeling or emotion. Also, some sort of hierarchy 
of motivations is needed if it is to decide that saving the world has 
higher priority than putting out the garbage. But what reason is there 
to think that an AI apparently frantically trying to save the world 
would have anything like the feelings a human would under similar 
circumstances? It might just calmly explain that saving the world is at 
the top of its list of priorities, and it is willing to do things which 
are normally forbidden it, such as killing humans and putting itself at 
risk of destruction, in order to attain this goal. How would you add 
emotions such as fear, grief, regret to this AI, given that the external 
behaviour is going to be the same with or without them because the 
hierarchy of motivation is already fixed?


You are assuming the AI doesn't have to exercise judgement about secondary objectives - judgement that may well involve conflicts of values that it has to resolve before acting.  If the AI is saving the world it might, for example, raise its cpu voltage and clock rate in order to compute faster - electronic adrenaline.  It might cut off some peripheral functions, like running the printer.  Afterwards it might feel regret when it cannot recover some functions.  


Although there would be more conjecture in attributing these feelings to the AI 
than to a person acting in the same situation, I think the principle is the 
same.  We think the person's emotions are part of the function - so why not the 
AI's too.

out of a sense of duty, with no  particular feeling about it beyond 
this. I don't even think my computer  has a sense of duty, but this 
is something like the emotionless  motivation I imagine AI's might 
have. I'd sooner trust an AI with a  matter-of-fact sense of duty
But even a sense of duty is a value and satisfying it is a positive 
emotion.


Yes, but it is complex and difficult to define. I suspect there is a 
limitless variety of emotions that an AI could have, if the goal is to 
explore what is possible rather than what is helpful in completing 
particular tasks, and most of these would be unrecognisable to humans.
to complete a task than a human motivated  by desire to please, 
desire to do what is good and avoid what is bad,  fear of failure 

Re: computer pain

2006-12-27 Thread Bruno Marchal



Le 27-déc.-06, à 07:40, Stathis Papaioannou a écrit :




Brent Meeker writes:

 My computer is completely dedicated to sending this email when I 
click  on send. Actually, it probably isn't.  You probably have a 
multi-tasking operating system which assigns priorities to different 
tasks (which is why it sometimes can be as annoying as a human being 
in not following your instructions).  But to take your point 
seriously - if I look into your brain there are some neuronal 
processes that corresponded to hitting the send button; and those 
were accompanied by biochemistry that constituted your positive 
feeling about it: that you had decided and wanted to hit the send 
button.  So why would the functionally analogous processes in the 
computer not also be accompanied by a feeling?  Isn't that just an 
anthropomorphic way of talking about satisfying the computer 
operating in accordance with its priorities?  It seems to me that to 
say otherwise is to assume a dualism in which feelings are divorced 
from physical processes.


Feelings are caused by physical processes (assuming a physical world),



Hmmm...  If you assume a physical world for making feelings caused by 
physical processes, then you have to assume some negation of the comp 
hypothesis (cf UDA). If not, Brent is right (albeit for a different reason, 
I presume, here) and you become a dualist.









 but it seems impossible to deduce what the feeling will be by 
observing the underlying physical process or the behaviour it leads 
to.



Here empirical bets (theories) remain possible, together with (first 
person) acceptable protocols of verification. Dream readers will appear 
at some future time.





Is a robot that withdraws from hot stimuli experiencing something like 
pain, disgust, shame, sense of duty to its programming, or just an 
irreducible motivation to avoid heat?



It could depend on the degree of sophistication of the robot. Perhaps 
something like shame necessitates long and deep computational 
histories including self-consistent anticipations, beliefs in a value 
and in a reality.




Surely you don't think it gets pleasure out of sending it and  
suffers if something goes wrong and it can't send it? Even humans do 
 some things almost dispassionately (only almost, because we can't  
completely eliminate our emotions) That's the crux of it.  Because we 
sometimes do things with very little feeling, i.e. dispassionately, I 
think we erroneously assume there is a limit in which things can be 
done with no feeling.  But things cannot be done with no value system 
- not even thinking.  That's the frame problem.
Given some propositions, what inferences will you draw?  If you are 
told there is a bomb wired to the ignition of your car you could 
infer that there is no need to do anything because you're not in your 
car.  You could infer that someone has tampered with your car.  You 
could infer that turning on the ignition will draw more current than 
usual.  There are infinitely many things you could infer, before 
getting around to, "I should disconnect the bomb".  But in fact you 
have a value system which operates unconsciously and immediately 
directs your inferences to the few that are important to you.  A way 
to make AI systems to do this is one of the outstanding problems of 
AI.


OK, an AI needs at least motivation if it is to do anything, and we 
could call motivation a feeling or emotion. Also, some sort of 
hierarchy of motivations is needed if it is to decide that saving the 
world has higher priority than putting out the garbage. But what 
reason is there to think that an AI apparently frantically trying to 
save the world would have anything like the feelings a human would 
under similar circumstances?



It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?".
(To be sure, I think that, in the long run, we will transform ourselves 
into machines before purely human-made machines get conscious; it is 
just easier to copy nature than to understand it, still less to 
(re)create it).





It might just calmly explain that saving the world is at the top of 
its list of priorities, and it is willing to do things which are 
normally forbidden it, such as killing humans and putting itself at 
risk of destruction, in order to attain this goal. How would you add 
emotions such as fear, grief, regret to this AI, given that the 
external behaviour is going to be the same with or without them 
because the hierarchy of motivation is already fixed?



It is possible that there will be a zombie gap, after all. It is 
easier to simulate emotion than reasoning, and this is enough for pets, 
and for some possible sophisticated artificial soldiers or police ...





out of a sense of duty, with no  particular feeling about it beyond 
this. I don't even think my computer  has a sense of 

RE: computer pain

2006-12-27 Thread Stathis Papaioannou



Brent Meeker writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of hierarchy 
 of motivations is needed if it is to decide that saving the world has 
 higher priority than putting out the garbage. But what reason is there 
 to think that an AI apparently frantically trying to save the world 
 would have anything like the feelings a human would under similar 
 circumstances? It might just calmly explain that saving the world is at 
 the top of its list of priorities, and it is willing to do things which 
 are normally forbidden it, such as killing humans and putting itself at 
 risk of destruction, in order to attain this goal. How would you add 
 emotions such as fear, grief, regret to this AI, given that the external 
 behaviour is going to be the same with or without them because the 
 hierarchy of motivation is already fixed?


You are assuming the AI doesn't have to exercise judgement about secondary objectives - judgement that may well involve conflicts of values that it has to resolve before acting.  If the AI is saving the world it might, for example, raise its cpu voltage and clock rate in order to compute faster - electronic adrenaline.  It might cut off some peripheral functions, like running the printer.  Afterwards it might feel regret when it cannot recover some functions.  


Although there would be more conjecture in attributing these feelings to the AI 
than to a person acting in the same situation, I think the principle is the 
same.  We think the person's emotions are part of the function - so why not the 
AI's too.


Do you not think it is possible to exercise judgement with just a hierarchy of 
motivation? Alternatively, do you think a hierarchy of motivation will 
automatically result in emotions? For example, would something that the AI is 
strongly motivated to avoid necessarily cause it a negative emotion, and if so 
what would determine if that negative emotion is pain, disgust, loathing or 
something completely different that no biological organism has ever experienced?

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: computer pain

2006-12-27 Thread Stathis Papaioannou



Bruno Marchal writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of 
 hierarchy of motivations is needed if it is to decide that saving the 
 world has higher priority than putting out the garbage. But what 
 reason is there to think that an AI apparently frantically trying to 
 save the world would have anything like the feelings a human would 
 under similar circumstances?



It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?".


Here I disagree. It is no more necessary that an AI will want to be free 
than it is necessary that an AI will like eating chocolate. Humans want to be 
free because it is one of the things that humans want, along with food, shelter, 
more money etc.; it does not simply follow from being intelligent or conscious 
any more than these other things do.


(To be sure, I think that, in the long run, we will transform ourselves 
into machines before purely human-made machines get conscious; it is 
just easier to copy nature than to understand it, still less to 
(re)create it).


I don't know if that's true either. How much of our technology is due to copying 
the equivalent biological functions?


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of 
hierarchy  of motivations is needed if it is to decide that saving 
the world has  higher priority than putting out the garbage. But what 
reason is there  to think that an AI apparently frantically trying to 
save the world  would have anything like the feelings a human would 
under similar  circumstances? It might just calmly explain that 
saving the world is at  the top of its list of priorities, and it is 
willing to do things which  are normally forbidden it, such as 
killing humans and putting itself at  risk of destruction, in order 
to attain this goal. How would you add  emotions such as fear, grief, 
regret to this AI, given that the external  behaviour is going to be 
the same with or without them because the  hierarchy of motivation is 
already fixed?


You are assuming the AI doesn't have to exercise judgement about 
secondary objectives - judgement that may well involve conflicts of 
values that it has to resolve before acting.  If the AI is saving the 
world it might, for example, raise its cpu voltage and clock rate in 
order to compute faster - electronic adrenaline.  It might cut off 
some peripheral functions, like running the printer.  Afterwards it 
might feel regret when it cannot recover some functions. 
Although there would be more conjecture in attributing these feelings 
to the AI than to a person acting in the same situation, I think the 
principle is the same.  We think the person's emotions are part of the 
function - so why not the AI's too.


Do you not think it is possible to exercise judgement with just a 
hierarchy of motivation? 


Yes and no. It is possible given arbitrarily long time and other resources to 
work out the consequences, or at least a best estimate of the consequences, of 
actions.  But in real situations the resources are limited (e.g. my brain 
power) and so decisions have to be made under uncertainty and tradeoffs of 
uncertain risks are necessary: should I keep researching or does that risk 
being too late with my decision?  So it is at this level that we encounter 
conflicting values.  If we could work everything out to our own satisfaction 
maybe we could be satisfied with whatever decision we reached - but life is 
short and calculation is long.
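
A toy way to put that stopping problem (numbers purely illustrative, not a proposal): deliberate only while one more step of analysis is expected to pay for the delay it costs.

def keep_deliberating(expected_gain, delay_cost):
    # Think further only while another step of analysis is expected to pay for itself.
    return expected_gain > delay_cost

gains = [5.0, 2.0, 0.5, 0.1]   # hypothetical, diminishing returns to further research
delay_cost = 1.0               # hypothetical, fixed cost of waiting one more step
steps = 0
for g in gains:
    if not keep_deliberating(g, delay_cost):
        break
    steps += 1
print("deliberated for", steps, "steps, then acted")

Of course, estimating those expected gains is itself a judgement under uncertainty, which is where the conflicting values come in.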

Alternatively, do you think a hierarchy of 
motivation will automatically result in emotions? 


I think motivations are emotions.

For example, would 
something that the AI is strongly motivated to avoid necessarily cause 
it a negative emotion, 


Generally contemplating something you are motivated to avoid - like your own 
death - is accompanied by negative feelings.  The exception is when you 
contemplate your narrow escape.  That is a real high!

and if so what would determine if that negative 
emotion is pain, disgust, loathing or something completely different 
that no biological organism has ever experienced?


I'd assess them according to their function in analogy with biological system 
experiences.  Pain = experience of injury, loss of function.  Disgust = the 
assessment of extremely negative value to some event, but without fear.  
Loathing = the external signaling of disgust.  Would this assessment be 
accurate?  I dunno and I suspect that's a meaningless question.

Brent Meeker
As men's prayers are a disease of the will, so are their creeds a disease of the 
intellect.
--- Emerson





Re: computer pain

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Bruno Marchal writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of  
hierarchy of motivations is needed if it is to decide that saving the 
 world has higher priority than putting out the garbage. But what  
reason is there to think that an AI apparently frantically trying to  
save the world would have anything like the feelings a human would  
under similar circumstances?



It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?".


Here I disagree. It is no more necessary that an AI will want to be free 
than it is necessary that an AI will like eating chocolate. Humans want 
to be free because it is one of the things that humans want, 


You might have a lot of trouble showing that experimentally.  Humans want some 
freedom - but not too much.  And they certainly don't want others to have too 
much.  They want security, comfort, certainty - and freedom if there's any left 
over.

Brent Meeker
Free speech is not freedom for the thought you love. It's
freedom for the thought you hate the most.
 --- Larry Flynt





Re: computer pain

2006-12-26 Thread Brent Meeker


Stathis Papaioannou wrote:



Hello Dave/Chris,

I agree with everything you say, and have long admired The Hedonistic 
Imperative. Motivation need not be linked to pain, and for that matter 
it need not be linked to pleasure either. We can imagine an artificial 
intelligence without any emotions but completely dedicated to the 
pursuit of whatever goals it has been set. It is just a contingent fact 
of evolution that we can experience pleasure and pain.


I don't know how you can be sure of that.  How do you know that being 
completely dedicated is not the same as having a motivating emotion?

Brent Meeker





RE: computer pain

2006-12-26 Thread Stathis Papaioannou



Brent Meeker writes:

 I agree with everything you say, and have long admired The Hedonistic 
 Imperative. Motivation need not be linked to pain, and for that matter 
 it need not be linked to pleasure either. We can imagine an artificial 
 intelligence without any emotions but completely dedicated to the 
 pursuit of whatever goals it has been set. It is just a contingent fact 
 of evolution that we can experience pleasure and pain.


I don't know how you can be sure of that.  How do you know that being 
completely dedicated is not the same as having a motivating emotion?


My computer is completely dedicated to sending this email when I click on send. 
Surely you don't think it gets pleasure out of sending it and suffers if something 
goes wrong and it can't send it? Even humans do some things almost dispassionately 
(only almost, because we can't completely eliminate our emotions) out of a sense of 
duty, with no particular feeling about it beyond this. I don't even think my computer 
has a sense of duty, but this is something like the emotionless motivation I imagine 
AI's might have. I'd sooner trust an AI with a matter-of-fact sense of duty to complete 
a task than a human motivated by desire to please, desire to do what is good and avoid 
what is bad, fear of failure and humiliation, and so on. Just because evolution came up 
with something does not mean it is the best or most efficient way of doing things. 


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-26 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 I agree with everything you say, and have long admired The 
Hedonistic  Imperative. Motivation need not be linked to pain, and 
for that matter  it need not be linked to pleasure either. We can 
imagine an artificial  intelligence without any emotions but 
completely dedicated to the  pursuit of whatever goals it has been 
set. It is just a contingent fact  of evolution that we can 
experience pleasure and pain.


I don't know how you can be sure of that.  How do you know that being 
completely dedicated is not the same as having a motivating emotion?


My computer is completely dedicated to sending this email when I click 
on send. 


Actually, it probably isn't.  You probably have a multi-tasking operating system which assigns priorities to 
different tasks (which is why it sometimes can be as annoying as a human being in not following your 
instructions).  But to take your point seriously - if I look into your brain there are some neuronal 
processes that corresponded to hitting the send button; and those were accompanied by 
biochemistry that constituted your positive feeling about it: that you had decided and wanted to hit the 
send button.  So why would the functionally analogous processes in the computer not also be 
accompanied by a feeling?  Isn't that just an anthropomorphic way of talking about satisfying 
the computer operating in accordance with its priorities?  It seems to me that to say otherwise is to assume 
a dualism in which feelings are divorced from physical processes.

Surely you don't think it gets pleasure out of sending it and 
suffers if something goes wrong and it can't send it? Even humans do 
some things almost dispassionately (only almost, because we can't 
completely eliminate our emotions) 


That's the crux of it.  Because we sometimes do things with very little feeling, 
i.e. dispassionately, I think we erroneously assume there is a limit in which 
things can be done with no feeling.  But things cannot be done with no value 
system - not even thinking.  That's the frame problem.

Given some propositions, what inferences will you draw?  If you are told there is a 
bomb wired to the ignition of your car you could infer that there is no need to do 
anything because you're not in your car.  You could infer that someone has tampered with 
your car.  You could infer that turning on the ignition will draw more current than 
usual.  There are infinitely many things you could infer, before getting around to, 
"I should disconnect the bomb".  But in fact you have a value system which 
operates unconsciously and immediately directs your inferences to the few that are 
important to you.  A way to make AI systems to do this is one of the outstanding problems 
of AI.
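
A toy rendering of that point (the candidate inferences and scores below are invented): the set of valid inferences is effectively unbounded, and a value function is what narrows attention to the few worth drawing. Where the scores come from is exactly the outstanding problem.

candidate_inferences = {
    "I am not in my car, so nothing happens right now": 0.1,
    "Someone has tampered with my car": 0.6,
    "Turning the ignition will draw more current than usual": 0.2,
    "Do not turn the ignition; disconnect the bomb": 0.99,
}

def attend(candidates, k=1):
    # Return the k inferences the value system ranks as most important.
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(attend(candidate_inferences))   # -> ['Do not turn the ignition; disconnect the bomb']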

out of a sense of duty, with no 
particular feeling about it beyond this. I don't even think my computer 
has a sense of duty, but this is something like the emotionless 
motivation I imagine AI's might have. I'd sooner trust an AI with a 
matter-of-fact sense of duty 


But even a sense of duty is a value and satisfying it is a positive emotion.

to complete a task than a human motivated 
by desire to please, desire to do what is good and avoid what is bad, 
fear of failure and humiliation, and so on. 


Yes, human value systems are very messy because a) they must be learned and b) 
they mostly have to do with other humans.  The motivation of tigers, for 
example, is probably very simple and consequently they are never depressed or 
manic.

Just because evolution came 
up with something does not mean it is the best or most efficient way of 
doing things.


But until we know a better way, we can't just assume nature was inefficient.

Brent Meeker




RE: computer pain

2006-12-25 Thread Stathis Papaioannou



Brent Meeker writes:


Stathis Papaioannou wrote:
 
 
 Brent Meeker writes:
 
  In fact, if we could  reprogram our own minds at will, it would be 
 a very different world.  Suppose you were upset because you lost your 
 job. You might decide to  stay upset to the degree that it remains a 
 motivating factor to look for  other work, but not affect your sleep, 
 ability to experience pleasure,  etc. If you can't find work you 
 might decide to downgrade your  expectations, so that you are just as 
 content having less money or a  menial job, or just as content for 
 the next six months but then have the  motivation to look for 
 interesting work kick in again, but without the  confidence- and 
 enthusiasm-sapping disappointment that comes from  repeated failure 
 to find work.

 I think that's called a cocaine habit. :-)
 
 The difference between happiness that is derived from illicit drugs and 
 happiness derived from real life is that the former does not really 
 last, ending in tolerance, dependence, depression, deterioration in 
 physical health, inability to work and look after oneself, not to 
 mention criminal activity due to the fact that the drugs are illegal. 
 This is because drugs are a very crude way of stimulating the nervous 
 system. It is like programming a computer with a soldering iron. The 
 only time drugs work well is if there is a relatively simple fault, like 
 an excess or deficit of a certain neurotransmitter, and even there you 
 have to be lucky for function to return to normal. 


Which presumes a well-defined normal.

 Changing specific 
 aspects of thinking or emotions without screwing up other functions in 
 the process would require much greater finesse than modern pharmacology 
 can provide, and greater efficacy than psychology can provide.
 David Pearce in The Hedonistic Imperative, and some science fiction 
 writers (Greg Egan, Walter Jon Williams come to mind) have looked at 
 some of the consequences of being able to reprogram your emotions, 
 motivations, memories and personality. 


Larry Niven imagined a future in which you would be able to plug into implanted 
electrodes in your brain and selectively stimulate different areas.  I think this was 
suggested to him by popular articles on finding a pleasure center in rats.


In Ringworld, I believe. But that is the complete antithesis of what I was 
getting at, undifferentiated pleasure which destroys purposeful activity. 
Contrast an opioid like heroin with antidepressants. Heroin has an immediate 
euphoriant effect to which tolerance develops over time, requiring ever-higher 
doses, and apart from the destructive lifestyle due to its illegal status, it damages 
the personality because it is an end in itself, and every other activity and source 
of motivation seems insipid by comparison. Antidepressants have a delayed onset 
of action with no tolerance and drug-seeking behaviour (indeed, many patients 
doubt the association between the drug and clinical improvement for this reason), 
and they do not directly induce euphoria, but increase the motivation and ability 
to experience pleasure in activities which depression takes away. The problem is, 
they only work when a patient is clinically depressed (and even then not all that 
well in many cases), and do nothing if someone is merely unhappy or distressed 
about some aspect of their life. It would not be a desirable thing if there were 
drugs to eliminate ordinary unhappiness, because we need the fear of unhappiness 
as a motivating force: an example of how fixing one problem might create another 
one. But if we had a precise means of adjusting our minds, so that if you did not 
want a certain consequence X you could ensure that that X would not occur, it would 
be a different story. For example, even if you thought that suffering was noble, but 
did not trust yourself with the ability to eliminate suffering, you could simply program 
yourself so that you were no longer tempted to eliminate suffering. 

No-one that I am aware of has 
 explored how utterly alien a world in which we had access to our own 
 source code at the finest level would be. 


I wouldn't download anything from Microsoft!

Brent Meeker
The first time Microsoft makes a product that doesn't suck
will be when they build vacuum cleaners.
  --- Bill Jefferys

 


_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-25 Thread Brent Meeker


Stathis Papaioannou wrote:
...

It would not be a desirable 
thing if there were drugs to eliminate ordinary unhappiness, because we 
need the fear of unhappiness as a motivating force:


And not only fear of unhappiness.  Depression (not the clinical kind) is your 
brain telling you you need to change your life.

Brent Meeker





RE: computer pain

2006-12-25 Thread Stathis Papaioannou



Brent Meeker writes:

 It would not be a desirable 
 thing if there were drugs to eliminate ordinary unhappiness, because we 
 need the fear of unhappiness as a motivating force:


And not only fear of unhappiness.  Depression (not the clinical kind) is your 
brain telling you you need to change your life.


You need to change your life in order to be happy, but what if you could be 
just as happy, at lower cost, without changing your life? You could even 
stipulate as a rule (although I'm not sure it would be necessary) that no 
alteration to increase happiness will be allowed to decrease biological or 
social fitness. 


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: computer pain

2006-12-25 Thread chris kirkland


Because undifferentiated pleasure destroys purposeful activity, as
Stathis notes, presumably there is strong selection pressure against it.
If we were naturally uniformly happy, then who would be motivated to
raise children?

What's less clear is whether we'll need to retain ordinary
unhappiness  - or ultimately any kind of unhappiness at all.
Why can't we engineer a motivational system based on heritable 
_gradients_ of immense well-being? Retain the functional analogues of

(some of) our nastier states, but do away with their unpleasant raw
feels. If gradients are conserved, then potentially so too is critical
discernment, appropriate behavioral responses to different stimuli,
and informational sensitivity to a changing environment. On this
scenario, rather than dismantling the hedonic treadmill (cf. heroin
addicts, wireheading, or Huxley's soma), we could genetically
recalibrate the pleasure-pain axis. Hedonic tone could be enriched so
that we all enjoy a higher average hedonic set point across the lifespan.

One can see pitfalls here. Genetically enriching the mesolimbic
dopaminergic system, for instance, might indeed make many people happier
and more motivated.  But if done ineptly, the enhancement might cause
mania or even psychosis. Also, depression/subordinate behavior seems to
have evolved as an adaptation to group-living in social mammals. The
ramifications for human society of abolishing low mood altogether would
be profound and unpredictable. But in principle, a re-designed
motivational system based entirely on (adaptive) gradients of well-being
could make everyone hugely better off.
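
One way to see why such a recalibration could leave behaviour intact (a toy model; the options and numbers are invented): if choices depend only on differences in hedonic value, then a uniform upward shift of the set point changes how every state feels but changes no choice.

options = {"care for child": 7.0, "ignore child": 2.0, "rest": 4.0}   # hypothetical hedonic values

def choose(hedonic_value):
    # Choice tracks which option feels best, i.e. differences, not absolute levels.
    return max(hedonic_value, key=hedonic_value.get)

uplifted = {k: v + 50.0 for k, v in options.items()}   # same gradients, higher set point

assert choose(options) == choose(uplifted)
print(choose(options))   # chosen in both cases; only the felt level differs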

Idle utopian dreaming? Well, yes, possibly. But I think in the
near-future there will be selection pressure for heritably enriched
hedonic tone. Within the next few decades, we are likely to witness a
revolution of designer babies - and perhaps universal pre-implantation
diagnosis. Prospective parents are going to choose the kind of children
they want to raise. Most prospective parents will presumably choose
(genotypes predisposing to) happy children - since most parents want
their kids to be happy. When human evolution is no longer blind and
random, there will be strong selection pressure against the
genes/allelic combinations that predispose, not just to clinical
depression etc, but to ordinary unhappiness as we understand it today.
Since ordinary unhappiness can still be pretty ghastly, I think this is
a good thing.

Happy Christmas!
:-)
Dave




Re: computer pain

2006-12-25 Thread Brent Meeker


chris kirkland wrote:


Because undifferentiated pleasure destroys purposeful activity, as
Stathis notes, presumably there is strong selection pressure against it.
If we were naturally uniformly happy, then who would be motivated to
raise children?

What's less clear is whether we'll need to retain ordinary
unhappiness  - or ultimately any kind of unhappiness at all.
Why can't we engineer a motivational system based on heritable 
_gradients_ of immense well-being? Retain the functional analogues of

(some of) our nastier states, but do away with their unpleasant raw
feels. If gradients are conserved, then potentially so too is critical
discernment, appropriate behavioral responses to different stimuli,
and informational sensitivity to a changing environment. On this
scenario, rather than dismantling the hedonic treadmill (cf. heroin
addicts, wireheading, or Huxley's soma), we could genetically
recalibrate the pleasure-pain axis. Hedonic tone could be enriched so
that we all enjoy a higher average hedonic set point across the lifespan.

One can see pitfalls here. Genetically enriching the mesolimbic
dopaminergic system, for instance, might indeed make many people happier
and more motivated.  But if done ineptly, the enhancement might cause
mania or even psychosis. Also, depression/subordinate behavior seems to
have evolved as an adaptation to group-living in social mammals. The
ramifications for human society of abolishing low mood altogether would
be profound and unpredictable. But in principle, a re-designed
motivational system based entirely on (adaptive) gradients of well-being
could make everyone hugely better off.

Idle utopian dreaming? Well, yes, possibly. But I think in the
near-future there will be selection pressure for heritably enriched
hedonic tone. Within the next few decades, we are likely to witness a
revolution of designer babies - and perhaps universal pre-implantation
diagnosis. Prospective parents are going to choose the kind of children
they want to raise. Most prospective parents will presumably choose
(genotypes predisposing to) happy children - since most parents want
their kids to be happy. When human evolution is no longer blind and
random, there will be strong selection pressure against the
genes/allelic combinations that predispose, not just to clinical
depression etc, but to ordinary unhappiness as we understand it today.
Since ordinary unhappiness can still be pretty ghastly, I think this is
a good thing.


Note that we have already bred dogs to be (or at least appear) happier, less 
aggressive, more playful, and more social, than the wolves they descended from. 
 So by conventional selective breeding it can already be done -- which suggests 
that it has already been done.  I wonder if there has been enough time for 
cultural selective breeding to have caused human beings to have cultural 
differences in emotional disposition?

Brent Meeker




RE: computer pain

2006-12-25 Thread Stathis Papaioannou



Hello Dave/Chris,

I agree with everything you say, and have long admired The Hedonistic Imperative. 
Motivation need not be linked to pain, and for that matter it need not be linked to pleasure 
either. We can imagine an artificial intelligence without any emotions but completely 
dedicated to the pursuit of whatever goals it has been set. It is just a contingent fact 
of evolution that we can experience pleasure and pain.


Having ready-made feelings which destroy the motivation to seek those feelings in the 
normal manner may also be a blessing. Some people seek to harm others because they 
get a special kick from this that they can't get any other way. If they could program 
themselves so that they could get exactly the same effect from fantasising about it, 
then there would be no need to engage in the harmful activity. We could decide in a 
dispassionate manner to leave the positive motivations intact and linked to gradients 
of pleasure, but decouple the negative motivations so that the sadist could still enjoy 
himself but no longer needs to hurt anyone to do so.


Stathis Papaioannou




Date: Tue, 26 Dec 2006 01:06:20 +
From: [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Subject: RE: computer pain


Because undifferentiated pleasure destroys purposeful activity, as
Stathis notes, presumably there is strong selection pressure against it.
If we were naturally uniformly happy, then who would be motivated to
raise children?

What's less clear is whether we'll need to retain ordinary
unhappiness  - or ultimately any kind of unhappiness at all.
Why can't we engineer a motivational system based on heritable 
_gradients_ of immense well-being? Retain the functional analogues of

(some of) our nastier states, but do away with their unpleasant raw
feels. If gradients are conserved, then potentially so too is critical
discernment, appropriate behavioral responses to different stimuli,
and informational sensitivity to a changing environment. On this
scenario, rather than dismantling the hedonic treadmill (cf. heroin
addicts, wireheading, or Huxley's soma), we could genetically
recalibrate the pleasure-pain axis. Hedonic tone could be enriched so
that we all enjoy a higher average hedonic set point across the lifespan.

One can see pitfalls here. Genetically enriching the mesolimbic
dopaminergic system, for instance, might indeed make many people happier
and more motivated.  But if done ineptly, the enhancement might cause
mania or even psychosis. Also, depression/subordinate behavior seems to
have evolved as an adaptation to group-living in social mammals. The
ramifications for human society of abolishing low mood altogether would
be profound and unpredictable. But in principle, a re-designed
motivational system based entirely on (adaptive) gradients of well-being
could make everyone hugely better off.

Idle utopian dreaming? Well, yes, possibly. But I think in the
near-future there will be selection pressure for heritably enriched
hedonic tone. Within the next few decades, we are likely to witness a
revolution of designer babies - and perhaps universal pre-implantation
diagnosis. Prospective parents are going to choose the kind of children
they want to raise. Most prospective parents will presumably choose
(genotypes predisposing to) happy children - since most parents want
their kids to be happy. When human evolution is no longer blind and
random, there will be strong selection pressure against the
genes/allelic combinations that predispose, not just to clinical
depression etc, but to ordinary unhappiness as we understand it today.
Since ordinary unhappiness can still be pretty ghastly, I think this is
a good thing.

Happy Christmas!
 :-)
Dave

 


_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: computer pain

2006-12-24 Thread Stathis Papaioannou



Brent Meeker writes:

 If your species doesn't define as unethical that which is contrary to 
 continuation of the species, your species won't be around too long.  
 Our problem is that cultural evolution has been so rapid compared to 
 biological evolution that some of our hardwired values are not so good 
 for continuation of our (and many other) species.  I don't think 
 ethics is a matter of definitions; that's like trying to fly by 
 settling on a definition of airplane.  But looking at the long run 
 survival of the species might produce some good ethical rules; 
 particularly if we could predict the future consequences clearly.
 
 If slavery could be scientifically shown to promote the well-being of 
 the species as a whole does that mean we should have slavery? Does it 
 mean that slavery is good?


Note that I didn't say promote the well-being; I said contrary to the 
continuation.  If the species could not continue without slavery, then there are two possible 
futures.  In one of them there's a species that thinks slavery is OK - in the other there is no 
opinion on the subject.


OK, but it is possible to have an ethical system contrary to the continuation of the 
species as well. There are probably people in the world today who think that humans 
should deliberately stop breeding and die out because their continued existence is 
detrimental to the survival of other species on the planet. If you point out to them 
that such a policy is contrary to evolution (if contrary to evolution is possible) or 
whatever, they might agree with you, but still insist that quietly dying out is the good 
and noble thing to do. They have certain values with a certain end in mind, and their 
ethical system is perfectly reasonable in that context. That most of us consider it foolish 
and do not want to adopt it does not mean that there is a flaw in the logic or in the 
empirical facts. 

Words like irrational are sometimes used imprecisely. Someone who decides to jump 
off a tall building might be called irrational on the basis of that information alone. If he 
does it because he believes he is superman and able to fly then he is irrational: he is 
not superman and he will plunge to his death. If he does it because he wants to kill 
himself then he is not irrational, because jumping off a tall enough building is a perfectly 
reasonable means towards this end. We might try equally hard in each case to dissuade 
him from jumping, but the approach would be different because the underlying thought 
processes are different.


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-24 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 If your species doesn't define as unethical that which is contrary 
to  continuation of the species, your species won't be around too 
long.   Our problem is that cultural evolution has been so rapid 
compared to  biological evolution that some of our hardwired values 
are not so good  for continuation of our (and many other) species.  
I don't think  ethics is a matter of definitions; that's like trying 
to fly by  settling on a definition of airplane.  But looking at 
the long run  survival of the species might produce some good 
ethical rules;  particularly if we could predict the future 
consequences clearly.
  If slavery could be scientifically shown to promote the well-being 
of  the species as a whole does that mean we should have slavery? 
Does it  mean that slavery is good?


Note that I didn't say promote the well-being; I said contrary to 
the continuation.  If the species could not continue without slavery, 
then there are two possible futures.  In one of them there's a species 
that thinks slavery is OK - in the other there is no opinion on the 
subject.


OK, but it is possible to have an ethical system contrary to the 
continuation of the species as well. There are probably people in the 
world today who think that humans should deliberately stop breeding and 
die out because their continued existence is detrimental to the survival 
of other species on the planet. If you point out to them that such a 
policy is contrary to evolution (if contrary to evolution is possible) 
or whatever, they might agree with you, but still insist that quietly 
dying out is the good and noble thing to do. They have certain values 
with a certain end in mind, and their ethical system is perfectly 
reasonable in that context. That most of us consider it foolish and do 
not want to adopt it does not mean that there is a flaw in the logic or 
in the empirical facts.


Right.  

Words like irrational are sometimes used imprecisely. Someone who 
decides to jump off a tall building might be called irrational on the 
basis of that information alone. If he does it because he believes he is 
superman and able to fly then he is irrational: he is not superman and 
he will plunge to his death. If he does it because he wants to kill 
himself then he is not irrational, because jumping off a tall enough 
building is a perfectly reasonable means towards this end. We might try 
equally hard in each case to dissuade him from jumping, but the approach 
would be different because the underlying thought processes are different.


I don't disagree.  I'm just pointing out that values contrary to continuation 
of the species are not likely to be among the basic hardwired values of any 
species.  Those conducive to continuation probably will be - with allowance for 
changes of circumstance that occur rapidly compared to biological evolution.  So values in 
an evolved species are, on the whole, not just free floating, independent of 
facts.

Brent Meeker




RE: computer pain

2006-12-24 Thread Stathis Papaioannou



Brent Meeker writes:


Stathis Papaioannou wrote:
 
 
 Brent Meeker writes:
 
  If your species doesn't define as unethical that which is contrary 
 to  continuation of the species, your species won't be around too 
 long.   Our problem is that cultural evolution has been so rapid 
 compared to  biological evolution that some of our hardwired values 
 are not so good  for continuation of our (and many other) species.  
 I don't think  ethics is a matter of definitions; that's like trying 
 to fly by  settling on a definition of airplane.  But looking at 
 the long run  survival of the species might produce some good 
 ethical rules;  particularly if we could predict the future 
 consequences clearly.
   If slavery could be scientifically shown to promote the well-being 
 of  the species as a whole does that mean we should have slavery? 
 Does it  mean that slavery is good?


 Note that I didn't say promote the well-being; I said contrary to 
 the continuation.  If the species could not continue without slavery, 
 then there are two possible futures.  In one of them there's a species 
 that thinks slavery is OK - in the other there is no opinion on the 
 subject.
 
 OK, but it is possible to have an ethical system contrary to the 
 continuation of the species as well. There are probably people in the 
 world today who think that humans should deliberately stop breeding and 
 die out because their continued existence is detrimental to the survival 
 of other species on the planet. If you point out to them that such a 
 policy is contrary to evolution (if contrary to evolution is possible) 
 or whatever, they might agree with you, but still insist that quietly 
 dying out is the good and noble thing to do. They have certain values 
 with a certain end in mind, and their ethical system is perfectly 
 reasonable in that context. That most of us consider it foolish and do 
 not want to adopt it does not mean that there is a flaw in the logic or 
 in the empirical facts.


Right.  

 Words like irrational are sometimes used imprecisely. Someone who 
 decides to jump off a tall building might be called irrational on the 
 basis of that information alone. If he does it because he believes he is 
 superman and able to fly then he is irrational: he is not superman and 
 he will plunge to his death. If he does it because he wants to kill 
 himself then he is not irrational, because jumping off a tall enough 
 building is a perfectly reasonable means towards this end. We might try 
 equally hard in each case to dissuade him from jumping, but the approach 
 would be different because the underlying thought processes are different.


I don't disagree.  I'm just pointing out that values contrary to continuation 
of the species are not likely to be among the basic hardwired values of any 
species.  Those conducive to continuation probably will be - with allowance for 
changes of circumstance rapidly compared to biological evolution.  So values in 
an evolved species are, on the whole, not just free floating, independent of 
facts.


The facts show us why as a society we have the sorts of values we do, but 
they do not provide justification for why we should or shouldn't have certain 
values, like a sort of replacement for Moses' stone tablets. 


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



Re: computer pain

2006-12-24 Thread Bruno Marchal



Le 24-déc.-06, à 09:17, Stathis Papaioannou a écrit :




Brent Meeker writes:

 If your species doesn't define as unethical that which is contrary 
to  continuation of the species, your species won't be around too 
long.   Our problem is that cultural evolution has been so rapid 
compared to  biological evolution that some of our hardwired values 
are not so good  for continuation of our (and many other) species.  
I don't think  ethics is a matter of definitions; that's like 
trying to fly by  settling on a definition of airplane.  But 
looking at the long run  survival of the species might produce some 
good ethical rules;  particularly if we could predict the future 
consequences clearly.
  If slavery could be scientifically shown to promote the 
well-being of  the species as a whole does that mean we should have 
slavery? Does it  mean that slavery is good?
Note that I didn't say promote the well-being; I said contrary to 
the continuation.  If the species could not continue without 
slavery, then there are two possible futures.  In one of them there's 
a species that thinks slavery is OK - in the other there is no 
opinion on the subject.


OK, but it is possible to have an ethical system contrary to the 
continuation of the species as well. There are probably people in the 
world today who think that humans should deliberately stop breeding 
and die out because their continued existence is detrimental to the 
survival of other species on the planet. If you point out to them that 
such a policy is contrary to evolution (if contrary to evolution is 
possible) or whatever, they might agree with you, but still insist 
that quietly dying out is the good and noble thing to do. They have 
certain values with a certain end in mind, and their ethical system is 
perfectly reasonable in that context. That most of us consider it 
foolish and do not want to adopt it does not mean that there is a flaw 
in the logic or in the empirical facts.
Words like irrational are sometimes used imprecisely. Someone who 
decides to jump off a tall building might be called irrational on the 
basis of that information alone. If he does it because he believes he 
is superman and able to fly then he is irrational: he is not superman 
and he will plunge to his death. If he does it because he wants to kill 
himself then he is not irrational, because jumping off a tall enough 
building is a perfectly reasonable means towards this end.


Unless Quantum Mechanics is correct.
Unless the comp hyp. is correct. (OK this does not invalidate per se 
your argumentation).



We might try equally hard in each case to dissuade him from jumping, 
but the approach would be different because the underlying thought 
processes are different.


OK,

Bruno


http://iridia.ulb.ac.be/~marchal/





RE: computer pain

2006-12-24 Thread Jef Allbright


Stathis Papaioannou wrote:
Oops, it was Jef Allbright, not Mark Peaty responsible for 
the first quote below.



Brent Meeker writes:

[Mark Peaty] (Correction: [Jef Allbright])
From the foregoing it can be seen that while there can be 
no objective morality, nor any absolute morality, it is 
reasonable to expect increasing agreement on the relative

morality of actions within an expanding context.  Further,
similar to the entropic arrow of time, we can conceive of
an arrow of morality corresponding to the ratcheting
forward of an increasingly broad context of shared values
(survivors of coevolutionary competition) promoted via
awareness of increasingly effective principles of
interaction (scientific knowledge of what works, extracted
from regularities in the environment.)



[Stathis Papaioannou]
What if the ratcheting forward of shared values is at odds 
with evolutionary expediency, i.e. there is some unethical 
policy that improves the fitness of the species? To avoid

such a dilemma you would have to define as ethical everything that
improves the fitness of the species, and I'm not sure you
want to do that.


If your species doesn't define as unethical that which is 
contrary to continuation of the species, your species won't 
be around too long.  Our problem is that cultural evolution 
has been so rapid compared to biological evolution that some 
of our hardwired values are not so good for continuation of 
our (and many other) species.  I don't think ethics is a 
matter of definitions; that's like trying to fly by settling 
on a definition of airplane.  But looking at the long run 
survival of the species might produce some good ethical 
rules; particularly if we could predict the future

consequences clearly.


If slavery could be scientifically shown to promote the 
well-being of the species as a whole does that mean we

should have slavery? Does it mean that slavery is good?


Teaching that slavery is bad is similar to teaching that lying is
bad.  In each case it's a narrow over-simplification of a more general
principle of what works. Children are taught simplified modes of moral
reasoning to match their smaller context of understanding. At one end of
a moral scale are the moral instincts (experienced as pride, disgust,
etc.) that are an even more condensed form of knowledge of what worked
in the environment of evolutionary adaptation. Further up the scale are
cultural--including religious--laws and even the patterns of our
language that further codify and reinforce patterns of interaction that
worked well enough and broadly enough to be taken as principles of
right action. 


Relatively few of us take the leap beyond the morality that was
inherited or given to us, to grasp the broader and more extensible
understanding of morality as patterns of behavior assessed as promoting
increasingly shared values over increasing scope. Society discourages
individual thinking about what is and what is not moral; indeed, it is a
defining characteristic that moral principles subsume both narrow self
interest and narrow situational awareness.  For this reason, one can not
assess the absolute morality of an action in isolation, but we can
legitimately speak of the relative morality of a class of behavior
within context.

Just as lying can clearly be the right action within a specific context
(imagine having one's home invaded and being unable, on moral grounds,
to lie to the invaders about where the children are hiding!), the moral
issue of slavery can be effectively understood only within a larger
context. 


The practice of slavery (within a specific context) can be beneficial to
society; numerous examples exist of slavery contributing to the economic
good of a locale, and on a grander scale, the development of western
philosophy (including democracy!) as a result of freeing some from the
drudgery of manual labor and creating an environment conducive to deeper
thought.  And as we seek to elucidate a general principle regarding
slavery, we come face-to-face with other instances of this class of
problem, including rights of women to vote, the moral standing of
sentient beings of various degrees of awareness (farm animals, the great
apes, artificial intelligences), and even the idea that all men, of
disparate mental or emotional capability, are created equal?  Could
there be a principle constituting a coherent positive-sum stance toward
issues of moral interaction between agents of inherently different
awareness and capabilities?

Are we as a society yet ready to adopt a higher level of social
decision-making, moral to the extent that it effectively promotes
increasingly shared values over increasing scope, one that provides an
increasingly clear vision of effective interaction between agents of
diverse and varying capabilities, or are we going to hold tightly to the
previous best model, one that comfortingly but childishly insists on the
fiction of some form of strict equality between agents?  Are we mature
enough to see 

Re: computer pain

2006-12-24 Thread Brent Meeker


Stathis Papaioannou wrote:



Jef Allbright writes:

[Stathis Papaioannou]
If slavery could be scientifically shown to promote the well-being of 
the species as a whole does that mean we

should have slavery? Does it mean that slavery is good?


Teaching that slavery is bad is similar to teaching that lying is
bad.  In each case it's a narrow over-simplification of a more general
principle of what works. Children are taught simplified modes of moral
reasoning to match their smaller context of understanding. At one end of
a moral scale are the moral instincts (experienced as pride, disgust,
etc.) that are an even more condensed form of knowledge of what worked
in the environment of evolutionary adaptation. Further up the scale are
cultural--including religious--laws and even the patterns of our
language that further codify and reinforce patterns of interaction that
worked well enough and broadly enough to be taken as principles of
right action.
Relatively few of us take the leap beyond the morality that was
inherited or given to us, to grasp the broader and more extensible
understanding of morality as patterns of behavior assessed as promoting
increasingly shared values over increasing scope. Society discourages
individual thinking about what is and what is not moral; indeed, it is a
defining characteristic that moral principles subsume both narrow self
interest and narrow situational awareness.  For this reason, one can not
assess the absolute morality of an action in isolation, but we can
legitimately speak of the relative morality of a class of behavior
within context.

Just as lying can clearly be the right action within a specific context
(imagine having one's home invaded and being unable, on moral grounds,
to lie to the invaders about where the children are hiding!), the moral
issue of slavery can be effectively understood only within a larger
context.
The practice of slavery (within a specific context) can be beneficial to
society; numerous examples exist of slavery contributing to the economic
good of a locale, and on a grander scale, the development of western
philosophy (including democracy!) as a result of freeing some from the
drudgery of manual labor and creating an environment conducive to deeper
thought.  And as we seek to elucidate a general principle regarding
slavery, we come face-to-face with other instances of this class of
problem, including rights of women to vote, the moral standing of
sentient beings of various degrees of awareness (farm animals, the great
apes, artificial intelligences), and even the idea that all men, of
disparate mental or emotional capability, are created equal?  Could
there be a principle constituting a coherent positive-sum stance toward
issues of moral interaction between agents of inherently different
awareness and capabilities?

Are we as a society yet ready to adopt a higher level of social
decision-making, moral to the extent that it effectively promotes
increasingly shared values over increasing scope, one that provides an
increasingly clear vision of effective interaction between agents of
diverse and varying capabilities, or are we going to hold tightly to the
previous best model, one that comfortingly but childishly insists on the
fiction of some form of strict equality between agents?  Are we mature
enough to see that just at the point in human progress where
technological development (biotech, nanotech, AI) threatens to
drastically disrupt that which we value, we are gaining the necessary
tools to organize at a higher level--effectively a higher level of
wisdom?


Well, I think slavery is bad, even if it does help society - unless we 
were actually in danger of extinction without it or something. So yes, 
the moral rules must bend in the face of changing circumstances, but the 
point at which they bend will be different for each individual, and 
there is no objective way to define what this point would or should be.


Slightly off topic, I don't see why we would design AI's to experience 
emotions such as resentment, anger, fear, pain etc. 


John McCarthy says in his essay, Making Robots Conscious of their Mental 
States
http://www-formal.stanford.edu/jmc/consciousness/consciousness.html

In fact, if we could 
reprogram our own minds at will, it would be a very different world. 
Suppose you were upset because you lost your job. You might decide to 
stay upset to the degree that it remains a motivating factor to look for 
other work, but not affect your sleep, ability to experience pleasure, 
etc. If you can't find work you might decide to downgrade your 
expectations, so that you are just as content having less money or a 
menial job, or just as content for the next six months but then have the 
motivation to look for interesting work kick in again, but without the 
confidence- and enthusiasm-sapping disappointment that comes from 
repeated failure to find work. 


I think that's called a cocaine habit. :-)

Brent Meeker


Re: computer pain

2006-12-24 Thread Brent Meeker


Stathis Papaioannou wrote:



Jef Allbright writes:

[Stathis Papaioannou]
If slavery could be scientifically shown to promote the well-being of 
the species as a whole does that mean we

should have slavery? Does it mean that slavery is good?


Teaching that slavery is bad is similar to teaching that lying is
bad.  In each case it's a narrow over-simplification of a more general
principle of what works. Children are taught simplified modes of moral
reasoning to match their smaller context of understanding. At one end of
a moral scale are the moral instincts (experienced as pride, disgust,
etc.) that are an even more condensed form of knowledge of what worked
in the environment of evolutionary adaptation. Further up the scale are
cultural--including religious--laws and even the patterns of our
language that further codify and reinforce patterns of interaction that
worked well enough and broadly enough to be taken as principles of
right action.
Relatively few of us take the leap beyond the morality that was
inherited or given to us, to grasp the broader and more extensible
understanding of morality as patterns of behavior assessed as promoting
increasingly shared values over increasing scope. Society discourages
individual thinking about what is and what is not moral; indeed, it is a
defining characteristic that moral principles subsume both narrow self
interest and narrow situational awareness.  For this reason, one can not
assess the absolute morality of an action in isolation, but we can
legitimately speak of the relative morality of a class of behavior
within context.

Just as lying can clearly be the right action within a specific context
(imagine having one's home invaded and being unable, on moral grounds,
to lie to the invaders about where the children are hiding!), the moral
issue of slavery can be effectively understood only within a larger
context.
The practice of slavery (within a specific context) can be beneficial to
society; numerous examples exist of slavery contributing to the economic
good of a locale, and on a grander scale, the development of western
philosophy (including democracy!) as a result of freeing some from the
drudgery of manual labor and creating an environment conducive to deeper
thought.  And as we seek to elucidate a general principle regarding
slavery, we come face-to-face with other instances of this class of
problem, including rights of women to vote, the moral standing of
sentient beings of various degrees of awareness (farm animals, the great
apes, artificial intelligences), and even the idea that all men, of
disparate mental or emotional capability, are created equal?  Could
there be a principle constituting a coherent positive-sum stance toward
issues of moral interaction between agents of inherently different
awareness and capabilities?

Are we as a society yet ready to adopt a higher level of social
decision-making, moral to the extent that it effectively promotes
increasingly shared values over increasing scope, one that provides an
increasingly clear vision of effective interaction between agents of
diverse and varying capabilities, or are we going to hold tightly to the
previous best model, one that comfortingly but childishly insists on the
fiction of some form of strict equality between agents?  Are we mature
enough to see that just at the point in human progress where
technological development (biotech, nanotech, AI) threatens to
drastically disrupt that which we value, we are gaining the necessary
tools to organize at a higher level--effectively a higher level of
wisdom?


Well, I think slavery is bad, even if it does help society - unless we 
were actually in danger of extinction without it or something. 


Slavery is bad almost by definition.  It consists in treating beings we empathize with as though we had no empathy. 

So yes, 
the moral rules must bend in the face of changing circumstances, but the 
point at which they bend will be different for each individual, and 
there is no objective way to define what this point would or should be.


Slightly off topic, I don't see why we would design AI's to experience 
emotions such as resentment, anger, fear, pain etc. 


John McCarthy says in his essay, Making Robots Conscious of their Mental 
States
http://www-formal.stanford.edu/jmc/consciousness/consciousness.html

In fact, if we could 
reprogram our own minds at will, it would be a very different world. 


Better living through chemistry!

Suppose you were upset because you lost your job. You might decide to 
stay upset to the degree that it remains a motivating factor to look for 
other work, but not affect your sleep, ability to experience pleasure, 
etc. If you can't find work you might decide to downgrade your 
expectations, so that you are just as content having less money or a 
menial job, or just as content for the next six months but then have the 
motivation to look for interesting work kick in again, but without the 
confidence- and 

RE: computer pain

2006-12-24 Thread Jef Allbright


Stathis Papaioannou wrote:


Jef Allbright writes:

[Stathis Papaioannou]
 If slavery could be scientifically shown to promote the 
well-being of 
 the species as a whole does that mean we should have 
slavery? Does it 
 mean that slavery is good?
 

Teaching that slavery is bad is similar to teaching
that lying is bad.  In each case it's a narrow
over-simplification of a more general principle of what
works. Children are taught simplified modes of moral
reasoning to match their smaller context of 
understanding. At one end of a moral scale are the

moral instincts (experienced as pride, disgust, etc.)
that are an even more condensed form of knowledge of
what worked in the environment of evolutionary adaptation. 
Further up the scale are cultural--including religious--laws

and even the patterns of our language that further codify
and reinforce patterns of interaction that worked well
enough and broadly enough to be taken as principles 
of right action.


Relatively few of us take the leap beyond the morality
that was inherited or given to us, to grasp the broader
and more extensible understanding of morality as patterns
of behavior assessed as promoting increasingly shared values
over increasing scope. Society discourages individual
thinking about what is and what is not moral; indeed, it is
a defining characteristic that moral principles subsume 
both narrow self interest and narrow situational awareness. 
For this reason, one can not assess the absolute morality

of an action in isolation, but we can legitimately speak of
the relative morality of a class of behavior within context.

Just as lying can clearly be the right action within a
specific context (imagine having one's home invaded and
being unable, on moral grounds, to lie to the invaders about
where the children are hiding!), the moral issue of slavery
can be effectively understood only within a larger context.

The practice of slavery (within a specific context) can be 
beneficial to society; numerous examples exist of slavery

contributing to the economic good of a locale, and on a
grander scale, the development of western philosophy
(including democracy!) as a result of freeing some from
the drudgery of manual labor and creating an environment 
conducive to deeper thought.  And as we seek to elucidate

a general principle regarding slavery, we come face-to-face
with other instances of this class of problem, including
rights of women to vote, the moral standing of sentient
beings of various degrees of awareness (farm animals, the
great apes, artificial intelligences), and even the idea 
that all men, of disparate mental and emotional capability,

are created equal?  Could there be a principle constituting
a coherent positive-sum stance toward issues of moral
interaction between agents of inherently different awareness
and capabilities?

Are we as a society yet ready to adopt a higher level of social 
decision-making, moral to the extent that it effectively

promotes increasingly shared values over increasing scope, one
that provides an increasingly clear vision of effective
interaction between agents of diverse and varying capabilities,
or are we going to hold tightly to the previous best model, one
that comfortingly but childishly insists on the fiction of some
form of strict equality between agents?  Are we  mature enough
to see that just at the point in human progress where 
technological development (biotech, nanotech, AI) threatens to 
drastically disrupt that which we value, we are gaining the 
necessary tools to organize at a higher level--effectively a

higher level of wisdom?


Well, I think slavery is bad, even if it does help society - 
unless we were actually in danger of extinction without it or 
something. So yes, the moral rules must bend in the face of 
changing circumstances, but the point at which they bend will 
be different for each individual, and there is no objective 
way to define what this point would or should be.


I thought you and I had already clearly agreed that there can be no
absolute or objective morality, since moral judgments are based on
subjective values.  And I thought we had already moved on to discussion
of how agents do in fact hold a good portion of their subjective values
in common, due to common environment, culture and  evolutionary
heritage.  In my opinion, the discussion begins to get interesting from
this point, because the population tends to converge on agreement as to
general principles of effective interaction, while tending to diverge on
matters of individual interests and preferences.
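
As a toy illustration of that convergence/divergence claim (my own sketch, not
anything Jef has proposed; the statements, agents and scores below are all
invented), one could record each agent's endorsement of a statement as a number
and look at the spread across the population:

# Python sketch: treating "degree of agreement" as something measurable.
# All data here is invented for illustration.
from statistics import mean, pstdev

# Each agent rates how strongly they endorse a statement,
# from -1.0 (strongly reject) to +1.0 (strongly endorse).
ratings = {
    "coercing others against their will is wrong": [0.9, 0.8, 1.0, 0.7, 0.9],
    "lying to a home invader is wrong":            [-0.6, -0.8, -0.5, -0.9, -0.7],
    "olives taste good":                           [0.9, -0.7, 0.2, -0.4, 0.8],
}

for statement, scores in ratings.items():
    # A small spread means broad agreement; a large spread means divergence.
    print(f"{statement!r}: mean={mean(scores):+.2f}, spread={pstdev(scores):.2f}")

The individual ratings are subjective, but the measured spread is an objective
fact about the population: general principles of interaction should show a small
spread, matters of personal preference a large one.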

Please notice that I don't say that slavery *is* immoral, because as you
well know there's no objective basis for that claim. But I do say that
people will increasingly agree in their assessment that it is highly
immoral.  Their *statements* are objective facts, and measurements of
the degree of agreement are objective facts, and on this basis I claim
that we can implement an improved form of social 

RE: computer pain

2006-12-24 Thread Stathis Papaioannou



Brent Meeker writes:

 In fact, if we could 
 reprogram our own minds at will, it would be a very different world. 
 Suppose you were upset because you lost your job. You might decide to 
 stay upset to the degree that it remains a motivating factor to look for 
 other work, but not affect your sleep, ability to experience pleasure, 
 etc. If you can't find work you might decide to downgrade your 
 expectations, so that you are just as content having less money or a 
 menial job, or just as content for the next six months but then have the 
 motivation to look for interesting work kick in again, but without the 
 confidence- and enthusiasm-sapping disappointment that comes from 
 repeated failure to find work. 


I think that's called a cocaine habit. :-)


The difference between happiness that is derived from illicit drugs and happiness 
derived from real life is that the former does not really last, ending in tolerance, 
dependence, depression, deterioration in physical health, inability to work and 
look after oneself, not to mention criminal activity due to the fact that the drugs 
are illegal. This is because drugs are a very crude way of stimulating the nervous 
system. It is like programming a computer with a soldering iron. The only time drugs 
work well is if there is a relatively simple fault, like an excess or deficit of a certain 
neurotransmitter, and even there you have to be lucky for function to return to 
normal. Changing specific aspects of thinking or emotions without screwing up 
other functions in the process would require much greater finesse than modern 
pharmacology can provide, and greater efficacy than psychology can provide. 

David Pearce in The Hedonistic Imperative, and some science fiction writers (Greg 
Egan, Walter Jon Williams come to mind) have looked at some of the consequences 
of being able to reprogram your emotions, motivations, memories and personality. 
No-one that I am aware of has explored how utterly alien a world in which we had 
access to our own source code at the finest level would be. Perhaps that is one of 
the things that would happen at the Vingean Singularity.
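
To make the reprogramming idea concrete, here is a deliberately crude toy sketch
of what 'turning down one feeling while leaving the rest alone' might look like
if a mind really did expose its own parameters. It is only an illustration; the
drive names and numbers are invented, and nothing here comes from Pearce,
McCarthy or anyone else quoted in this thread.

# Python sketch of the lost-job example: damp the disappointment without
# touching motivation, sleep or the capacity for pleasure.
from dataclasses import dataclass, field

@dataclass
class Mind:
    # Each "drive" is just a number between 0.0 (off) and 1.0 (full strength).
    drives: dict = field(default_factory=lambda: {
        "job_seeking_motivation": 0.8,
        "disappointment_from_failure": 0.7,
        "capacity_for_pleasure": 0.9,
        "sleep_quality": 0.9,
    })

    def reprogram(self, drive: str, new_level: float) -> None:
        """Adjust a single named drive, leaving every other drive untouched."""
        if drive not in self.drives:
            raise KeyError(f"unknown drive: {drive}")
        self.drives[drive] = max(0.0, min(1.0, new_level))

me = Mind()
# Lost the job: stay motivated to look for work, but stop repeated failure
# from sapping sleep, pleasure and confidence.
me.reprogram("disappointment_from_failure", 0.1)
print(me.drives)

The point of the sketch is only that the scenario reduces to adjusting one named
parameter with no side effects on the others, which is exactly the finesse that
drugs and present-day psychology lack.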


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



Re: computer pain

2006-12-24 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 In fact, if we could  reprogram our own minds at will, it would be 
a very different world.  Suppose you were upset because you lost your 
job. You might decide to  stay upset to the degree that it remains a 
motivating factor to look for  other work, but not affect your sleep, 
ability to experience pleasure,  etc. If you can't find work you 
might decide to downgrade your  expectations, so that you are just as 
content having less money or a  menial job, or just as content for 
the next six months but then have the  motivation to look for 
interesting work kick in again, but without the  confidence- and 
enthusiasm-sapping disappointment that comes from  repeated failure 
to find work.

I think that's called a cocaine habit. :-)


The difference between happiness that is derived from illicit drugs and 
happiness derived from real life is that the former does not really 
last, ending in tolerance, dependence, depression, deterioration in 
physical health, inability to work and look after oneself, not to 
mention criminal activity due to the fact that the drugs are illegal. 
This is because drugs are a very crude way of stimulating the nervous 
system. It is like programming a computer with a soldering iron. The 
only time drugs work well is if there is a relatively simple fault, like 
an excess or deficit of a certain neurotransmitter, and even there you 
have to be lucky for function to return to normal. 


Which presumes a well-defined normal.

Changing specific 
aspects of thinking or emotions without screwing up other functions in 
the process would require much greater finesse than modern pharmacology 
can provide, and greater efficacy than psychology can provide.
David Pearce in The Hedonistic Imperative, and some science fiction 
writers (Greg Egan, Walter Jon Williams come to mind) have looked at 
some of the consequences of being able to reprogram your emotions, 
motivations, memories and personality. 


Larry Niven imagined a future in which you would be able to plug into implanted 
electrodes in your brain and selectively stimulate different areas.  I think this was 
suggested to him by popular articles on finding a pleasure center in rats.

No-one that I am aware of has 
explored how utterly alien a world in which we had access to our own 
source code at the finest level would be. 


I wouldn't download anything from Microsoft!

Brent Meeker
The first time Microsoft makes a product that doesn't suck
will be when they build vacuum cleaners.
 --- Bill Jefferys




RE: computer pain

2006-12-23 Thread Stathis Papaioannou



John Mikes writes:


Stathis,
 your 'augmented' ethical maxim is excellent, I could add some more 'except 
for'-s to it.
(lower class, cast, or wealth, - language, - gender, etc.)
The last par, however, is prone to a more serious remark of mine:
topics like you sampled are culture-related prejudicial belief-items. Research 
cannot
solve them, because research is also ADJUSTED TO THE CULTURE  it serves.
A valid medieval research on the number of angels on a pin-tip would not hold in
today's belief-topic of curved space. (Curved angels?)
Merry Christmas to you, too
John


I think the culture-independence test is actually a good test for whether something truly is 
part of science. How to build a nuclear bomb is culture-independent - it won't work if you 
decide to use U-238 just because there is more of it available where you live, for example. 
But whether and how to use the finished weapon is not a question that science can answer, 
although of course it is a question that scientists should ask and apply their own 
culture-dependent values to.

And a merry Christmas to you too, John

Stathis Papaioannou


On 12/21/06, Stathis Papaioannou [EMAIL PROTECTED] 
wrote:
Peter Jones writes:
  Perhaps none of the participants in this thread really disagree. Let me see 
if I
  can summarise:
 
  Individuals and societies have arrived at ethical beliefs for a reason, 
whether that be
  evolution, what their parents taught them, or what it says in a book 
believed to be divinely
  inspired. Perhaps all of these reasons can be subsumed under evolution if 
that term can
  be extended beyond genetics to include all the ideas, beliefs, customs etc. 
that help a
  society to survive and propagate itself. Now, we can take this and 
formalise it in some way
  so that we can discuss ethical questions rationally:
 
  Murder is bad because it reduces the net happiness in society - 
Utilitarianism
 
  Murder is bad because it breaks the sixth commandment - Judaism and 
Christianity
  (interesting that this is only no. 6 on a list of 10: God knows his priorities)
 
  Ethics then becomes objective, given the rules. The meta-ethical 
explanation of evolution,
  broadly understood, as generating the various ethical systems is also 
objective. However,
  it is possible for someone at the bottom of the heap to go over the head of 
utilitarianism,
  evolution, even God and say:
 
  Why should murder be bad? I don't care about the greatest good for the 
greatest number,
  I don't care if the species dies out, and I think God is a bastard and will 
 shout it from hell if he
  sends me there for killing people for fun and profit. This is my own 
personal ethical belief,
  and you can't tell me I'm wrong!
 
  And the psychopath is right: no-one can actually fault him on a point of 
fact or a point of
  logic.

 The psychopath is wrong. He doesn't want to be murdered, but
 he wants to murder. His ethical rule is therefore inconsistent and
 not really ethical at all.
Who says his ethical rule is inconsistent? If he made the claim do unto others 
as you would have
others do unto you he would be inconsistent, but he makes no such claim. 
Billions of people have
lived and died in societies where it is perfectly ethical and acceptable to 
kill inferior races or inferior
species. If they accept some version of the edict you have just elevated to a 
self-evident truth it
would be do unto others as you would have them do unto you, unless they are 
foreigners, or taste
good to eat, or worship different gods. Perfectly consistent, even if horrible.
   In the *final* analysis, ethical beliefs are not a matter of fact or 
logic, and if it seems
  that they are then there is a hidden assumption somewhere.

 Everything starts with assumptions. The question is whether they
 are correct.  A lunatic could try defining 2+2=5 as valid, but
 he will soon run into inconsistencies. That is why we reject
 2+2=5. Ethical rules must apply to everybody as a matter of
 definition. Definitions supply correct assumptions.
So you think arguments about such matters as abortion, capital punishment and 
what sort of
social welfare system we should have are just like arguments about mathematics 
or geology,
and with enough research there should be universal agreement?
Stathis Papaioannou


_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d




Re: computer pain

2006-12-23 Thread Brent Meeker


John Mikes wrote:

Brent:
let me start at the end:
So why don't you believe it?
because I am prejudiced by the brainwashing I got in 101 science education,
the 'conventional' thinking of the (ongoing) science establishment - still
brainwashing the upcoming scientist-generations with the same '101' -
(which is also an answer to your 'conventional' quest:)


It seems your answer is that it's just a convention that you happen to have 
learned - a mere artifact of culture as propounded by various post-modernists.



Unconventional is a lot on this list many of them to my liking (personal!)
and seemingly to yours, too.

I leave it to the conventional(G) scientists to agree whether the Earth
is spherical (if it IS?) and used this example from the precedent texts 
just

as an 'unconventional' variant thinking.
We (all, I suppose) are under a lot of influence from the 101 sciences and
my point was exactly to raise another possibility (absurd as it may be).


We are influenced by it because it has been very successful.  I don't fly in 
airplanes designed by alternative engineering.  Unconventional ideas interest 
me only in so far as they work as well or better than conventional ones.

Brent Meeker
They laughed at Bozo the Clown too.





Re: computer pain

2006-12-23 Thread John Mikes

Brent:

Brent:
It seems your answer is that it's just a convention that you happen to have
learned - a mere artifact of culture as propounded by various
post-modernists.
JM:
In our culture and its predecessors primitive observations led to
explanations at the level of the then epistemic cognitive inventory, which
increases continually. Those simplistic ideas were retained and mended in
later 'science' of better learning epochs but the basis has not changed: a
physical view of facts and the way we still do value them as our
reality.  Composed into 'models' (groups) of their own.
If you call a newer look - based upon newer epistemic enrichment - as
propounded  by various post-modernists - well, so be it, I don't consider
it pejorative.

Brent (on the 101):
We are influenced by it because it has been very successful.  I don't fly in
airplanes designed by alternative engineering.  Unconventional ideas
interest me only in so far as they work as well or better than conventional
ones.
JM:
I think I hear a mix up of the only one we have for the best one there
is. Your airplanes fall off the sky sometimes (spare me the statistics,
please, 1 is more than enough) contraceptives fail, houses burn down for
electric failures, there are wars in the conventionally based social
culture, our biosphere is going berserk because of our perfect scientific
applications, we have technology-related diseases, nuclear fall outs,
medical mishaps, politicians (oops) and food poisoning, bridge collapses and
other innumerable examples of safety failures of our '101'-based perfect
efficiency in this world. We live (and use) the ONLY one we have. Our
pretension lists the benefits and deems it the best. For the caveman the
BEST weapon was the hand-ax. For a priest the best science is HIS theology.
For a monotheist the best god is his god.
If you are not interested in the 'unconventional novelties' before they
prove to be superior to our ongoing ignorance, you will never get to them.
I go for it, not necessarily successful, but I try. And don't give up.
I don't believe that you want to stay put in the science-religion of our
axioms, emergence, 'givens', chaotic paradoxical beliefs and the
cosmologists' Big Bang narrative. And all the other marvels based on '101'.
Atoms, molecules, spin, energy, space, time, mass, gravitation, electricity,
light, life, mind, etc. just to name some.
They all are usable tools for some practical tasks as long as we have no
better ones
to use  and explanation for them.
In the meantime have a happy new year

John M



On 12/23/06, Brent Meeker [EMAIL PROTECTED] wrote:



John Mikes wrote:
 Brent:
 let me start at the end:
 So why don't you believe it?
 because I am prejudiced by the brainwashing I got in 101 science
education,
 the 'conventional' thinking of the (ongoing) science establishment -
still
 brainwashing the upcoming scientist-generations with the same '101' -
 (which is also an answer to your 'conventional' quest:)

It seems your answer is that it's just a convention that you happen to
have learned - a mere artifact of culture as propounded by various
post-modernists.


 Unconventional is a lot on this list many of them to my liking
(personal!)
 and seemingly to yours, too.

 I leave it to the conventional(G) scientists to agree whether the
Earth
 is spherical (if it IS?) and used this example from the precedent texts
 just
 as an 'unconventional' variant thinking.
 We (all, I suppose) are under a lot of influence from the 101 sciences
and
 my point was exactly to raise another possibility (absurd as it may be).

We are influenced by it because it has been very successful.  I don't fly
in airplanes designed by alternative engineering.  Unconventional ideas
interest me only in so far as they work as well or better than conventional
ones.

Brent Meeker
They laughed at Bozo the Clown too.









RE: computer pain

2006-12-23 Thread Stathis Papaioannou







Mark Peaty writes:



Sorry to be so slow at responding here but life [domestic], the universe and 
everything else right now is competing savagely with this interesting 
discussion. [But one must always think positive; 'Bah, Humbug!' is not 
appropriate, even though the temptation is great some times :-]
Stathis,
I am not entirely convinced when you say: 'And the psychopath is right: no-one 
can actually fault him on a point of fact or a point of logic'
That would only be right if we allowed that his [psychopathy is mostly a male 
affliction I believe] use of words is easily as reasonable as yours or mine. 
However, where the said psycho. is purporting to make authoritative statements 
about the world, it is not OK for him to purport that what he describes is 
unquestionably factual and his reasoning from the facts as he sees them is 
necessarily authoritative for anyone else. This is because, qua psychopath, he 
is not able to make the fullest possible free decisions about what makes people 
tick or even about what is reality for the rest of us. He is, in a sense, 
mortally wounded, and forever impaired; condemned always to make only 'logical' 
decisions. :-)
The way I see it, roughly and readily, is that there are in fact certain 
statements/descriptions about the world and our place in it which are MUCH MORE 
REASONABLE than a whole lot of others. I think therefore that, even though you 
might be right from a 'purely logical' point of view when you say the 
following: 'In the *final* analysis, ethical beliefs are not a matter of fact 
or logic, and if it seems that they are then there is a hidden assumption 
somewhere'
in fact, from the point of view of practical living and the necessities of 
survival, the correct approach is to assert what amounts to a set of practical 
axioms, including:
 *   the mere fact of existence is the basis of value, that good and bad are 
expressed differently within - and between - different cultures and their 
sub-cultures but ultimately there is an objective, absolute basis for the 
concept of 'goodness', because in all normal circumstances it is better to 
exist than not to exist,
 *   related to this and arising out of it is the realisation that all normal, 
healthy humans understand what is meant by both 'harm' and 'suffering', 
certainly those who have reached adulthood,
 *   furthermore, insofar as it is clearly recognisable that continuing to 
exist as a human being requires access to and consumption of all manner of 
natural resources and human-made goods and services, it is in our interests to 
nurture and further the inclinations in ourselves and others to behave in ways 
supportive of cooperation for mutual and general benefit wherever this is 
reasonably possible, and certainly not to act destructively or disruptively 
unless it is clear that doing so will prevent a much greater harm from 
occurring.
It ought to be clear to all reasonable persons not engaged in self deception 
that in this modern era each and every one of us is dependent - always - on at 
least a thousand other people doing the right thing, or trying to anyway. Thus 
the idea of 'manly', rugged, individualism is a romantic nonsense unless it 
also incorporates a recognition of mutual interdependence and the need for real 
fairness in social dealings at every level. Unless compassion, democracy and 
ethics are recognised [along with scientific method] as fundamental 
prerequisites for OUR survival, policies and practices will pretty much 
inevitably become self-defeating and destructive, no matter how 
well-intentioned to start with.
In the interest of brevity I add the following quasi-axioms.
 *   the advent of scientific method on Earth between 400 and 500 years ago has 
irreversibly transformed the human species so that now we can reasonably assert 
that the human universe is always potentially infinite, so long as it exists 
and we believe it to be so
 *   to be fully human requires taking responsibility for one's actions and 
this means consciously choosing to do things or accepting that one has made a 
choice even if one cannot remember consciously choosing
 *   nobody knows the future, so all statements about the future are either 
guesswork or statements of desires. Furthermore our lack of knowledge of times 
to come is very deep, such that we have no truly reasonable basis for 
dismissing the right to survive of any persons on the planet - or other living 
species for that matter - unless it can be clearly shown that such killing or 
allowing to die, is necessary to prevent some far greater harm and the 
assertion of this is of course hampered precisely by our lack of knowledge of 
the future
This feels incomplete but it needs to be sent.
Regards
Mark Peaty  CDES
[EMAIL PROTECTED]
http://www.arach.net.au/~mpeaty/


I agree with you as far as advice for how to live a good life goes, but I guess where 
I disagree is on the technical matter of what we call 

RE: computer pain

2006-12-23 Thread Stathis Papaioannou



Brent Meeker writes:

[Mark Peaty]

 From the foregoing it can be seen that while there can be no objective
 morality, nor any absolute morality, it is reasonable to expect
 increasing agreement on the relative morality of actions within an
 expanding context.  Further, similar to the entropic arrow of time, we
 can conceive of an arrow of morality corresponding to the ratcheting
 forward of an increasingly broad context of shared values (survivors of
 coevolutionary competition) promoted via awareness of increasingly
 effective principles of interaction (scientific knowledge of what works,
 extracted from regularities in the environment.)



[Stathis Papaioannou]
 What if the ratcheting forward of shared values is at odds with 
 evolutionary expediency, i.e. there is some unethical policy that 
 improves the fitness of the species? To avoid such a dilemma you would 
 have to define as ethical everything that improves the fitness of the species, 
 and I'm not sure you want to do that.


If your species doesn't define as unethical that which is contrary to continuation of the 
species, your species won't be around too long.  Our problem is that cultural evolution 
has been so rapid compared to biological evolution that some of our hardwired values are 
not so good for continuation of our (and many other) species.  I don't think ethics is a 
matter of definitions; that's like trying to fly by settling on a definition of 
airplane.  But looking at the long run survival of the species might produce 
some good ethical rules; particularly if we could predict the future consequences clearly.


If slavery could be scientifically shown to promote the well-being of the species 
as a whole does that mean we should have slavery? Does it mean that slavery 
is good?


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



RE: computer pain

2006-12-23 Thread Stathis Papaioannou



Oops, it was Jef Allbright, not Mark Peaty responsible for the first quote 
below.






From: [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Subject: RE: computer pain
Date: Sun, 24 Dec 2006 15:31:03 +1100



Brent Meeker writes:

[Mark Peaty]
  From the foregoing it can be seen that while there can be no objective
  morality, nor any absolute morality, it is reasonable to expect
  increasing agreement on the relative morality of actions within an
  expanding context.  Further, similar to the entropic arrow of time, we
  can conceive of an arrow of morality corresponding to the ratcheting
  forward of an increasingly broad context of shared values (survivors of
  coevolutionary competition) promoted via awareness of increasingly
  effective principles of interaction (scientific knowledge of what works,
  extracted from regularities in the environment.)


[Stathis Papaioannou]
  What if the ratcheting forward of shared values is at odds with 
  evolutionary expediency, i.e. there is some unethical policy that 
  improves the fitness of the species? To avoid such a dilemma you would 
  have to define as ethical everything that improves the fitness of the species, 
  and I'm not sure you want to do that.
 
 If your species doesn't define as unethical that which is contrary to continuation of the species, your species won't be around too long.  Our problem is that cultural evolution has been so rapid compared to biological evolution that some of our hardwired values are not so good for continuation of our (and many other) species.  I don't think ethics is a matter of definitions; that's like trying to fly by settling on a definition of airplane.  But looking at the long run survival of the species might produce some good ethical rules; particularly if we could predict the future consequences clearly.


If slavery could be scientifically shown to promote the well-being of the species 
as a whole does that mean we should have slavery? Does it mean that slavery 
is good?
 
Stathis Papaioannou

_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
 


_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d




RE: computer pain

2006-12-23 Thread Stathis Papaioannou



Peter Jones writes:


  (1) Although moral assessment is inherently subjective--being relative
  to internal values--all rational agents share some values in common due
  to sharing a common evolutionary heritage or even more fundamentally,
  being subject to the same physical laws of the universe.

 That may be so, but we don't exactly have a lot of intelligent species to make
 the comparison. It is not difficult to imagine species with different 
evolutionary
 heritages which would have different ethics to our own, certainly in the 
details
 and probably in many of the core values.

It isn't difficult to imagine humans with different mores to our own,
particularly since they actually exist... the point
is not that they might believe certain things to be ethical;
the point is, what *is* actually ethical.

There is a difference between mores and morality
just as there is between belief and truth.


When I say I believe an empirical fact, I mean that if you go out and have a look 
and a poke, you will see that the empirical fact is so; and if you don't, tell me and 
I'll change my belief. Ethical beliefs are not like that because they are ultimately 
dependent on values. You can say you don't like someone's values, you can say that 
his values are contrary to evolution or whatever, but you can't say he is wrong 
about his values in the way he might be wrong about an empirical fact, because the 
only empirical claim he is making is about how he thinks and feels.


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



Re: computer pain

2006-12-23 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

[Mark Peaty]

 From the foregoing it can be seen that while there can be no objective
 morality, nor any absolute morality, it is reasonable to expect
 increasing agreement on the relative morality of actions within an
 expanding context.  Further, similar to the entropic arrow of time, we
 can conceive of an arrow of morality corresponding to the ratcheting
 forward of an increasingly broad context of shared values 
(survivors of

 coevolutionary competition) promoted via awareness of increasingly
 effective principles of interaction (scientific knowledge of what 
works,

 extracted from regularities in the environment.)



[Stathis Papaioannou]
 What if the ratcheting forward of shared values is at odds with 
evolutionary expediency, i.e. there is some unethical policy that 
improves the fitness of the species? To avoid such a dilemma you would 
have to define as ethical everything that improves the fitness of the 
species, and I'm not sure you want to do that.


If your species doesn't define as unethical that which is contrary to 
continuation of the species, your species won't be around too long.  
Our problem is that cultural evolution has been so rapid compared to 
biological evolution that some of our hardwired values are not so good 
for continuation of our (and many other) species.  I don't think 
ethics is a matter of definitions; that's like trying to fly by 
settling on a definition of airplane.  But looking at the long run 
survival of the species might produce some good ethical rules; 
particularly if we could predict the future consequences clearly.


If slavery could be scientifically shown to promote the well-being of 
the species as a whole does that mean we should have slavery? Does it 
mean that slavery is good?


Note that I didn't say promote the well-being; I said contrary to the 
continuation.  If the species could not continue without slavery, then there are two possible 
futures.  In one of them there's a species that thinks slavery is OK - in the other there is no 
opinion on the subject.

Of course slavery implies the coercive use of our fellow members of society 
against their desires.  So it logically entails that at least those enslaved will not be 
pleased with their situation.  But note that in ancient times one had an absolute right 
to one's life - including selling oneself into slavery, or contracting to be a slave for 
a certain time.  So someone (maybe a radical libertarian) might argue that you should be 
able to risk your own enslavement in exchange for some gain desirable to you.

Brent Meeker





RE: computer pain

2006-12-22 Thread Stathis Papaioannou



Jef Allbright writes:


peterdjones wrote:

 Moral and natural laws.
 
 
 An investigation of natural laws, and, in parallel, a defence 
 of ethical objectivism. The objectivity, to at least some 
 extent, of science will be assumed (the sceptic may differ, 
 but there is no convincing some people).


snip

 As ethical objectivism is a work-in-progress 
 there are many variants, and a considerable literature 
 discussing which is the correct one.


I agree with the thrust of this post and I think there are a few key
concepts which can further clarify thinking on this subject:

(1) Although moral assessment is inherently subjective--being relative
to internal values--all rational agents share some values in common due
to sharing a common evolutionary heritage or even more fundamentally,
being subject to the same physical laws of the universe.


That may be so, but we don't exactly have a lot of intelligent species to make 
the comparison. It is not difficult to imagine species with different evolutionary 
heritages which would have different ethics to our own, certainly in the details 
and probably in many of the core values.



(2) From the point of view of any subjective agent, what is good is
what is assessed to promote the agent's values into the future.

(3) From the point of view of any subjective agent, what is better is
what is assessed as good over increasing scope.

(4) From the point of view of any subjective agent, what is increasingly
right or moral, is decision-making assessed as promoting increasingly
shared values over increasing scope of agents and interactions.

From the foregoing it can be seen that while there can be no objective
morality, nor any absolute morality, it is reasonable to expect
increasing agreement on the relative morality of actions within an
expanding context.  Further, similar to the entropic arrow of time, we
can conceive of an arrow of morality corresponding to the ratcheting
forward of an increasingly broad context of shared values (survivors of
coevolutionary competition) promoted via awareness of increasingly
effective principles of interaction (scientific knowledge of what works,
extracted from regularities in the environment.)


What if the ratcheting forward of shared values is at odds with evolutionary 
expediency, i.e. there is some unethical policy that improves the fitness of the 
species? To avoid such a dilemma you would have to define as ethical everything 
that improves the fitness of the species, and I'm not sure you want to do that. 


Further, from this theory of metaethics we can derive a practical system
of social decision-making based on (1) increasing fine-grained knowledge
of shared values, and (2) application of increasingly effective
principles, selected with regard to models of probable outcomes in a
Rawlsian mode of broad rather than narrow self-interest.


This is really quite a good proposal for building better societies, and one that 
I would go along with, but meta-ethical problems arise if someone simply 
rejects that shared values are important (eg. believes that the values of the 
strong outweigh those of the weak), and ethical problems arise when it is 
time to decide what exactly these shared values are and how they should 
best be promoted. You know this of course, and it is what makes ethics and 
aesthetics different to the natural sciences.



I apologize for the extremely terse and sparse nature of this outline,
but I wanted to contribute these keystones despite lacking the time to
provide expanded background, examples, justifications, or
clarifications.  I hope that these seeds of thought may contribute to a
flourishing garden both on and offlist.

- Jef


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-22 Thread Mark Peaty
Sorry to be so slow at responding here but life [domestic], the universe 
and everything else right now is competing savagely with this 
interesting discussion. [But one must always think positive; 'Bah, 
Humbug!' is not appropriate, even though the temptation is great some 
times :-]


Stathis,
I am not entirely convinced when you say: 'And the psychopath is right: 
no-one can actually fault him on a point of fact or a point of logic'
That would only be right if we allowed that his [psychopathy is mostly a 
male affliction I believe] use of words is easily as reasonable as yours 
or mine. However, where the said psycho. is purporting to make 
authoritative statements about the world, it is not OK for him to 
purport that what he describes is unquestionably factual and his 
reasoning from the facts as he sees them is necessarily authoritative 
for anyone else. This is because, qua psychopath, he is not able to make 
the fullest possible free decisions about what makes people tick or even 
about what is reality for the rest of us. He is, in a sense, mortally 
wounded, and forever impaired; condemned always to make only 'logical' 
decisions. :-)


The way I see it, roughly and readily, is that there are in fact certain 
statements/descriptions about the world and our place in it which are 
MUCH MORE REASONABLE than a whole lot of others. I think therefore that, 
even though you might be right from a 'purely logical' point of view 
when you say the following: 'In the *final* analysis, ethical beliefs 
are not a matter of fact or logic, and if it seems that they are then 
there is a hidden assumption somewhere'
in fact, from the point of view of practical living and the necessities 
of survival, the correct approach is to assert what amounts to a set of 
practical axioms, including:


   * the mere fact of existence is the basis of value, that good and
 bad are expressed differently within - and between - different
 cultures and their sub-cultures but ultimately there is an
 objective, absolute basis for the concept of 'goodness', because
 in all normal circumstances it is better to exist than not to exist,
   * related to this and arising out of it is the realisation that all
 normal, healthy humans understand what is meant by both 'harm' and
 'suffering', certainly those who have reached adulthood,
   * furthermore, insofar as it is clearly recognisable that continuing
 to exist as a human being requires access to and consumption of
 all manner of natural resources and human-made goods and services,
 it is in our interests to nurture and further the inclinations in
 ourselves and others to behave in ways supportive of cooperation
 for mutual and general benefit wherever this is reasonably
 possible, and certainly not to act destructively or disruptively
 unless it is clear that doing so will prevent a much greater harm
 from occurring.

It ought to be clear to all reasonable persons not engaged in self-deception 
that in this modern era each and every one of us is dependent - 
always - on at least a thousand other people doing the right thing, or 
trying to anyway. Thus the idea of 'manly', rugged, individualism is a 
romantic nonsense unless it also incorporates a recognition of mutual 
interdependence and the need for real fairness in social dealings at 
every level. Unless compassion, democracy and ethics are recognised 
[along with scientific method] as fundamental prerequisites for OUR 
survival, policies and practices will pretty much inevitably become 
self-defeating and destructive, no matter how well-intentioned to start 
with.


In the interest of brevity I add the following quasi-axioms.

   * the advent of scientific method on Earth between 400 and 500 years
 ago has irreversibly transformed the human species so that now we
 can reasonably assert that the human universe is always
 potentially infinite, so long as it exists and we believe it to be so
   * to be fully human requires taking responsibility for one's actions
 and this means consciously choosing to do things or accepting that
 one has made a choice even if one cannot remember consciously choosing
   * nobody knows the future, so all statements about the future are
 either guesswork or statements of desires. Furthermore our lack of
 knowledge of times to come is very deep, such that we have no
 truly reasonable basis for dismissing the right to survive of any
 persons on the planet - or other living species for that matter -
 unless it can be clearly shown that such killing or allowing to
 die, is necessary to prevent some far greater harm and the
 assertion of this is of course hampered precisely by our lack of
 knowledge of the future
  


This feels incomplete but it needs to be sent.

Regards
Mark Peaty  CDES
[EMAIL PROTECTED]
http://www.arach.net.au/~mpeaty/



Stathis Papaioannou wrote:



Brent meeker writes:



Stathis 

Re: computer pain

2006-12-22 Thread Brent Meeker


Stathis Papaioannou wrote:



Jef Allbright writes:


peterdjones wrote:

 Moral and natural laws.
  An investigation of natural laws, and, in parallel, a defence 
of ethical objectivism. The objectivity, to at least some extent, of 
science will be assumed (the sceptic may differ, but there is no 
convincing some people).


snip

 As ethical objectivism is a work-in-progress  there are many 
variants, and a considerable literature  discussing which is the 
correct one.


I agree with the thrust of this post and I think there are a few key
concepts which can further clarify thinking on this subject:

(1) Although moral assessment is inherently subjective--being relative
to internal values--all rational agents share some values in common due
to sharing a common evolutionary heritage or even more fundamentally,
being subject to the same physical laws of the universe.


That may be so, but we don't exactly have a lot of intelligent species 
to make the comparison. It is not difficult to imagine species with 
different evolutionary heritages which would have different ethics to 
our own, certainly in the details and probably in many of the core values.


Imagine?  Don't you know any women?  :-)




(2) From the point of view of any subjective agent, what is good is
what is assessed to promote the agent's values into the future.

(3) From the point of view of any subjective agent, what is better is
what is assessed as good over increasing scope.

(4) From the point of view of any subjective agent, what is increasingly
right or moral, is decision-making assessed as promoting increasingly
shared values over increasing scope of agents and interactions.

From the foregoing it can be seen that while there can be no objective
morality, nor any absolute morality, it is reasonable to expect
increasing agreement on the relative morality of actions within an
expanding context.  Further, similar to the entropic arrow of time, we
can conceive of an arrow of morality corresponding to the ratcheting
forward of an increasingly broad context of shared values (survivors of
coevolutionary competition) promoted via awareness of increasingly
effective principles of interaction (scientific knowledge of what works,
extracted from regularities in the environment.)


What if the ratcheting forward of shared values is at odds with 
evolutionary expediency, i.e. there is some unethical policy that 
improves the fitness of the species? To avoid such a dilemma you would 
have to define as ethical everything that improves the fitness of the species, 
and I'm not sure you want to do that.


If your species doesn't define as unethical that which is contrary to continuation of the 
species, your species won't be around too long.  Our problem is that cultural evolution 
has been so rapid compared to biological evolution that some of our hardwired values are 
not so good for continuation of our (and many other) species.  I don't think ethics is a 
matter of definitions; that's like trying to fly by settling on a definition of 
airplane.  But looking at the long run survival of the species might produce 
some good ethical rules; particularly if we could predict the future consequences clearly.


Further, from this theory of metaethics we can derive a practical system
of social decision-making based on (1) increasing fine-grained knowledge
of shared values, and (2) application of increasingly effective
principles, selected with regard to models of probable outcomes in a
Rawlsian mode of broad rather than narrow self-interest.


This is really quite a good proposal for building better societies, and 
one that I would go along with, but meta-ethical problems arise if 
someone simply rejects that shared values are important (eg. believes 
that the values of the strong outweigh those of the weak), 


Historically this problem has been dealt with by those who think shared values are important ganging up on those who don't.  

and ethical 
problems arise when it is time to decide what exactly these shared 
values are and how they should best be promoted. 


Aye, there's the rub.

Brent Meeker





RE: computer pain

2006-12-22 Thread Jef Allbright


Stathis Papaioannou wrote:


Brent Meeker writes:

 Well said!  I agree almost completely - I'm a little 
uncertain about (3) and (4) above and the meaning of scope. 
 Together with the qualifications of Peter Jones regarding 
the lack of universal agreement on even the best supported 
theories of science, you have provided a good outline of the 
development of ethics in a way parallel with the scientific 
development of knowledge.
 
 There's a good paper on the relation facts and values by 
Oliver Curry which bears on many of the above points:
 
 http://human-nature.com/ep/downloads/ep04234247.pdf


That is a well-written paper, particularly good on an 
explanation of the naturalistic fallacy, covering what we 
have been discussing in this thread (and the parallel thread 
on evil etc. with which it seems to have crossed over).  
Basically, the paper argues that Hume's edict that you can't 
get 'ought' from 'is' is no impediment to a naturalistic 
explanation of ethics, and that incidentally Hume himself had 
a naturalistic explanation. Another statement of the 
naturalistic fallacy is that explanation is not the same as 
justification: 
that while Darwinian mechanisms may explain why we have 
certain ethical systems that does not constitute 
justification for those systems. To this Curry counters:


In case this is all rather abstract, let me re-state the 
point by way of an analogy. Suppose that instead of being 
about morality and why people find certain things morally 
good and bad, this article had been about sweetness, and why 
people find certain things sweet and certain things sour. The 
Humean-Darwinian would have argued that humans have an 
evolved digestive system that distinguishes between good and 
bad sources of nutrition and energy; and that the human 
'sweet tooth' is an evolved preference for foods with high 
sugar-content over foods with low sugar-content. If one 
accepted this premise, it would make no sense to complain 
that evolution may have explained why humans find certain 
things sweet, but it cannot tell us whether these things are 
really sweet or not. It follows from the premises of the 
argument that there is no criterion of sweetness independent 
of human psychology, and hence this question cannot arise.


That's fine if we stop at explanation at the descriptive 
level. But sweetness lacks the further dimension of ought: 
if I say sugar is sweet I am stating a fact about the 
relationship between sugar and my tastebuds, while if I say 
murder is bad I am not only stating a fact about how I feel 
about it, I am also making a profound claim about the world. 
In a sense, I think this latter claim or feeling is illusory 
and there is nothing to it beyond genes and upbringing, but I 
still have it, and moreover I can have such feelings in 
conflict with genes and upbringing. As G.E. Moore said (also 
quoted in the article), if I identify good with some 
natural object X, it is always possible to ask, is X good?, 
which means that good must essentially be something else, a 
simple, indefinable, unanalysable object of thought, which 
only contingently coincides with natural objects or their 
properties. The same applies even if you include as natural 
object commands from God. 



I was preparing a response to related questions from Stathis in a
separate post when I noticed that he had already done an excellent job
of clarifying the issue here.  I would add only the following:

The fundamental importance of context cannot be overemphasized in
discussions of Self, Free-will, Morality, etc., anywhere that the
subjective and the objective are considered together.  Like
particle/wave duality, we can only get answers consistent with the
context of our questions.

* Many have attempted to bridge the gap between is and ought, but
haven't fully grasped the futility of attempting to find the
intersection of a point of view and its inverse.
* Many have shaken their heads wisely and stated that is and ought are
entirely disjoint, so nothing useful can be said about any supposed
relations between the two.
* Very few have realized the essential relativity of ALL our models of
thought, that there is no privileged frame of reference for making
objective distinctions between is and ought because we are inextricably
part of the system we are trying to describe, and THAT is what grounds
the subjective within the objective.

There can be no absolute or objective basis for claims of moral value,
because subjective assessment is intrinsic to the issue.
But we, as effective agents within the context of an evolving
environment, can *absolutely agree* that:
* subjective assessments have objective consequences, which then feed
back to influence future subjective assessments.
* actions are assessed as good to the extent that they are perceived
to promote into the future the present values of the (necessarily
subjective) assessor.
* actions are assessed as better to the extent that they are perceived
to promote 

RE: computer pain

2006-12-22 Thread Jef Allbright


Brent Meeker wrote:


Stathis Papaioannou wrote:
 
Jef Allbright writes:


snip


Further, from this theory of metaethics we can derive
a practical  system of social decision-making based
on (1) increasing fine-grained knowledge of shared values,
and (2) application of increasingly effective principles,
selected with regard to models of probable outcomes in
a Rawlsian mode of broad rather than narrow self-interest.


This is really quite a good proposal for building better
societies, and one that I would go along with, but meta-ethical 
problems arise if someone simply rejects that shared values

are important (eg. believes that the values of the strong
outweigh those of the weak),


Historically this problem has been dealt with by those who 
think shared values are important ganging up on those who don't.  


and ethical
problems arise when it is time to decide what exactly these
shared values are and how they should best be promoted.


Aye, there's the rub.


Because any decision-making is done within a limited context, but the
consequences arise within a necessarily larger (future) context, we can
never be sure of the exact consequences of our decisions.  Therefore, we
should strive for decision-making that is increasingly
*right-in-principle*, given our best knowledge of the situation at the
time. Higher-quality principles can be recognized by their greater scope
of applicability and subtlety (more powerful but relatively fewer
side-effects).

With Stathis' elucidation of the Naturalistic Fallacy in a separate post,
and Brent's comments here (more down-to-earth and easily readable, less
abstract than my own would have been) I have very little to add.

- Jef




Re: computer pain

2006-12-22 Thread Brent Meeker


Jef Allbright wrote:


Immediately upon hitting Send on the previous post, I noticed that I had
failed to address a remaining point, below.

Brent Meeker wrote:
  Stathis Papaioannou wrote:
   Jef Allbright writes:

snip

 Further, from this theory of metaethics we can derive a practical 
  system of social decision-making based on (1) increasing  
fine-grained knowledge of shared values, and (2) application of  
increasingly effective principles, selected with regard to models of 
 probable outcomes in a Rawlsian mode of broad rather than narrow 
 self-interest.
  This is really quite a good proposal for building better 
societies,  and one that I would go along with, but meta-ethical 
problems arise  if someone simply rejects that shared values are 
important (eg.  believes that the values of the strong outweigh 
those of the weak),
  Historically this problem has been dealt with by those who think  
shared values are important ganging up on those who don't.

  and ethical
 problems arise when it is time to decide what exactly these shared 
 values are and how they should best be promoted.

  Aye, there's the rub.

Because any decision-making is done within a limited context, but the 
consequences arise within a necessarily larger (future) context, we 
can never be sure of the exact consequences of our decisions.  
Therefore, we should strive for decision-making that is increasingly 
*right-in-principle*, given our best knowledge of the situation at the 
time. Higher-quality principles can be recognized by their greater 
scope of applicability and subtlety (more powerful but relatively 
fewer side-effects).




It's an interesting question as to how we might best know our
fine-grained human values across an entire population, given that we can
hardly begin to express them ourselves, let alone their complex internal
and external relationships and dependencies.  There's also the question
of sufficient motivation, since very few of us would want to spend a
great deal of time answering (and later updating) questionnaires.

The best (possibly) workable idea I have is to use story-telling.  It
might be done in the form of a game of collaborative story-telling where
people would contribute short scenarios where the actions and
interactions of the characters would encode systems of values. Then,
software could analyze the text, extract significant features into a
high-dimensional array of vectors, and from there, principal component
analysis, clustering, rankings of association and similarity could be
done mathematically via unsupervised software with the higher level
information available for visualization. This idea needs more fleshing
out and it might be possible to perform limited validation of the
concept using the existing (and growing) corpus of fictional literature
available in digital form.
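
As a rough illustration of the kind of unsupervised pipeline described above, here is a minimal sketch in Python with scikit-learn; the toy scenarios, parameter choices and cluster counts are invented for illustration only and are not drawn from any actual corpus or existing system:

# Sketch of the pipeline described above: vectorize short value-laden scenarios,
# reduce them along principal directions, cluster them, and rank similarity.
# Assumes scikit-learn is installed; the example scenarios are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD  # PCA-like reduction suited to sparse text features
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

scenarios = [
    "She returned the lost wallet even though no one would have known.",
    "He shared his harvest with the neighbouring village during the drought.",
    "The officer followed orders although he believed them to be unjust.",
    "They broke the rule to save the stranger trapped by the flood.",
]

# 1. Extract significant features into a high-dimensional array of vectors.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(scenarios)

# 2. Principal-component-style reduction of the feature space.
X_reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# 3. Unsupervised clustering of scenarios with roughly similar value content.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)

# 4. Rankings of association and similarity between scenarios.
similarity = cosine_similarity(X)

for label, text in zip(labels, scenarios):
    print(f"cluster {label}: {text}")
print("pairwise similarity:\n", similarity.round(2))

On a real corpus the interesting work would lie in the feature extraction and in interpreting the clusters, not in the mechanics above; the sketch only shows that the mathematical steps listed are routine once the scenarios exist in digital form.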


When people tell me, in defense of an omnibenevolent God, that this is the best 
of all possible worlds, I point out to them that in Hollywood movies, good 
always triumphs over evil...and these movies are widely recognized as 
unrealistic.

Brent Meeker
No good deed goes unpunished.
--- Claire Booth Luce, U.S. Senator




Re: computer pain

2006-12-22 Thread Brent Meeker


1Z wrote:



Stathis Papaioannou wrote:

Jef Allbright writes:

 peterdjones wrote:

  Moral and natural laws.
 
 
  An investigation of natural laws, and, in parallel, a defence
  of ethical objectivism. The objectivity, to at least some
  extent, of science will be assumed (the sceptic may differ,
  but there is no convincing some people).

 snip

  As ethical objectivism is a work-in-progress
  there are many variants, and a considerable literature
  discussing which is the correct one.

 I agree with the thrust of this post and I think there are a few key
 concepts which can further clarify thinking on this subject:

 (1) Although moral assessment is inherently subjective--being relative
 to internal values--all rational agents share some values in common due
 to sharing a common evolutionary heritage or even more fundamentally,
 being subject to the same physical laws of the universe.

That may be so, but we don't exactly have a lot of intelligent species 
to make
the comparison. It is not difficult to imagine species with different 
evolutionary
heritages which would have different ethics to our own, certainly in 
the details

and probably in many of the core values.


It isn't difficult to imagine humans with different mores to our own,
particularly since they actually exist... the point
is not that they might believe certain things to be ethical;
the point is what *is* actually ethical.


If you try to change their ethics, you can only do it by appealing to their 
values.  Their values are objective in the sense that they can be discovered.  
And some ethical systems will promote those values better or more broadly than 
others.  But I don't see any basis for judging the values themselves as good or 
bad.  You could weigh them according to how likely they are to propagate 
themselves - like Dawkins' evolution of memes, but I don't think that's what 
you mean.



There is a difference between mores and morality
just as there is between belief and truth.


If everyone believes the Earth is flat one can sail around it and show that 
belief is false.  If everyone believes miscegenation is immoral, how could that 
morality be shown to be wrong?  Not by marrying a person of a different race.

Brent Meeker




Re: computer pain

2006-12-22 Thread John Mikes

I really should not, but here it goes:
Brent, you seem to value the conventional ways given by the model used to
formulate physical sciences and Euclidean geometry etc. over mental ways or
ideational arguments.
(There may be considerations to judge mixed marriages for good argumentation
without waiting for physically observable damages.)
Imagine (since Einstein introduced us to spacetime-curvatures already) that
the Earth IS flat with the proviso that as you approach the rim it
changes your straight-line progress: the closer you get the more it
changes (something like the big mass upon spacetime - mutatis mutandis).
So as you close in to the rim, instead of falling off, you curve backwards
and arrive (on a different route) at the point of starting. (No proper
geometry have I devised for that so far.)
It would seem that the Earth is spherical and you circumnavigated it.
Like Paul Churchland's tribe who formulated heat as a fluid changing colors
according to its concentration (in his book Consciousness), and not some
ridiculous vibrations as some human physicists believe.
For the innocent bystander: I do not believe this Flat Earth theory.

Merry Christmas

John M


On 12/22/06, Brent Meeker [EMAIL PROTECTED] wrote:



1Z wrote:


 Stathis Papaioannou wrote:
 Jef Allbright writes:

  peterdjones wrote:
 
   Moral and natural laws.
  
  
   An investigation of natural laws, and, in parallel, a defence
   of ethical objectivism. The objectivity, to at least some
   extent, of science will be assumed (the sceptic may differ,
   but there is no convincing some people).
 
  snip
 
   As ethical objectivism is a work-in-progress
   there are many variants, and a considerable literature
   discussing which is the correct one.
 
  I agree with the thrust of this post and I think there are a few key
  concepts which can further clarify thinking on this subject:
 
  (1) Although moral assessment is inherently subjective--being
relative
  to internal values--all rational agents share some values in common
due
  to sharing a common evolutionary heritage or even more fundamentally,
  being subject to the same physical laws of the universe.

 That may be so, but we don't exactly have a lot of intelligent species
 to make
 the comparison. It is not difficult to imagine species with different
 evolutionary
 heritages which would have different ethics to our own, certainly in
 the details
 and probably in many of the core values.

 It isn't difficult to imagine humans with different mores to our own,
 particularly since they actually exist... the point
 is not that they might believe certain things to be ethical;
 the point is what *is* actually ethical.

If you try to change their ethics, you can only do it by appealing to
their values.  Their values are objective in the sense that they can be
discovered.  And some ethical systems will promote those values better or
more broadly than others.  But I don't see any basis for judging the values
themselves as good or bad.  You could weigh them according to how likely
they are to propagate themselves - like Dawkins' evolution of memes, but I
don't think that's what you mean.


  There is a difference between mores and morality
  just as there is between belief and truth.

If everyone believes the Earth is flat one can sail around it and show
that belief is false.  If everyone believes miscegenation is immoral, how
could that morality be shown to be wrong?  Not by marrying a person of a
different race.

Brent Meeker








Re: computer pain

2006-12-22 Thread Brent Meeker


John Mikes wrote:

I really should not, but here it goes:
Brent, you seem to value the conventional ways given by the model used 
to formulate physical sciences and Euclidian geometry etc. over mental 
ways or ideational arguments.


All models are mental and ideational.  That's why they are models.  Can you explain what you mean 
by conventional and unconventional?

(There may be considerations to judge mixed marriages for good 
argumentation without waiting for physically observable damages.)
Imagine (since Einstein introduced us to spacetime-curvatures already) 
that the Earth IS flat with the proviso that as you approach the 
rim it changes your straight-line progress: the closer you get the 
more it changes (something like the big mass upon spacetime - mutatis 
mutandis). So as you close in to the rim, instead of falling off, you 
curve backwards and arrive (on a different route) at the point of 
starting. (No proper geometry have I devised for that so far.)

It would seem that the Earth is spherical and you circumnavigated it.


And this would be different from a spherical Earth how?

Like Paul Churchland's tribe who formulated heat as a fluid changing 
colors according to its concentration (in his book Consciousness), and 
not some ridiculous vibrations as some human physicists believe.


What's your point?...that any observation can be explained in more than one way and since 
we cannot apprehend reality itself we must remain agnostic and indifferent 
between a flat and spherical Earth?


For the innocent bystander: I do not believe this Flat Earth theory.


So why don't you believe it?

Brent Meeker




RE: computer pain

2006-12-21 Thread Stathis Papaioannou






Peter Jones writes:


 Perhaps none of the participants in this thread really disagree. Let me see 
if I
 can summarise:

 Individuals and societies have arrived at ethical beliefs for a reason, 
whether that be
 evolution, what their parents taught them, or what it says in a book believed 
to be divinely
 inspired. Perhaps all of these reasons can be subsumed under evolution if 
that term can
 be extended beyond genetics to include all the ideas, beliefs, customs etc. 
that help a
 society to survive and propagate itself. Now, we can take this and formalise 
it in some way
 so that we can discuss ethical questions rationally:

 Murder is bad because it reduces the net happiness in society - Utilitarianism

 Murder is bad because it breaks the sixth commandment - Judaism and 
Christianity
 (interesting that this is only no. 6 on a list of 10: God knows his priorities)

 Ethics then becomes objective, given the rules. The meta-ethical explanation 
of evolution,
 broadly understood, as generating the various ethical systems is also 
objective. However,
 it is possible for someone at the bottom of the heap to go over the head of 
utilitarianism,
 evolution, even God and say:

 Why should murder be bad? I don't care about the greatest good for the 
greatest number,
 I don't care if the species dies out, and I think God is a bastard and will 
shout it from hell if
 sends me there for killing people for fun and profit. This is my own personal 
ethical belief,
 and you can't tell me I'm wrong!

 And the psychopath is right: no-one can actually fault him on a point of fact 
or a point of
 logic.

The psychopath is wrong. He doesn't want to be murdered, but
he wants to murder. His ethical rule is therefore inconsistent and
not
really ethical at all.


Who says his ethical rule is inconsistent? If he made the claim do unto others as you would have 
others do unto you he would be inconsistent, but he makes no such claim. Billions of people have 
lived and died in societies where it is perfectly ethical and acceptable to kill inferior races or inferior 
species. If they accept some version of the edict you have just elevated to a self-evident truth it 
would be do unto others as you would have them do unto you, unless they are foreigners, or taste 
good to eat, or worship different gods. Perfectly consistent, even if horrible.



  In the *final* analysis, ethical beliefs are not a matter of fact or logic, 
and if it seems
 that they are then there is a hidden assumption somewhere.

Everything starts with assumptions. The question is whether they
are correct.  A lunatic could try defining 2+2=5 as valid, but
he will soon run into inconsistencies. That is why we reject
2+2=5. Ethical rules must apply to everybody as a matter of
definition. Definitions supply correct assumptions.


So you think arguments about such matters as abortion, capital punishment and what sort of 
social welfare system we should have are just like arguments about mathematics or geology, 
and with enough research there should be universal agreement?


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: computer pain

2006-12-21 Thread Stathis Papaioannou



Peter Jones writes:


It is indisputable that morality varies in practice across communities.
But the contention of ethical objectivism is not that everyone actually
does hold to a single objective system of ethics; it is only that
ethical questions can be resolved objectively in principle. The
existence of an objective solution to any kind of problem is always
compatible with the existence of people who, for whatever reason, do
not subscribe. The roundness of the Earth is no less an objective fact
for the existence of believers in the Flat Earth theory.(It is odd that
the single most popular argument for ethical subjectivism has so little
logical force).


The Flat Earther is *wrong*. He claims that if you sail in a straight line you
will eventually fall off the edge. But if you do sail in a straight line, you 
don't fall off the edge; lots of people have done it. The psychopath, on the 
other hand, merely claims that if he kills someone, he does not think it is a bad 
thing. And indeed, he kills someone, and he does not think it is a bad thing. He 
is *not* wrong; there is no way you could even claim he is wrong, like the Flat 
Earther claiming that sailors have lied about circumnavigating the globe. You 
could argue that if everyone were a psychopath we would all be dead, and he 
might even agree with you that that would be the case, but then turn around 
and say, So what? Better dead than cissies! As Jamie Rose said, there were 
societies such as the Shakers who didn't mind if they died out and in fact did 
die out, and they are not usually considered immoral.


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-21 Thread 1Z



Stathis Papaioannou wrote:

Peter Jones writes:

  Perhaps none of the participants in this thread really disagree. Let me see 
if I
  can summarise:
 
  Individuals and societies have arrived at ethical beliefs for a reason, 
whether that be
  evolution, what their parents taught them, or what it says in a book 
believed to be divinely
  inspired. Perhaps all of these reasons can be subsumed under evolution if 
that term can
  be extended beyond genetics to include all the ideas, beliefs, customs etc. 
that help a
  society to survive and propagate itself. Now, we can take this and 
formalise it in some way
  so that we can discuss ethical questions rationally:
 
  Murder is bad because it reduces the net happiness in society - 
Utilitarianism
 
  Murder is bad because it breaks the sixth commandment - Judaism and 
Christianity
  (interesting that this is only no. 6 on a list of 10: God knows his priorities)
 
  Ethics then becomes objective, given the rules. The meta-ethical 
explanation of evolution,
  broadly understood, as generating the various ethical systems is also 
objective. However,
  it is possible for someone at the bottom of the heap to go over the head of 
utilitarianism,
  evolution, even God and say:
 
  Why should murder be bad? I don't care about the greatest good for the 
greatest number,
  I don't care if the species dies out, and I think God is a bastard and will 
shout it from hell if
  sends me there for killing people for fun and profit. This is my own 
personal ethical belief,
  and you can't tell me I'm wrong!
 
  And the psychopath is right: no-one can actually fault him on a point of 
fact or a point of
  logic.

 The psychopath is wrong. He doesn't want to be murdered, but
 he wants to murder. His ethical rule is therefore inconsistent and
 not
 really ethical at all.

Who says his ethical rule is inconsistent? If he made the claim do unto others 
as you would have
others do unto you he would be inconsistent, but he makes no such claim.


He doesn't get to choose about that. 2+2=5 is wrong because it leads
to inconsistencies. No-one gets to wriggle out of that by saying
mathematics doesn't need to be consistent.
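
To spell the arithmetic example out (a minimal illustration, assuming ordinary Peano-style arithmetic with the usual rules of equality):

\[
2 + 2 = 4 \quad\text{and}\quad 2 + 2 = 5 \;\Rightarrow\; 4 = 5 \;\Rightarrow\; 0 = 1,
\]

which contradicts the axiom that 0 is not the successor of any number. The stipulation 2+2=5 cannot be kept without giving up equality or the successor axioms, i.e. without wrecking arithmetic itself.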


 Billions of people have
lived and died in societies where it is perfectly ethical and acceptable to 
kill inferior races or inferior
species.


They have lived in societies where it was believed to be.  They may
well have believed the Earth was flat, too.


 If they accept some version of the edict you have just elevated to a 
self-evident truth it
would be do unto others as you would have them do unto you, unless they are 
foreigners, or taste
good to eat, or worship different gods. Perfectly consistent, even if horrible.


It is not consistent. The inconsistency has been built in with the
'unless' clause.


   In the *final* analysis, ethical beliefs are not a matter of fact or 
logic, and if it seems
  that they are then there is a hidden assumption somewhere.

 Everything starts with assumptions. The question is whether they
 are correct.  A lunatic could try defining 2+2=5 as valid, but
 he will soon run into inconsistencies. That is why we reject
 2+2=5. Ethical rules must apply to everybody as a matter of
 definition. Definitions supply correct assumptions.

So you think arguments about such matters as abortion, capital punishment and 
what sort of
social welfare system we should have are just like arguments about mathematics 
or geology,
and with enough research there should be universal agreement?


They are a lot fuzzier. But economics is a lot fuzzier than
mathematics.


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-21 Thread 1Z



Stathis Papaioannou wrote:

Peter Jones writes:

 It is indisputable that morality varies in practice across communities.
 But the contention of ethical objectivism is not that everyone actually
 does hold to a single objective system of ethics; it is only that
 ethical questions can be resolved objectively in principle. The
 existence of an objective solution to any kind of problem is always
 compatible with the existence of people who, for whatever reason, do
 not subscribe. The roundness of the Earth is no less an objective fact
 for the existence of believers in the Flat Earth theory.(It is odd that
 the single most popular argument for ethical subjectivism has so little
 logical force).

The Flat Earther is *wrong*. He claims that if you sail in a straight line you
will eventually fall off the edge. But if you do sail in a straight line, you 
don't fall off the edge; lots of people have done it. The psychopath, on the
other hand, merely claims that if he kills someone, he does not think it is a 
bad
thing.


That is no problem for objective ethics. The fact that someone thinks
not-X is always compatible with the objective truth of X.


 And indeed, he kills someone, and he does not think it is a bad thing. He
is *not* wrong; there is no way you could even claim he is wrong,


He is not wrong about what he thinks. He is wrong about what
is true, ethically.


 like the Flat
Earther claiming that sailors have lied about circumnavigating the globe. You
could argue that if everyone were a psychopath we would all be dead, and he
might even agree with you that that would be the case, but then turn around
and say, So what? Better dead than cissies! As Jamie Rose said, there were
societies such as the Shakers who didn't mind if they died out and in fact did
die out, and they are not usually considered immoral.


That's not the issue. It's not negotiable whether ethics is supposed
to lead to death and misery rather than life and happiness, any more
than there is a valid form of economics which is designed to achieve
abject poverty and societal breakdown in the shortest possible time.


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-21 Thread John Mikes

Stathis,
your 'augmented' ethical maxim is excellent, I could add some more 'except
for'-s to it.
(lower class, caste, or wealth, - language, - gender, etc.)

The last par, however, is prone to a more serious remark of mine:
topics like you sampled are culture-related prejudicial belief-items.
Research cannot
solve them, because research is also ADJUSTED TO THE CULTURE it serves.
A valid medieval research on the number of angels on a pin-tip would not hold
in today's belief-topic of curved space. (Curved angels?)

Merry Christmas to you, too

John

On 12/21/06, Stathis Papaioannou  [EMAIL PROTECTED] wrote:







Peter Jones writes:

  Perhaps none of the participants in this thread really disagree. Let
me see if I
  can summarise:
 
  Individuals and societies have arrived at ethical beliefs for a
reason, whether that be
  evolution, what their parents taught them, or what it says in a book
believed to be divinely
  inspired. Perhaps all of these reasons can be subsumed under
evolution if that term can
  be extended beyond genetics to include all the ideas, beliefs, customs
etc. that help a
  society to survive and propagate itself. Now, we can take this and
formalise it in some way
  so that we can discuss ethical questions rationally:
 
  Murder is bad because it reduces the net happiness in society -
Utilitarianism
 
  Murder is bad because it breaks the sixth commandment - Judaism and
Christianity
  (interesting that this is only no. 6 on a list of 10: God knows his
priorities)
 
  Ethics then becomes objective, given the rules. The meta-ethical
explanation of evolution,
  broadly understood, as generating the various ethical systems is also
objective. However,
  it is possible for someone at the bottom of the heap to go over the
head of utilitarianism,
  evolution, even God and say:
 
  Why should murder be bad? I don't care about the greatest good for
the greatest number,
  I don't care if the species dies out, and I think God is a bastard and
will shout it from hell if
  sends me there for killing people for fun and profit. This is my own
personal ethical belief,
  and you can't tell me I'm wrong!
 
  And the psychopath is right: no-one can actually fault him on a point
of fact or a point of
  logic.

 The psychopath is wrong. He doesn't want to be murdered, but
 he wants to murder. His ethical rule is therefore inconsistent and
 not
 really ethical at all.

Who says his ethical rule is inconsistent? If he made the claim do unto
others as you would have
others do unto you he would be inconsistent, but he makes no such claim.
Billions of people have
lived and died in societies where it is perfectly ethical and acceptable
to kill inferior races or inferior
species. If they accept some version of the edict you have just elevated
to a self-evident truth it
would be do unto others as you would have them do unto you, unless they
are foreigners, or taste
good to eat, or worship different gods. Perfectly consistent, even if
horrible.

   In the *final* analysis, ethical beliefs are not a matter of fact or
logic, and if it seems
  that they are then there is a hidden assumption somewhere.

 Everything starts with assumptions. The question is whether they
 are correct.  A lunatic could try defining 2+2=5 as valid, but
 he will soon run into inconsistencies. That is why we reject
 2+2=5. Ethical rules must apply to everybody as a matter of
 definition. Definitions supply correct assumptions.

So you think arguments about such matters as abortion, capital punishment
and what sort of
social welfare system we should have are just like arguments about
mathematics or geology,
and with enough research there should be universal agreement?

Stathis Papaioannou






RE: computer pain

2006-12-20 Thread Stathis Papaioannou



Brent meeker writes:



Stathis Papaioannou wrote:
 
 
 
 
 Brent meeker writes:
 
  Evolution explains why we have good and bad, but it doesn't explain 
 why  good and bad feel as they do, or why we *should* care about good 
 and  bad
 That's asking why we should care about what we should care about, i.e. 
 good and bad.  Good feels as it does because it is (or was) 
 evolutionarily advantageous to do that, e.g. have sex.  Bad feels as 
 it does because it is (or was) evolutionarily advantageous to not do 
 that, e.g. hold your hand in the fire.  If it felt good you'd do it, 
 because that's what feels good means, a feeling you want to have.
 
 But it is not an absurd question to ask whether something we have 
 evolved to think is good really is good. You are focussing on the 
 descriptive aspect of ethics and ignoring the normative. 


Right - because I don't think there is a normative aspect in the objective 
sense.

Even if it 
 could be shown that a certain ethical belief has been hardwired into our 
 brains this does not make the question of whether the belief is one we 
 ought to have an absurd one. We could decide that evolution sucks and we 
 have to deliberately flout it in every way we can. 


But we could only decide that by showing a conflict with something else we 
consider good.

It might not be a 
 wise policy but it is not *wrong* in the way it would be wrong to claim 
 that God made the world 6000 years ago.


I agree, because I think there is an objective sense in which the world is more 
than 6000yrs old.
 
 beyond following some imperative of evolution. For example, the Nazis 
  argued that eliminating inferior specimens from the gene pool would 
 ultimately  produce a superior species. Aside from their irrational 
 inclusion of certain  groups as inferior, they were right: we could 
 breed superior humans following  Nazi eugenic programs, and perhaps 
 on other worlds evolution has made such  programs a natural part of 
 life, regarded by everyone as good. Yet most of  us would regard 
 them as bad, regardless of their practical benefits.


 Would we?  Before the Nazis gave it a bad name, eugenics was a popular 
 movement in the U.S. mostly directed at sterilizing mentally retarded 
 people.  I think it would be regarded as bad simply because we don't 
 trust government power to be exercised prudently or to be easily 
 limited  - both practical considerations.  If eugenics is practiced 
 voluntarily, as it is being practiced in the U.S., I don't think 
 anyone will object (well a few fundamentalist luddites will).
 
 What about if we tested every child and allowed only the superior ones 
 to reproduce? The point is, many people would just say this is wrong, 
 regardless of the potential benefits to society or the species, and the 
 response to this is not that it is absurd to hold it as wrong (leaving 
 aside emotional rhetoric).


But people wouldn't *just* say this is wrong. This example is a question of societal policy. It's about what *we* will impose on *them*.  It is a question of ethics, not good and bad.  So in fact people would give reasons it was wrong: Who's gonna say what superior means?  Who gets to decide?   They might say, I just think it's bad. - but that would just be an implicit appeal to you to see whether you thought it was bad too.  Social policy can only be judged in terms of what the individual members of society think is good or bad. 


I think I'm losing the thread of what we're discussing here.  Are you holding 
that there are absolute norms of good/bad - as in your example of eugenics?


Perhaps none of the participants in this thread really disagree. Let me see if I 
can summarise:


Individuals and societies have arrived at ethical beliefs for a reason, whether that be 
evolution, what their parents taught them, or what it says in a book believed to be divinely 
inspired. Perhaps all of these reasons can be subsumed under evolution if that term can 
be extended beyond genetics to include all the ideas, beliefs, customs etc. that help a 
society to survive and propagate itself. Now, we can take this and formalise it in some way 
so that we can discuss ethical questions rationally:


Murder is bad because it reduces the net happiness in society - Utilitarianism

Murder is bad because it breaks the sixth commandment - Judaism and Christianity
(interesting that this is only no. 6 on a list of 10: God knows his priorities)

Ethics then becomes objective, given the rules. The meta-ethical explanation of evolution, 
broadly understood, as generating the various ethical systems is also objective. However, 
it is possible for someone at the bottom of the heap to go over the head of utilitarianism, 
evolution, even God and say: 

Why should murder be bad? I don't care about the greatest good for the greatest number, 
I don't care if the species dies out, and I think God is a bastard and will shout it from hell if 
he sends me there for killing people for fun and 

Re: computer pain

2006-12-20 Thread James N Rose




Stathis Papaioannou wrote:


Perhaps none of the participants in this thread really disagree.
Let me see if I can summarise:

Individuals and societies have arrived at ethical beliefs
for a reason, whether that be evolution, what their parents
taught them, or what it says in a book believed to be divinely
inspired. Perhaps all of these reasons can be subsumed under
evolution if that term can be extended beyond genetics to
include all the ideas, beliefs, customs etc. that help a
society to survive and propagate itself. Now, we can take
this and formalise it in some way so that we can discuss
ethical questions rationally:

Murder is bad because it reduces the net happiness
in society - Utilitarianism

Murder is bad because it breaks the sixth commandment
- Judaism and Christianity (interesting that this is only
no. 6 on a list of 10: God [intuitive people] knows his
[know their] priorities)

Ethics then becomes objective, given the rules. The
meta-ethical explanation of evolution, broadly understood,
as generating the various ethical systems is also objective.
However, it is possible for someone at the bottom of the
heap to go over the head of utilitarianism, evolution, even
God and say:

Why should murder be bad? I don't care about the greatest
good for the greatest number, I don't care if the species
dies out, and I think God is a bastard and will shout it
from hell if he sends me there for killing people for fun and
profit. This is my own personal ethical belief, and you can't
tell me I'm wrong!

And the psychopath is right: no-one can actually fault him
on a point of fact or a point of logic. In the *final* analysis,
ethical beliefs are not a matter of fact or logic, and if it seems
that they are then there is a hidden assumption somewhere.

Stathis Papaioannou



A bit convoluted and somewhat embellished, but essentially: correct.

And violence need not be the standard for an ethic leading to
problematic results.  The 19th century Christian sect the Shakers
abhorred reproduction and proselytizing.  They were non-violent and
devout prayer-based people, but their 'ethic' led to their own
extinction.

As impartial evaluators, it is sometimes difficult for us
to unemotionally and unbiasedly categorize human dynamics.

There are in any given human milieu a -variety- of 
parameters which have actionable behaviors that can 
be categorized beneficial/unbeneficial, preferable/
unpreferable, good/bad, constructive/destructive,
encouraging/discouraging, not-evil/evil.

Any one parameter, or group of parameters can become the
'situational standard bearer' and other parameters fall 
where they may.  We value 'individuality' but some cultures
sacrifice individuals for the security of the collective. 
Different cultures will resist sacrificing until deemed
absolutely necessary. Others have a lesser requirement;
may even proactively sacrifice for strategic motivations.
And the 'positive' motivation is labelled 'altruism' -
sacrifice in the promotion of an alternative (sic-'greater')
benefit.  An 'evil' of one parameter re-cast as a 'good'
for another.

Killers -do- have a rationale and 'logic' they function
under. And it can be 'objectively correct'.  IF  -- if and
only if - the parameters' assumptions/decisions are accepted
as utile, correct, tenable.

Jamie





Re: computer pain

2006-12-20 Thread 1Z



Moral and natural laws.


An investigation of natural laws, and, in parallel, a defence of
ethical objectivism. The objectivity, to at least some extent, of
science will be assumed (the sceptic may differ, but there is no
convincing some people).

At first glance, morality looks as though it should work objectively.
The mere fact that we praise and condemn people's moral behaviour
indicates that we think a common set of rules is applicable to us and
them. To put it another way, if ethics were strongly subjective anyone
could get off the hook by devising a system of personal morality in
which whatever they felt like doing was permissible. It would be hard
to see the difference between such a state of affairs and having no
morality at all. The subtler sort of subjectivist (or relativist) tries
to ameliorate this problem by claiming that moral principles are defined
at the societal level, but similar problems recur -- a society (such as
the Thuggees or Assassins) could declare that murder is OK with them.
These considerations are of course an appeal to how morality seems to
work as a 'language game' and as such do not put ethics on a firm
foundation -- the language game could be groundless. I will argue that
it is not, but first the other side of the argument needs to be put.

It is indisputable that morality varies in practice across communities.
But the contention of ethical objectivism is not that everyone actually
does hold to a single objective system of ethics; it is only that
ethical questions can be resolved objectively in principle. The
existence of an objective solution to any kind of problem is always
compatible with the existence of people who, for whatever reason, do
not subscribe. The roundness of the Earth is no less an objective fact
for the existence of believers in the Flat Earth theory.(It is odd that
the single most popular argument for ethical subjectivism has so little
logical force).

Another objection is that an objective system of ethics must be
accepted by everybody, irrespective of their motivations, and must
therefore be based in self-interest. Again, this gets the nature of
objectivity wrong. The fact that some people cannot see does not make
any empirical evidence less objective, the fact that some people refuse
to employ logic does not make logical argument any less objective. All
claims to objectivity make the background assumption that the people
who will actually employ the objective methodology in question are
willing and able. We will return to this topic toward the end.

Some people insist that anyone who is promoting ethical objectivism and
opposing relativism must be doing so in order to illegitimately promote
their own ethical system as absolute. While this is probably
pragmatically true in many cases, particularly where political and
religious rhetoric is involved, it has no real logical force, because
the contention of ethical objectivism is only that ethical questions
are objectively resolvable in principle -- it does not entail a claim
that the speaker or anyone else is actually in possession of them. This
marks the first of our analogues with science, since the in-principle
objectivity of science coincides with the fact that current scientific
thinking is almost certainly not final or absolute. Ethical objectivism
is thus a middle road between subjectivism/relativism on the one hand,
and various absolutisms (such as religious fundamentalism) on the
other.

The final objection, and by far the most philosophically respectable
one, is the objection that moral rules need to correspond to some
kind of 'queer fact' or 'moral object' which cannot be found.

Natural laws do not correspond in a simplistic one-to-one way with any
empirically detectable object, yet empiricism is relevant to both
supporting and disconfirming natural laws. With this in mind, we should
not rush to reject the objective reality of moral laws on the basis
that there is no 'queer' object for them to stand in one-to-one
correspondence with.

There is, therefore, a semi-detached relationship between natural laws
and facts -- laws are not facts but are not unrelated to facts -- facts
confirm and disconfirm them. There is also a famous dichotomy between
fact and value (where 'value' covers ethics, morality etc). You cannot,
we are told, derive an 'ought' from an 'is'. This is the fact/value
problem.

But, as Hume's argument reminds us, you cannot derive a law from an
isolated observation. Call this the fact/law problem. Now, if
morality is essentially a matter of ethical rules or laws, might not
the fact/value problem and the law/value problem be at least partly the
same ?

(Note that there seems to be a middle ground here; the English 'should'
can indicate lawfulness without implying either inevitability, like a
natural law, or morality. E.g. you should move the bishop diagonally in
chess -- but that does not mean you will, or that it is unethical to do
otherwise. It is just against the rules of chess).

RE: computer pain

2006-12-20 Thread Jef Allbright


peterdjones wrote:


Moral and natural laws.


An investigation of natural laws, and, in parallel, a defence 
of ethical objectivism. The objectivity, to at least some 
extent, of science will be assumed (the sceptic may differ, 
but there is no convincing some people).


snip

As ethical objectivism is a work-in-progress 
there are many variants, and a considerable literature 
discussing which is the correct one.


I agree with the thrust of this post and I think there are a few key
concepts which can further clarify thinking on this subject:

(1) Although moral assessment is inherently subjective--being relative
to internal values--all rational agents share some values in common due
to sharing a common evolutionary heritage or even more fundamentally,
being subject to the same physical laws of the universe.

(2) From the point of view of any subjective agent, what is good is
what is assessed to promote the agent's values into the future.

(3) From the point of view of any subjective agent, what is better is
what is assessed as good over increasing scope.

(4) From the point of view of any subjective agent, what is increasingly
right or moral, is decision-making assessed as promoting increasingly
shared values over increasing scope of agents and interactions.

From the foregoing it can be seen that while there can be no objective
morality, nor any absolute morality, it is reasonable to expect
increasing agreement on the relative morality of actions within an
expanding context.  Further, similar to the entropic arrow of time, we
can conceive of an arrow of morality corresponding to the ratcheting
forward of an increasingly broad context of shared values (survivors of
coevolutionary competition) promoted via awareness of increasingly
effective principles of interaction (scientific knowledge of what works,
extracted from regularities in the environment.)

Further, from this theory of metaethics we can derive a practical system
of social decision-making based on (1) increasing fine-grained knowledge
of shared values, and (2) application of increasingly effective
principles, selected with regard to models of probable outcomes in a
Rawlsian mode of broad rather than narrow self-interest.

I apologize for the extremely terse and sparse nature of this outline,
but I wanted to contribute these keystones despite lacking the time to
provide expanded background, examples, justifications, or
clarifications.  I hope that these seeds of thought may contribute to a
flourishing garden both on and offlist.

- Jef



Re: computer pain

2006-12-20 Thread Brent Meeker


Jef Allbright wrote:


peterdjones wrote:


Moral and natural laws.


An investigation of natural laws, and, in parallel, a defence of 
ethical objectivism. The objectivity, to at least some extent, of 
science will be assumed (the sceptic may differ, but there is no 
convincing some people).


snip

As ethical objectivism is a work-in-progress there are many variants, 
and a considerable literature discussing which is the correct one.


I agree with the thrust of this post and I think there are a few key
concepts which can further clarify thinking on this subject:

(1) Although moral assessment is inherently subjective--being relative
to internal values--all rational agents share some values in common due
to sharing a common evolutionary heritage or even more fundamentally,
being subject to the same physical laws of the universe.

(2) From the point of view of any subjective agent, what is good is
what is assessed to promote the agent's values into the future.

(3) From the point of view of any subjective agent, what is better is
what is assessed as good over increasing scope.

(4) From the point of view of any subjective agent, what is increasingly
right or moral, is decision-making assessed as promoting increasingly
shared values over increasing scope of agents and interactions.


From the foregoing it can be seen that while there can be no objective
morality, nor any absolute morality, it is reasonable to expect
increasing agreement on the relative morality of actions within an
expanding context.  Further, similar to the entropic arrow of time, we
can conceive of an arrow of morality corresponding to the ratcheting
forward of an increasingly broad context of shared values (survivors of
coevolutionary competition) promoted via awareness of increasingly
effective principles of interaction (scientific knowledge of what works,
extracted from regularities in the environment.)

Further, from this theory of metaethics we can derive a practical system
of social decision-making based on (1) increasing fine-grained knowledge
of shared values, and (2) application of increasingly effective
principles, selected with regard to models of probable outcomes in a
Rawlsian mode of broad rather than narrow self-interest.

I apologize for the extremely terse and sparse nature of this outline,
but I wanted to contribute these keystones despite lacking the time to
provide expanded background, examples, justifications, or
clarifications.  I hope that these seeds of thought may contribute to a
flourishing garden both on and offlist.

- Jef


Well said!  I agree almost completely - I'm a little uncertain about (3) and (4) above 
and the meaning of scope.  Together with the qualifications of Peter Jones 
regarding the lack of universal agreement on even the best supported theories of science, 
you have provided a good outline of the development of ethics in a way parallel with the 
scientific development of knowledge.

There's a good paper on the relation between facts and values by Oliver Curry which 
bears on many of the above points:

http://human-nature.com/ep/downloads/ep04234247.pdf

Brent Meeker





RE: computer pain

2006-12-19 Thread Stathis Papaioannou


Bruno Marchal writes (quoting Brent Meeker):

  Bruno:
  Because ethics and aesthetics modalities are of a higher order than
  arithmetic which can be considered as deeper and/or simpler.
  Classical arithmetical truth obeys classical logic which is the most
  efficient for describing platonia. Good and bad is related with the
  infinite self mirroring of an infinity of universal machines: it is
  infinitely more tricky, and in particular neither classical ethics nor
  aesthetics should be expected to follow classical logic.
 
  That seems unnecessarily complicated.  Good and bad at the personal 
  Whahooh! and Ouch! are easily explained as consequences of 
  evolution and natural selection.
 
 
 
 Here is perhaps a deep disagreement (which could explain others). I can 
 understand that the 3-personal OUCH can easily be explained as a 
 consequence of evolution and natural selection, for example by saying 
 that the OUCH uttered by an animal could attract the attention of its 
 fellows to the presence of a danger, so natural selection can 
 But, and here is the crux of the mind body problem, if such an 
 explanation explains completely the non-personal Whahooh/Ouch then it 
 does not explain at all the first personal OUCH. Worse: it makes such 
 a personal feeling completely useless ... And then it makes the very 
 notion of Good and Bad pure nonsense.
 Of course platonists, who have grasped the complete reversal (like the 
 neoplatonist Plotinus, etc.), have no problem here given that natural 
 evolution occurs logically well after the platonist true/false, 
 Good/bad, etc. distinction. (The personal feeling related to ouch is 
 logically prior too.)

Evolution explains why we have good and bad, but it doesn't explain why 
good and bad feel as they do, or why we *should* care about good and 
bad beyond following some imperative of evolution. For example, the Nazis 
argued that eliminating inferior specimens from the gene pool would ultimately 
produce a superior species. Aside from their irrational inclusion of certain 
groups as inferior, they were right: we could breed superior humans following 
Nazi eugenic programs, and perhaps on other worlds evolution has made such 
programs a natural part of life, regarded by everyone as good. Yet most of 
us would regard them as bad, regardless of their practical benefits.

Stathis Papaioannou




Re: computer pain

2006-12-19 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Bruno Marchal writes (quoting Brent Meeker):
 
 Bruno:
 Because ethics and aesthetics modalities are of a higher order than
 arithmetic which can be considered as deeper and/or simpler.
 Classical arithmetical truth obeys classical logic which is the most
 efficient for describing platonia. Good and bad is related with the
 infinite self mirroring of an infinity of universal machines: it is
 infinitely more tricky, and in particular neither classical ethics nor
 aesthetics should be expected to follow classical logic.
 That seems unnecessarily complicated.  Good and bad at the personal 
 Whahooh! and Ouch! are easily explained as consequences of 
 evolution and natural selection.


 Here is perhaps a deep disagreement (which could explain others). I can 
 understand that the 3-personal OUCH can easily be explained as a 
 consequence of evolution and natural selection, for example by saying 
 that the OUCH uttered by an animal could attract the attention of its 
 fellows to the presence of a danger, so natural selection can 
 But, and here is the crux of the mind body problem, if such an 
 explanation explains completely the non-personal Whahooh/Ouch then it 
 does not explain at all the first personal OUCH. Worse: it makes such 
 a personal feeling completely useless ... And then it makes the very 
 notion of Good and Bad pure nonsense.
 Of course platonists, who have grasped the complete reversal (like the 
 neoplatonist Plotinus, etc.), have no problem here given that natural 
 evolution occurs logically well after the platonist true/false, 
 Good/bad, etc. distinction. (The personal feeling related to ouch is 
 logically prior too.)
 
 Evolution explains why we have good and bad, but it doesn't explain why 
 good and bad feel as they do, or why we *should* care about good and 
 bad 

That's asking why we should care about what we should care about, i.e. good and 
bad.  Good feels as it does because it is (or was) evolutionarily advantageous 
to do that, e.g. have sex.  Bad feels as it does because it is (or was) 
evolutionarily advantageous to not do that, e.g. hold your hand in the fire.  
If it felt good you'd do it, because that's what feels good means, a feeling 
you want to have.

beyond following some imperative of evolution. For example, the Nazis 
 argued that eliminating inferior specimens from the gene pool would 
 ultimately 
 produce a superior species. Aside from their irrational inclusion of certain 
 groups as inferior, they were right: we could breed superior humans following 
 Nazi eugenic programs, and perhaps on other worlds evolution has made such 
 programs a natural part of life, regarded by everyone as good. Yet most of 
 us would regard them as bad, regardless of their practical benefits.

Would we?  Before the Nazis gave it a bad name, eugenics was a popular movement 
in the U.S. mostly directed at sterilizing mentally retarded people.  I think 
it would be regarded as bad simply because we don't trust government power to 
be exercised prudently or to be easily limited  - both practical 
considerations.  If eugenics is practiced voluntarily, as it is being practiced 
in the U.S., I don't think anyone will object (well a few fundamentalist 
luddites will).

Brent Meeker

 
 Stathis Papaioannou





Re: computer pain

2006-12-19 Thread Bruno Marchal

On 18 Dec 2006, at 20:10, Brent Meeker wrote:

 It seems to me that consciousness can exist without narrative, and
 without long term memory.
  The question of whether the amoeba forms memories could depend on the time
  scale. After all, amoebas are pluri-molecular mechanisms exchanging
  information (through viruses?) in some way. I would not bet on the
  unconsciousness of amoebas on a large time scale.

 Then you have adopted some new meaning of consciousness.  If you  
 stretch consciousness to fit every exchange or storage of  
 information then everything in the universe is conscious and we will  
 need to invent a new word to distinguish conscious people from  
 unconscious ones.


I was using the word consciousness in the usual informal sense. I was  
not saying that any information exchange/storage is conscious. I was  
saying that I would not bet that some highly complex exchange/storage  
of information, in some context where self-referential correctness is  
at play (like evolution and self-adaptation) is not conscious. I was  
saying I am open to the idea that some process around us could have a  
consciousness about which we have no idea because it operates on a  
different scale than our own. I was not saying that amoebas are  
conscious, but that it would be too quick to say for sure that many  
communicating amoebas over millennia are not. I was just doubting  
aloud.

More formally, I think that consciousness is just the interrogative  
belief in a reality. But it is an *instinctive* belief. The  
interrogative aspect, the interrogation mark, has a tendency to be  
buried. We are blasé, especially after childhood.

Much more formally. By Godel's COMPLeteness theorem, a (first order)  
theory is consistent iff the theory has a model, that is, iff there is a  
mathematical structure capable of satisfying the theorems of the  
theory. Like (N, +, *, 0, succ) satisfies the Peano axioms and theorems.
So, extensionally, to say "I am consistent" is equivalent (from  
outside) with "there is a reality" (respecting my beliefs/theorems). By  
Godel's INCOMPLeteness, if such a reality exists (for me) then I cannot  
prove it exists (that would be a proof of my consistency), so I can  
only hope for such a reality. But that hope is so important for life (by  
relatively accelerating my decision-making ability) that nature has  
buried the interrogation mark of that hope, so that old animals like us  
take reality for granted until Plato reminds us it cannot be (and creates  
science by the same token). So consciousness is Dt?. In arithmetic it  
is the interrogative *inference* of Consistent(godel-number of 0 =  
0).
Once the machine infers Dt, she can either keep it as an inference about  
itself, or she can take it as a new belief, but then it is a new (and  
provably more efficient) machine(*) for which a new B and D, still  
obeying G and G*, can be (re)applied.
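
To make the background explicit, here is a minimal sketch in the same B/D notation (B the provability box of the modal logic G, i.e. Lob's logic GL; D its dual; t a tautology, f falsity). The theory T is only an assumed stand-in for the machine's beliefs, say any consistent, arithmetically sound extension of Peano Arithmetic:

\[
  Dp \equiv \lnot B \lnot p, \qquad
  Dt \equiv \lnot Bf \equiv \mathrm{Con}(T)
  \quad \text{(reading } B \text{ as provability in } T\text{)}
\]
\[
  \text{Completeness: } \mathrm{Con}(T) \iff T \text{ has a model}
  \qquad
  \text{2nd incompleteness: } \mathrm{Con}(T) \implies T \nvdash \mathrm{Con}(T)
\]
\[
  \vdash_{G}\; Dt \rightarrow \lnot B\,Dt
  \quad \text{(from Lob's axiom } B(Bp \rightarrow p) \rightarrow Bp \text{ with } p = f\text{)}
\]

So a consistent machine can correctly infer Dt about itself but can never prove it; it can only hope, which is the interrogative Dt? above.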

Bruno

(*) See Godel's paper on the length of proofs in Martin Davis' The  
Undecidable, or Yuri Manin's book on Mathematical Logic, which gives a  
clear proof of Godel's result on the length of proofs (shortened when  
adding undecidable sentences). See the book by Torkel Franzen, which is  
quite a good introduction to Godel's incompleteness theorem (perhaps more  
readable than many other books at that level).

Inexhaustibility: A Non-Exhaustive Treatment, Lecture Notes in Logic 16  
(Lecture Notes in Logic, 16) (Paperback)
http://www.amazon.com/Inexhaustibility-Non-Exhaustive-Treatment-Lecture-Notes/dp/1568811756

http://iridia.ulb.ac.be/~marchal/




RE: computer pain

2006-12-19 Thread Stathis Papaioannou





Brent meeker writes:

 Evolution explains why we have good and bad, but it doesn't explain why 
 good and bad feel as they do, or why we *should* care about good and 
 bad 


That's asking why we should care about what we should care about, i.e. good and bad.  
Good feels as it does because it is (or was) evolutionarily advantageous to do that, e.g. 
have sex.  Bad feels as it does because it is (or was) evolutionarily advantageous to not 
do that, e.g. hold your hand in the fire.  If it felt good you'd do it, because that's 
what feels good means, a feeling you want to have.


But it is not an absurd question to ask whether something we have evolved to think 
is good really is good. You are focussing on the descriptive aspect of ethics and 
ignoring the normative. Even if it could be shown that a certain ethical belief has been 
hardwired into our brains this does not make the question of whether the belief is one 
we ought to have an absurd one. We could decide that evolution sucks and we have 
to deliberately flout it in every way we can. It might not be a wise policy but it is not 
*wrong* in the way it would be wrong to claim that God made the world 6000 years 
ago.


beyond following some imperative of evolution. For example, the Nazis 
 argued that eliminating inferior specimens from the gene pool would ultimately 
 produce a superior species. Aside from their irrational inclusion of certain 
 groups as inferior, they were right: we could breed superior humans following 
 Nazi eugenic programs, and perhaps on other worlds evolution has made such 
 programs a natural part of life, regarded by everyone as good. Yet most of 
 us would regard them as bad, regardless of their practical benefits.


Would we?  Before the Nazis gave it a bad name, eugenics was a popular movement 
in the U.S. mostly directed at sterilizing mentally retarded people.  I think 
it would be regarded as bad simply because we don't trust government power to 
be exercised prudently or to be easily limited  - both practical 
considerations.  If eugenics is practiced voluntarily, as it is being practiced 
in the U.S., I don't think anyone will object (well a few fundamentalist 
luddites will).


What about if we tested every child and allowed only the superior ones to reproduce? 
The point is, many people would just say this is wrong, regardless of the potential benefits 
to society or the species, and the response to this is not that it is absurd to hold it as wrong 
(leaving aside emotional rhetoric).


Stathis Papaioannou



Re: computer pain

2006-12-19 Thread Brent Meeker


Stathis Papaioannou wrote:





Brent meeker writes:

 Evolution explains why we have good and bad, but it doesn't explain 
why  good and bad feel as they do, or why we *should* care about good 
and  bad
That's asking why we should care about what we should care about, i.e. 
good and bad.  Good feels as it does because it is (or was) 
evolutionarily advantageous to do that, e.g. have sex.  Bad feels as 
it does because it is (or was) evolutionarily advantageous to not do 
that, e.g. hold your hand in the fire.  If it felt good you'd do it, 
because that's what feels good means, a feeling you want to have.


But it is not an absurd question to ask whether something we have 
evolved to think is good really is good. You are focussing on the 
descriptive aspect of ethics and ignoring the normative. 


Right - because I don't think there is a normative aspect in the objective 
sense.

Even if it 
could be shown that a certain ethical belief has been hardwired into our 
brains this does not make the question of whether the belief is one we 
ought to have an absurd one. We could decide that evolution sucks and we 
have to deliberately flout it in every way we can. 


But we could only decide that by showing a conflict with something else we 
consider good.

It might not be a 
wise policy but it is not *wrong* in the way it would be wrong to claim 
that God made the world 6000 years ago.


I agree, because I think there is an objective sense in which the world is more 
than 6000yrs old.

beyond following some imperative of evolution. For example, the Nazis 
 argued that eliminating inferior specimens from the gene pool would 
ultimately  produce a superior species. Aside from their irrational 
inclusion of certain  groups as inferior, they were right: we could 
breed superior humans following  Nazi eugenic programs, and perhaps 
on other worlds evolution has made such  programs a natural part of 
life, regarded by everyone as good. Yet most of  us would regard 
them as bad, regardless of their practical benefits.


Would we?  Before the Nazis gave it a bad name, eugenics was a popular 
movement in the U.S. mostly directed at sterilizing mentally retarded 
people.  I think it would be regarded as bad simply because we don't 
trust government power to be exercised prudently or to be easily 
limited  - both practical considerations.  If eugenics is practiced 
voluntarily, as it is being practiced in the U.S., I don't think 
anyone will object (well a few fundamentalist luddites will).


What about if we tested every child and allowed only the superior ones 
to reproduce? The point is, many people would just say this is wrong, 
regardless of the potential benefits to society or the species, and the 
response to this is not that it is absurd to hold it as wrong (leaving 
aside emotional rhetoric).


But people wouldn't *just* say this is wrong. This example is a question of societal policy. It's about what *we* will impose on *them*.  It is a question of ethics, not good and bad.  So in fact people would give reasons it was wrong: Who's gonna say what superior means?  Who gets to decide?   They might say, I just think it's bad. - but that would just be an implicit appeal to you to see whether you thought it was bad too.  Social policy can only be judged in terms of what the individual members of society think is good or bad. 


I think I'm losing the thread of what we're discussing here.  Are you holding 
that there are absolute norms of good/bad - as in your example of eugenics?

Brent Meeker





RE: computer pain

2006-12-18 Thread Stathis Papaioannou


Colin Hales writes:

  You have described a way in which our perception may be more than can
  be explained by the sense data. However, how does this explain the
  response
  to novelty? I can come up with a plan or theory to deal with a novel
  situation
  if it is simply described to me. I don't have to actually perceive
  anything. Writers,
  philosophers, mathematicians can all be creative without perceiving
  anything.
 
  Stathis Papaioannou
 
 
 Imaginative processes also use phenomenal consciousness. To have it
 described to you, you had to use phenomenal consciousness. Once you dispose
 of PC you are model-bound in all ways. You have to have a model to
 generate the novelty! PC pervades the whole process at all levels. Look
 what happens to Marvin. Even if he had someone tell him there was an
 outside world he'd never know what the data was telling him.

I agree that phenomenal consciousness is no less essential for imaginative 
processes than it is for direct environmental interaction. However, you have 
proposed a mechanism whereby the connection between the brain and the 
object of its perception cannot be modelled because it involves non-local 
effects. 
If that is so, then having something described to you or thinking it up de novo 
bypasses this mechanism: it's just the cogs in your brain turning, eventually 
producing efferent signals which move your vocal cords or your hands. How 
does the brain working on its own escape those who would make a computer model?

Stathis Papaioannou




Re: computer pain

2006-12-18 Thread 1Z


Colin Geoffrey Hales wrote:
 
  Colin,
 
  You have described a way in which our perception may be more than can
  be explained by the sense data. However, how does this explain the
  response
  to novelty? I can come up with a plan or theory to deal with a novel
  situation
  if it is simply described to me. I don't have to actually perceive
  anything. Writers,
  philosophers, mathematicians can all be creative without perceiving
  anything.
 
  Stathis Papaioannou
 

 Imaginative processes also use phenomenal consciousness. To have it
 described to you, you had to use phenomenal consciousness.

Cutting-edge physics is creative to a fault, and
quite hard to literally imag-ine as well.

Once you dispose
 of PC you are model bound in all ways. You have to have a model to
 generate the novelty! PC pervades the whole process at all levels. Look
 what happens to Marvin. Even if he had someone tell him there was an
 outside world he'd never know what the data was telling him.

He can make a good guess.





Re: computer pain

2006-12-18 Thread Bruno Marchal


On 17 Dec 2006, at 21:11, Brent Meeker wrote:

 If consciousness is the creation of an inner narrative to be stored in 
 long-term memory then there are levels of consciousness.  The amoeba 
 forms no memories and so is not conscious at all. A dog forms memories 
 and even has some understanding of symbols (gestures, words) and so is 
 conscious.  In between there are various degrees of consciousness 
 corresponding to different complexity and scope of learning.



Miscellaneous remarks:

It seems to me that consciousness can exist without narrative, and 
without long term memory.
The question of whether the amoeba forms memories could depend on the time 
scale. After all, amoebas are pluri-molecular mechanisms exchanging 
information (through viruses?) in some way. I would not bet on the 
unconsciousness of amoebas on a large time scale.



 Bruno:
 Because ethics and aesthetics modalities are of a higher order than
 arithmetic which can be considered as deeper and/or simpler.
 Classical arithmetical truth obeys classical logic which is the most
 efficient for describing platonia. Good and bad is related with the
 infinite self mirroring of an infinity of universal machines: it is
 infinitely more tricky, and in particular neither classical ethics nor
 aesthetics should be expected to follow classical logic.

 That seems unnecessarily complicated.  Good and bad at the personal 
 Whahooh! and Ouch! are easily explained as consequences of 
 evolution and natural selection.



Here is perhaps a deep disagreement (which could explain others). I can 
understand that the 3-personal OUCH can easily be explained as a 
consequence of evolution and natural selection, for example by saying 
that the OUCH uttered by an animal could attract the attention of its 
fellows to the presence of a danger, so natural selection can 
But, and here is the crux of the mind body problem, if such an 
explanation explains completely the non-personal Whahooh/Ouch then it 
does not explain at all the first personal OUCH. Worse: it makes such 
a personal feeling completely useless ... And then it makes the very 
notion of Good and Bad pure nonsense.
Of course platonists, who have grasped the complete reversal (like the 
neoplatonist Plotinus, etc.), have no problem here given that natural 
evolution occurs logically well after the platonist true/false, 
Good/bad, etc. distinction. (The personal feeling related to ouch is 
logically prior too.)


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: computer pain

2006-12-18 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 17 Dec 2006, at 21:11, Brent Meeker wrote:
 
 If consciousness is the creation of an inner narrative to be stored in 
 long-term memory then there are levels of consciousness.  The amoeba 
 forms no memories and so is not conscious at all. A dog forms memories 
 and even has some understanding of symbols (gestures, words) and so is 
 conscious.  In between there are various degrees of consciousness 
 corresponding to different complexity and scope of learning.
 
 
 
 Miscellaneous remarks:
 
 It seems to me that consciousness can exist without narrative, and 
 without long term memory.
 The question of whether the amoeba forms memories could depend on the time 
 scale. After all, amoebas are pluri-molecular mechanisms exchanging 
 information (through viruses?) in some way. I would not bet on the 
 unconsciousness of amoebas on a large time scale.

Then you have adopted some new meaning of consciousness.  If you stretch 
consciousness to fit every exchange or storage of information then everything 
in the universe is conscious and we will need to invent a new word to 
distinguish conscious people from unconscious ones.

  Bruno:
 Because ethics and aesthetics modalities are of a higher order than
 arithmetic which can be considered as deeper and/or simpler.
 Classical arithmetical truth obeys classical logic which is the most
 efficient for describing platonia. Good and bad is related with the
 infinite self mirroring of an infinity of universal machines: it is
 infinitely more tricky, and in particular neither classical ethics nor
 aesthetics should be expected to follow classical logic.
 That seems unnecessarily complicated.  Good and bad at the personal 
 "Whahooh!" and "Ouch!" level are easily explained as consequences of 
 evolution and natural selection.
 
 
 
 Here is perhaps a deep disagreement (which could explain others). I can 
 understand that the 3-personal OUCH can easily be explained as a 
 consequence of evolution and natural selection, for example by saying 
 that the OUCH uttered by an animal could attract the attention of its 
 fellows to the presence of a danger, so natural selection can favour it. 
 But, and here is the crux of the mind body problem, if such an 
 explanation explains completely the non-personal Whahooh/Ouch, then it 
 does not explain at all the first-personal OUCH. Worse: it makes such 
 a personal feeling completely useless ... And then it makes the very 
 notion of Good and Bad pure nonsense.

I took your use of the words ouch and whahooh as referring equally to one's 
inner feelings, and as metaphors for the feelings of inarticulate beings, not 
only as literal expositions.

Brent Meeker






RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Brent Meeker writes:

[Colin]
  So I guess my proclamations about models are all contingent on my own
  view of things...and I could be wrong. Only time will tell. I have good
  physical grounds to doubt that modelling can work and I have a way of
  testing it. So at least it can be resolved some day.
 
[Stathis] 
  I'm not sure of the details of your experiments, but wouldn't the most 
  direct 
  way to prove what you are saying be to isolate just that physical process 
  which cannot be modelled? For example, if it is EM fields, set up an 
  appropriately 
  brain-like configuration of EM fields, introduce some environmental input, 
  then 
  show that the response of the fields deviates from what Maxwell's equations 
  would predict. 

 I don't think Colin is claiming the fields deviate from Maxwell's equations - 
 he says they are good descriptions, they just miss the qualia.
 
 Seems to me it would be a lot simpler to set up some EM fields of various 
 spatial and frequency variation and see if they change your qualia.
 
 Brent Meeker

I'll let Colin answer, but it seems to me he must say that some aspect of brain 
physics deviates from what the equations tell us (and deviates in an 
unpredictable 
way, otherwise it would just mean that different equations are required) to be 
consistent. If not, then it should be possible to model the behaviour of a 
brain: 
predict what the brain is going to do in a particular situation, including 
novel situations 
such as those involving scientific research. Now, it is possible that the model 
will 
reproduce the behaviour but not the qualia, because the actual brain material 
is 
required for that, but that would mean that the model will be a philosophical 
zombie, 
and Colin has said that he does not believe that philosophical zombies can 
exist. 
Hence, he has to show not only that the computer model will lack the 1st person 
experience, but also lack the 3rd person observable behaviour of the real 
thing; 
and the latter can only be the case if there is some aspect of brain physics 
which 
does not comply with any possible mathematical model. 

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d




Re: computer pain

2006-12-17 Thread 1Z


Colin Geoffrey Hales wrote:
 
  I understand your conclusion, that a model of a brain
  won't be able to handle novelty like a real brain,
  but I am trying to understand the nuts and
  bolts of how the model is going to fail. For
  example, you can say that perpetual motion
  machines are impossible because they disobey
  the first or second law of thermodynamics,
  but you can also look at a particular design of such a
  machine  and point out where the moving parts are going
  to slow down due to friction.
 
  So, you have the brain and the model of the brain,
  and you present them both with the same novel situation,
  say an auditory stimulus. They both process the
  stimulus and produce a response in the form of efferent
  impulses which move the vocal cords and produce speech;
  but the brain says something clever while  the computer
  declares that it is lost for words. The obvious explanation
  is that the computer model is not good enough, and maybe
  a better model would  perform better, but I think you would
  say that *no* model, no matter how good, could match the brain.
 
  Now, we agree that the brain contains matter which
  follows the laws of physics.
  Before the novel stimulus is applied the brain
  is in configuration x. The stimulus essentially adds
  energy to the brain in a very specific way, and as a
  result of this the brain undergoes a very complex sequence
  of physical changes, ending up in
  configuration y, in the process outputting energy
  in a very specific way which causes the vocal cords to move.
  The important point is, in the transformations
  x-y the various parts of the brain are just working
  like parts of an elaborate Rube Goldberg mechanism.
  There can be no surprises, because that would be
  magic: two positively charged entities suddenly
  start attracting each other, or
  the hammer hits the pendulum and no momentum
  is transferred. If there is magic -
  actually worse than that, unpredictable magic -
  then it won't be possible to model
  the brain or the Rube Goldberg machine. But, barring magic,
  it should be possible to predict the physical state
  transitions x-y and hence you will know
  what the motor output to the vocal cords will be and
  what the vocal response to the
  novel  stimulus will be.
 
  Classical chaos and quantum uncertainty may make it
  difficult or impossible to
  predict what a particular brain will do on a
  particular day, but they should not be  a theoretical
  impediment to modelling a generic brain which behaves in an
  acceptably brain-like manner. Only unpredictable magical
  effects would prevent that.
 
  Stathis Papaioannou

 I get where you're coming from. The problem is, what I am going to say
 will, in your eyes, put the reason into the class of 'magic'. I am quite
 used to it, and don't find it magical at all

 The problem is that the distal objects that are the subject about which
 the brain is informing itself, are literally, physically involved in the
 process.

That is true. It is not clear why it should be a problem.

  You can't model them, because you don't know what they are.

Why not? If the brain is succeeding in informing
itself about them, then it does know what they
are. What else does informing mean? (And
remember that one can supplement perceptual
information with information from instruments, etc.)

  All
 you have is sensory measurements and they are local and
 ambiguous...


They are not hopelessly local, because different sensory
feeds can be and are combined, and they are not seriously
ambiguous, because, although illusions and ambiguities
can occur, the sensory system usually succeeds in making a best
guess.

 .that's why you are doing the 'qualia dance' with EM fields -
 to 'cohere' with the external world. This non-locality is the same
 non-locality observed in QM and makes gravity 'action at a distance'
 possible. .

That is wildly speculative.

  I've been thinking about this for so long I actually have
 the reverse problem now...I find 'locality' really weird! I find 'extent'
 really hard to fathom. The non-locality is also predicted as the solution
 to the 'unity' issue.


 The empirical testing to verify this non-locality is the real target of my
 eventual experimentation. My model and the real chips will behave
 differently, it is predicted, because of the involvement of the 'external
 world' that is not available to the model.

 I hope to be able to 'switch off' the qualia whilst holding everything else
 the same. The effects on subsequent learning will be indicative of the
 involvement of the qualia in learning. What the external world 'looks
 like' in the brain is 'virtual circuits' - average EM channels (regions of
 low potential that are like a temporary 'wire') down which chemistry can
 flow to alter synaptic weights and rearrange channel positions/rafting in
 the membrane and so on.

 So I guess my proclamations about models are all contingent on my own
 view of things...and I could be wrong. Only time will tell. I have good
 physical grounds to doubt that modelling can work and I have a way of
 testing it. So at least it can be resolved some day.

Re: computer pain

2006-12-17 Thread 1Z


Brent Meeker wrote:
 Stathis Papaioannou wrote:
 
  Colin Hales writes:
 
  I understand your conclusion, that a model of a brain
  won't be able to handle novelty like a real brain,
  but I am trying to understand the nuts and
  bolts of how the model is going to fail. For
  example, you can say that perpetual motion
  machines are impossible because they disobey
  the first or second law of thermodynamics,
  but you can also look at a particular design of such a
  machine  and point out where the moving parts are going
  to slow down due to friction.
 
  So, you have the brain and the model of the brain,
  and you present them both with the same novel situation,
  say an auditory stimulus. They both process the
  stimulus and produce a response in the form of efferent
  impulses which move the vocal cords and produce speech;
  but the brain says something clever while  the computer
  declares that it is lost for words. The obvious explanation
  is that the computer model is not good enough, and maybe
  a better model would  perform better, but I think you would
  say that *no* model, no matter how good, could match the brain.
 
  Now, we agree that the brain contains matter which
  follows the laws of physics.
  Before the novel stimulus is applied the brain
  is in configuration x. The stimulus essentially adds
  energy to the brain in a very specific way, and as a
  result of this the brain undergoes a very complex sequence
  of physical changes, ending up in
  configuration y, in the process outputting energy
  in a very specific way which causes the vocal cords to move.
  The important point is, in the transformations
  x-y the various parts of the brain are just working
  like parts of an elaborate Rube Goldberg mechanism.
  There can be no surprises, because that would be
  magic: two positively charged entities suddenly
  start attracting each other, or
  the hammer hits the pendulum and no momentum
  is transferred. If there is magic -
  actually worse than that, unpredictable magic -
  then it won't be possible to model
  the brain or the Rube Goldberg machine. But, barring magic,
  it should be possible to predict the physical state
  transitions x-y and hence you will know
  what the motor output to the vocal cords will be and
  what the vocal response to the
  novel  stimulus will be.
 
  Classical chaos and quantum uncertainty may make it
  difficult or impossible to
  predict what a particular brain will do on a
  particular day, but they should not be  a theoretical
  impediment to modelling a generic brain which behaves in an
  acceptably brain-like manner. Only unpredictable magical
  effects would prevent that.
 
  Stathis Papaioannou
  I get where you're coming from. The problem is, what I am going to say
  will, in your eyes, put the reason into the class of 'magic'. I am quite
  used to it, and don't find it magical at all
 
  The problem is that the distal objects that are the subject about which
  the brain is informing itself, are literally, physically involved in the
  process. You can't model them, because you don't know what they are. All
  you have is sensory measurements and they are local and
   ambiguous...that's why you are doing the 'qualia dance' with EM fields -
   to 'cohere' with the external world. This non-locality is the same
   non-locality observed in QM and makes gravity 'action at a distance'
   possible... I've been thinking about this for so long I actually have
  the reverse problem now...I find 'locality' really weird! I find 'extent'
  really hard to fathom. The non-locality is also predicted as the solution
  to the 'unity' issue.
 
  The empirical testing to verify this non-locality is the real target of my
  eventual experimentation. My model and the real chips will behave
  differently, it is predicted, because of the involvement of the 'external
  world' that is not available to the model.
 
  I hope to be able to 'switch off' the qualia whilst holding everything else
  the same. The effects on subsequent learning will be indicative of the
  involvement of the qualia in learning. What the external world 'looks
  like' in the brain is 'virtual circuits' - average EM channels (regions of
  low potential that are like a temporary 'wire') down which chemistry can
  flow to alter synaptic weights and rearrange channel positions/rafting in
  the membrane and so on.
 
  So I guess my proclamations about models are all contingent on my own
  view of things...and I could be wrong. Only time will tell. I have good
  physical grounds to doubt that modelling can work and I have a way of
  testing it. So at least it can be resolved some day.
 
  I'm not sure of the details of your experiments, but wouldn't the most 
  direct
  way to prove what you are saying be to isolate just that physical process
  which cannot be modelled? For example, if it is EM fields, set up an 
  appropriately
  brain-like configuration of EM fields, introduce some environmental 

Re: computer pain

2006-12-17 Thread Mark Peaty

Well this is fascinating! I tend to think that Brent's 'simplistic' 
approach of setting up oscillating EM fields of specific frequencies at 
specific locations is more likely to be good evidence of EM involvement 
in qualia, because the victim, I mean experimental subject, can relate 
what is happening. Do it to enough naive subjects and, if their accounts 
of the changes wrought in their experience agree with your predictions, 
you will have provisional verification. Just make sure you have a 
falsifiable prediction first.

On the other hand Colin's project seems out of reach to me. This is 
probably because I don't really understand it. I do not, for example, 
understand how Colin seems to think that we can dispense with the 
concept of representation. I am however very sceptical of all 'quantum' 
mechanical/entanglement theories of consciousness. As far as I can see 
humans are 'classical' in nature, built out of fundamental particles 
like everything else in the universe of course, but we can live and move 
and have our being BECAUSE each one of us, and the major parts which 
compose us, are all big enough to endure over and above the quantum 
uncertainty. So we don't 'flit in and out of existence' like some people 
say. We wake up, go to sleep, doze off at the wrong time, forget what we 
are doing, live through social/cultural descriptions of the world, dream 
and aspire, and sometimes experience amazing insights which can turn our 
lives around. We survive and endure by doing mostly the tried and true 
things we have learned so well that they are deeply ingrained habits. 
Most of what we do, perceive, and think is so stolidly habitual and 
'built-in' that we are almost completely unaware of it; it is fixtures 
and fittings of the mind if you like. It all works for us, and the whole 
social and cultural milieu of economic and personal transactions, 
accounting, appointments, whatever, can happen so successfully BECAUSE 
so much of what we are and do is solidly habitual and predictable. In my 
simplistic view, consciousness is the registration of discrepancy 
between what the brain has predicted and what actually happened. 
Everything else, the bulk of what constitutes the mind in effect, is the 
ceaseless evoking, selecting, ignoring or suppressing, storing, 
amalgamating or splitting of the dynamic logical structures which 
represent our world, and without which we are just lumps of meat. These 
dynamic logical structures actually EXIST during their evocation. [And 
this is why there is 'something it is like to be ...']

This may seem like a very boring view of things but I think now there is 
an amazing amount of explanation already available concerning human 
experience. I am not saying there is nothing new to discover, far from 
it, just that the continuous denial that most of the pieces of the 
puzzle are already exposed and arranged in the right order is not helpful.

What ought to be clear to everybody is that our awareness of being here, 
of being anything in fact, entails a continuous process of 
self-referencing. It entails a continuous process of locating self in 
one's world. This self-referencing is always inherently partial and 
incomplete, but unless this incompleteness itself is explicitly 
represented, we are not aware of it. We are only ever aware of 
relationships explicitly represented, and being explicitly represented 
entails inclusion of representation of at least some aspects of how 
whatever it is, is, was, will be, or might become, causally connected to 
oneself. When we perceive or imagine things, it is always from a view 
point, listening point, or at a point of contact. The 'location' of 
something or someone is an intrinsic part of its or their identity, and 
the key element of location as such is in relation to oneself or in 
relation to someone who we ourselves identify with; they are extensions 
of ourselves.

I'll leave that there for the moment. I just want to add that I believe 
Colin Hales is right in focussing on the ability of humans to do 
science. I look at that more from the point of view that being able to 
do science, and being able to perceive and understand entropy - even if 
it is only grasping where crumbs and fluff balls come from -  are what 
allow us to know that we are NOT in some kind of computer generated 
matrix. We live in a real, open universe that exists independently of 
each of us but yet is incomplete without us.
 
Regards
Mark Peaty  CDES
[EMAIL PROTECTED]
http://www.arach.net.au/~mpeaty/
 


Brent Meeker wrote:
 Stathis Papaioannou wrote:
   
 Colin Hales writes:

 
 I understand your conclusion, that a model of a brain
 won't be able to handle novelty like a real brain,
 but I am trying to understand the nuts and
 bolts of how the model is going to fail. For
 example, you can say that perpetual motion
 machines are impossible because they disobey
 the first or second law of thermodynamics,
 but you can also look at a 

Re: computer pain

2006-12-17 Thread 1Z


Colin Geoffrey Hales wrote:
 Stathis wrote:
 I can understand that, for example, a computer simulation of a storm is
 not a storm, because only a storm is a storm and will get you wet. But
 perhaps counterintuitively, a model of a brain can be closer to the real
 thing than a model of a storm. We don't normally see inside a person's
 head, we just observe his behaviour. There could be anything in there - a
 brain, a computer, the Wizard of Oz - and as long as it pulled the
 person's strings so that he behaved like any other person, up to and
 including doing scientific research, we would never know the difference.

 Now, we know that living brains can pull the strings to produce normal
 human behaviour (and consciousness in the process, but let's look at the
 external behaviour for now). We also know that brains follow the laws of
 physics: chemistry, Maxwell's equations, and so on. Maybe we don't
 *understand* electrical fields in the sense that it may feel like
 something to be an electrical field, or in some other as yet unspecified
 sense, but we understand them well enough to predict their physical effect
 on matter. Hence, although it would be an enormous task to gather the
 relevant information and crunch the numbers in real time, it should be
 possible to predict the electrical impulses that come out of the skull to
 travel down the spinal cord and cranial nerves and ultimately pull the
 strings that make a person behave like a person. If we can do that, it
 should be possible to place the machinery which does the predicting inside
 the skull interfaced with the periphery so as to take the brain's place,
 and no-one would know the difference because it would behave just like the
 original.

 At which step above have I made a mistake?

 Stathis Papaioannou

 ---
 I'd say it's here...

 and no-one would know the difference because it would behave just like
 the original

 But for a subtle reason.

 The artefact has to be able to cope with exquisite novelty like we do.
 Models cannot do this because as a designer you have been forced to define
 a model that constrains all possible novelty to be that which fits your
 model for _learning_.

If the model has been reverse-engineered from how
the nervous system works (ie, transparent box, not black box), it will
have the learning abilities of NS -- even if we don't know what they
are.

 Therein lies the fundamental flaw. Yes... at a given
 level of knowledge you can define how to learn new things within the
 knowledge framework. But when it comes to something exquisitely novel, all
 that will happen is that it'll be interpreted into the parameters of how
 you told it to learn things... this will impact in a way the artefact
 cannot handle. It will behave differently and probably poorly.

 It's the zombie thing all over again.

 It's not _knowledge_ that matters. it's _learning_ new knowledge. That's
 what functionalism fails to handle. Being grounded in a phenomenal
 representation of the world outside is the only way to handle arbitrary
 levels of novelty.

That remains to be seen.

  No phenomenal representation? = You are model-bound
 and grounded, in effect, in the phenomenal representation of your
 model-builders, who are forced to predefine all novelty handling in an "I
 don't know that" functional module. Something you cannot do without
 knowing everything a priori! If you already know that, you are god, so why
 are you bothering?

So long as you can peek into a system, you can functionally duplicate it
without knowing how it behaves under all circumstances. I can rewrite
the C code

#include <math.h>  /* for sin, cos, pow */

/* reading the original two-argument exp(cos(y), 9.7) as pow(cos(y), 9.7) */
double f(double x, double y)
{
    return 4.2 + sin(x) - pow(cos(y), 9.7);
}

in Pascal, although I couldn't tell you offhand
what the output is for x=0.77 , y=0.33
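
[Illustrative aside, not part of 1Z's post: the value of f at x=0.77, y=0.33 is
nevertheless fully determined by the definition above and can be computed
mechanically; the small self-contained driver below, a sketch of my own with the
same reading of exp as pow, just prints it.]

/* Hypothetical driver, added for illustration only */
#include <stdio.h>
#include <math.h>

static double f(double x, double y)
{
    return 4.2 + sin(x) - pow(cos(y), 9.7);   /* same f as the snippet above */
}

int main(void)
{
    printf("f(0.77, 0.33) = %.6f\n", f(0.77, 0.33));   /* prints the determined value */
    return 0;
}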


 Say you bring an artefact X into existence. X may behave exactly like a
 human Y in all the problem domains you used to define you model. Then you
 expose both to novelty nobody has seen, including you and that is
 where the two will differ. The human Y will do better every time. You
 can't program qualia. You have to have them and you can't do without them
 in a 'general intelligence' context.

 Here I am on a sat morning...proving I have no life, yet again! :-)
 
 Colin Hales





Re: computer pain

2006-12-17 Thread James N Rose

Just to throw a point of perspective into this
conversation about mimicking qualia.

I posed a thematic question in my 1992 opus
Understanding the Integral Universe.

 What of a single celled animus like an amoeba or paramecium?
 Does it 'feel' itself?  Does it sense the subtle variations
 in its shape as it bumps around in its liquid world?  Does it
 somehow note changes in water pressure around it?  Is it
 always hungry?  What drives a single celled creature to eat?
 What need, if any is fulfilled?  Is it due to an internal
 pressure gradient in it's chemical metabolism? Is there a
 resilience to its boundary that not only determines its
 particular shape, whether amoebic or firm, but that variations
 in that boundary re-distribute pressures through its form to
 create a range of responsive actions? And, because it is
 coherent for that life form, is this primal consciousness?
 How far down into the structure of existence can we reasonably
 extrapolate this? An atom's electron cloud responds and interacts
 with its level of environment, but is this consciousness? We
 cannot personify, and therefore mystify, all kinetic functions
 as different degrees of consciousness; at least not at this point.
 Neither, can we specify with any certainty a level where
 consciousness suddenly appears, where there was none before.
 UIU(c)ROSE 1992 ; 02)Intro section.

http://www.ceptualinstitute.com/uiu_plus/UIUcomplete11-99.htm


Pain is a net-collective qualia, an 'other-tier' cybernetic 
emergent phenomenon.  But it is -not unrelated- to phenomena
like basic EM field changes and 'system's experiences' in those
precursive tiers.

Also, pain (an aspect of -consciousness-), has to be understood
in regard to the panorama of 'kinds-of-sentience' that any given 
system/organism has, embodies, utilizes or enacts.  

In other words, it would be wrong to dismiss the presence of
'pain' in autonomic nervous systems, simply because the
cognitive nervous system is 'unaware' of the signals or
the distress situation generating them.

If one wants to 'define' pain sentience as a closed marker,
and build contrived systems that match the defined conditions
and criteria, that is one thing - and acceptable for what it
is.  But if the 'pain' is a coordination of generalized
engagements and reactions, then a different set of 
design standards needs to be considered/met.

Vis a vis  -this- reasoning:

http://www.ceptualinstitute.com/uiu_plus/uiu04start.htm 



Jamie Rose
Ceptual Institute
cognating on a sunday morning
2006/12/17





Re: computer pain

2006-12-17 Thread Brent Meeker

James N Rose wrote:
 Just to throw a point of perspective into this
 conversation about mimicking qualia.
 
 I posed a thematic question in my 1992 opus
 Understanding the Integral Universe.
 
  What of a single celled animus like an amoeba or paramecium?
  Does it 'feel' itself?  Does it sense the subtle variations
  in its shape as it bumps around in its liquid world?  Does it
  somehow note changes in water pressure around it?  Is it
  always hungry?  What drives a single celled creature to eat?
  What need, if any is fulfilled?  Is it due to an internal
  pressure gradient in it's chemical metabolism? Is there a
  resilience to its boundary that not only determines its
  particular shape, whether amoebic or firm, but that variations
  in that boundary re-distribute pressures through its form to
  create a range of responsive actions? And, because it is
  coherent for that life form, is this primal consciousness?
  How far down into the structure of existence can we reasonably
  extrapolate this? An atom's electron cloud responds and interacts
  with its level of environment, but is this consciousness? We
  cannot personify, and therefore mystify, all kinetic functions
  as different degrees of consciousness; at least not at this point.
  Neither, can we specify with any certainty a level where
  consciousness suddenly appears, where there was none before.
  UIU(c)ROSE 1992 ; 02)Intro section.

If consciousness is the creation of an inner narrative to be stored in 
long-term memory then there are levels of consciousness.  The amoeba forms no 
memories and so is not conscious at all. A dog forms memories and even has some 
understanding of symbols (gestures, words) and so is conscious.  In between 
there are various degrees of consciousness corresponding to different 
complexity and scope of learning.

 
 http://www.ceptualinstitute.com/uiu_plus/UIUcomplete11-99.htm
 
 
 Pain is a net-collective qualia, an 'other-tier' cybernetic 
 emerged phenomenon.  But it is -not unrelated- to phenomena
 like basic EM field changes and 'system's experiences' in those
 precursive tiers.
 
 Also, pain (an aspect of -consciousness-), has to be understood
 in regard to the panorama of 'kinds-of-sentience' that any given 
 system/organism has, embodies, utilizes or enacts.  
 
 In other words, it would be wrong to dismiss the presence of
 'pain' in autonomic nervous systems, simply because the
 cognitive nervous system is 'unaware' of the signals or
 the distress situation generating them.

This seems to depend on whether you define pain to be the conscious experience 
of pain, or you allow that the bodily reaction is evidence of pain in some more 
general sense.  I think Stathis posed the question in terms of conscious 
experience.  There's really no doubt that one can create an artificial system 
that reacts to distress; as  in my example of a modern aircraft.

Brent Meeker




RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales


 I'm not sure of the details of your experiments, but wouldn't the most
 direct way to prove what you are saying be to isolate just
 that physical process
 which cannot be modelled? For example, if it is EM fields, set up an
 appropriately
 brain-like configuration of EM fields, introduce some environmental input,
 then
 show that the response of the fields deviates from what Maxwell's
 equations
 would predict.

 Stathis Papaioannou

I don't expect any deviation from Maxwell's equations. There's nothing
wrong with them. It's just that they are merely a very good representation
of a surface behaviour of the perceived universe in a particular context.
Just like QM. But it's only the surface. The universe is not made of EM or
QM or atoms or space. All these things are appearances, and it's what they
are all actually made of that delivers its appearances.

It's a pretty simple idea and it's been around for 300 years (and it's not
a substance dualism!). The paper I'm writing at the moment (nearly
finished) is about how this cultural delusion that the universe is made of
our models pervades the low-level physical science. It's quite stark...the
application of situated cognition to knowledge is quite pervasive. You can
take a vertical slice all the way through the entire epistemological tree
from social sciences down through psychology..cognitive
science..ecology..ethology..anthropology ||
neuroscience...chemistry..physics. The || is the sudden break where
situated cognition matters and where physics, in particular cosmology is
almost pathologically intent on the surgical excision of the scientist
from the universe. Situated cognition applied to metascience at the level
of physics is simply absent.

You can see it in the desperate drive to make sense of QM maths, as if the
universe is made of it...that the only way that any sense can be made of
it is to write complex stories about infinite numbers of universes, all of
which are somehow explanatory of the weirdness of the maths, rather than
deal with what the universe is actually made of...when right in front of
all of them is the perfect way out...start talking about what universes
must be made of in order that it can implement scientists that have
perception...to realise that the maths of empirical laws is just a model
of the stuff, not the stuff.

Cosmologists are the key. They have some sort of mass fantasy going about
the mathematics they use. Totally unfounded assumptions pervade their
craft - far worse than any assumption that the universe is not made of
idealised maths...the thing that gets labeled erroneously 'metaphysics'
and eschewed.

I have done a cartoon representation of a cosmologist made of stuff in a
universe of stuff staring at the cosmos wondering where all the stuff is,
when the fact of being able to stare _at all_ is telling him about the deep
nature of the cosmos. Poor little deluded cosmologist.

There's nothing wrong with Maxwell's equations. In fact there's nothing
wrong with any empirical laws. The problem is us...

cheers,

colin






RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

Stathis said

 I'll let Colin answer, but it seems to me he must say that some aspect of
 brain
 physics deviates from what the equations tell us (and deviates in an
 unpredictable
 way, otherwise it would just mean that different equations are required)
 to be
 consistent. If not, then it should be possible to model the behaviour of a
 brain:
 predict what the brain is going to do in a particular situation, including
 novel situations
 such as those involving scientific research. Now, it is possible that the
 model will
 reproduce the behaviour but not the qualia, because the actual brain
 material is
 required for that, but that would mean that the model will be a
 philosophical zombie,
 and Colin has said that he does not believe that philosophical zombies can
 exist.
 Hence, he has to show not only that the computer model will lack the 1st
 person
 experience, but also lack the 3rd person observable behaviour of the real
 thing;
 and the latter can only be the case if there is some aspect of brain
 physics which
 does not comply with any possible mathematical model.

 Stathis Papaioannou

Exactly right...except for the bit where you talk about 'deviation from
the model'. I expect the EM model to be perfectly right - indeed MUST be
right or I can't do the experiment because the modelling I do will help me
design the chips...it must be right or they won't work. It's just that the
models don't deliver all the result - you have to BE the chips to get
the whole picture.

What is missing from the model, seamlessly and irrevocably and
intrinsically... is that it says nothing about the first person
perspective. You cannot model the first person perspective by definition,
because every first person perspective is different! The 'fact' of the
existence of the first person is the invariant, however.

So...All the models are quite right and accurate, but are inherently
third person descriptions of 'the stuff', not 'the stuff'. When you be
'the stuff' under the right circumstances there's more to the description.
And EVERYTHING gets to 'be'...i.e. is forced, implicitly, to uniquely be
somewhere in the universe and inherits all the properties of that act,
NONE of which is delivered by empirical laws, which are constructed under
conditions designed specifically to throw out that
perspective...and...what's worse...it does it by verifying the laws using
the FIRST PERSON...to do all scientific measurements...not only that, if
you don't do it with the first person (measurement/experimental
observation grounded in the first person of the scientist) you get told
you are not doing science!

How screwed up is that!

My planned experiment makes chips and on those chips will be probably 4
intrinsically intermixed 'scientists', all of whom can share each other's
scientific evidence = first person experiences...whilst they do 'dumb
science' like test a hypothesis H1 = is the thing there?. By fiddling
about with the configuration of the scientists you can create
circumstances where the only way they can agree/disagree is because of the
first person perspectiveand the whole thing will obey Maxwell's
equations perfectly well from the outside. Indeed the 'probes' I will
embed will measure field effects in-situ that are supposed to do what
Maxwell's equations says.

cheers,

colin hales








RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

Stathis said
SNIP
 and Colin has said that he does not believe that philosophical zombies
can exist.
 Hence, he has to show not only that the computer model will lack the 1st
person
 experience, but also lack the 3rd person observable behaviour of the
real thing;
 and the latter can only be the case if there is some aspect of brain
physics which
 does not comply with any possible mathematical model.

 Stathis Papaioannou

I just thought of a better way of explaining 'deviation'.

Maxwell's equations are not 'unique' in the sense that there are an
infinite number of different charge configurations that will produce the
same field configurations around some surface. This is a very old
result...was it Poisson who said it? I can't remember.
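
[Illustrative aside, not part of Colin's original post: one concrete instance of
this non-uniqueness is the classical shell theorem - a point charge at the origin
and a thin uniform spherical shell carrying the same total charge produce identical
potentials everywhere outside the shell, so a measurement taken only on an outer
surface cannot distinguish the two charge configurations. The short C sketch below,
with constants, values and function names chosen for this example only, checks that
numerically.]

/* Sketch only: compares the external potential of a point charge with that of a
   uniformly charged spherical shell of the same total charge (shell theorem).
   All names and values here are illustrative, not from the original post. */
#include <stdio.h>
#include <math.h>

#define K 8.9875517873681764e9   /* Coulomb constant, N*m^2/C^2 */

/* potential of a point charge q at distance r from it */
static double v_point(double q, double r)
{
    return K * q / r;
}

/* potential of a thin uniform shell (radius a, total charge q) at distance
   r > a from its centre, summed numerically over n thin rings of the shell */
static double v_shell(double q, double a, double r, int n)
{
    const double pi = acos(-1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double theta = pi * (i + 0.5) / n;               /* ring colatitude */
        double dq = 0.5 * q * sin(theta) * (pi / n);     /* charge on this ring */
        double d = sqrt(r * r + a * a - 2.0 * r * a * cos(theta));
        sum += K * dq / d;                               /* ring's contribution */
    }
    return sum;
}

int main(void)
{
    double q = 1e-9, a = 0.1, r = 0.5;   /* 1 nC; shell radius 10 cm; observer 50 cm out */
    printf("point charge : %.9e V\n", v_point(q, r));
    printf("charged shell: %.9e V\n", v_shell(q, a, r, 200000));
    return 0;
}

[Both lines print the same value up to the numerical integration error, which is the
sense in which the field measured outside under-determines the charges inside.]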

Anyway I will be presenting different objects to my 'chip scientists',
but I will be presenting them in such a way that the sensory measurement is
literally identical.

What I expect to happen is that the field configuration I find emerging in
the guts of the chips will be different, depending on the object, even
though the sensory measurement is identical. The different field
configurations will correspond to the different objects. That is what
subjective experience will look like from the outside.

The chip's 'solution' to the charge configuration will take up a
configuration based on the non-locality...hence the scientists will report
different objects, even when their sensory measurement is identical, and
it is the only apparent access they have to the object (to us).

I think that's more like what you are after... there's no failure to
obey Maxwell's equations, but their prediction as to charge
configuration is not a unique solution. The trick is to realise that the
sensory measurement has to be there in order that _any_ solution be
found, not a _particular_ solution.

pretty simple really. does that make more sense?

cheers

colin






Re: computer pain

2006-12-17 Thread James N Rose



Brent Meeker wrote:
 
 If consciousness is the creation of an inner narrative
 to be stored in long-term memory then there are levels
 of consciousness.  The amoeba forms no memories and so
 is not conscious at all. A dog forms memories and even
 has some understanding of symbols (gestures, words) and
 so is conscious.  In between there are various degrees
 of consciousness corresponding to different complexity
 and scope of learning.

That notion may fit comfortably with your presumptive
ideas about 'memory' -- computer stored, special-neuron
stored, and similar.  But the universe IS ITSELF 'memory
storage' from the start.  Operational rules of performance
-- the laws of nature, so to speak -- are 'memory', and 
inform EVERY organization of action-appropriateness.  It's 
'memory' of the longest-term kind, actually. 

Amoebic behavior embodies more than stimulus-response
actions - consistent with organismic plan 'must eat';
but less than your criterial state of sentient awareness
 - consistent with 'plan dynamics/behaviors'.

The rut that science is in is the presumption that our sentience
is 'the only' sentience form and is the gold standard for 
any/all aware-behavior activity.

Sentience better fits a model of spectrum and degrees; rather
than not-extant / suddenly-extant.

Correct analoging is more challenging with the former, which is
why no AI aficionados want to give up the Cartesian Split way
of thinking and dealing with things - trying to make square 'wheels'
roll in the long run.

Jamie 
 
  http://www.ceptualinstitute.com/uiu_plus/UIUcomplete11-99.htm





RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Colin,

If there is nothing wrong with the equations, it is always possible to predict 
the
behaviour of any piece of matter, right? And living matter is still matter, 
which 
obeys all of the physical laws all of the time, right? It appeared from your 
previous 
posts that you would disagree with this and predict that living matter would 
sometimes 
do surprising, unpredictable things. In that case, your theory is logically 
consistent, 
but you have to find evidence for it, and it would be easier to tease out the 
essential 
unpredictable physical elements and test them in a physics lab.

Stathis 



 Date: Mon, 18 Dec 2006 07:17:10 +1100
 From: [EMAIL PROTECTED]
 Subject: RE: computer pain
 To: everything-list@googlegroups.com
 
 
 
  I'm not sure of the details of your experiments, but wouldn't the most
  direct way to prove what you are saying be to isolate just
  that physical process
  which cannot be modelled? For example, if it is EM fields, set up an
  appropriately
  brain-like configuration of EM fields, introduce some environmental input,
  then
  show that the response of the fields deviates from what Maxwell's
  equations
  would predict.
 
  Stathis Papaioannou
 
 I don't expect any deviation from Maxwell's equations. There's nothing
 wrong with them. It's just that they are merely a very good representation
 of a surface behaviour of the perceived universe in a particular context.
 Just like QM. But it's only the surface. The universe is not made of EM or
 QM or atoms or space. All these things are appearances, and it's what they
 are all actually made of that delivers its appearances.
 
 It's a pretty simple idea and it's been around for 300 years (and it's not
 a substance dualism!). The paper I'm writing at the moment (nearly
 finished) is about how this cultural delusion that the universe is made of
 our models pervades the low-level physical science. It's quite stark...the
 application of situated cognition to knowledge is quite pervasive. You can
 take a vertical slice all the way through the entire epistemological tree
 from social sciences down through psychology..cognitive
 science..ecology..ethology..anthropology ||
 neuroscience...chemistry..physics. The || is the sudden break where
 situated cognition matters and where physics, in particular cosmology is
 almost pathologically intent on the surgical excision of the scientist
 from the universe. Situated cognition applied to metascience at the level
 of physics is simply absent.
 
 You can see it in the desperate drive to make sense of QM maths, as if the
 universe is made of it...that the only way that any sense can be made of
 it is to write complex stories about infinite numbers of universes, all of
 which are somehow explanatory of the weirdness of the maths, rather than
 deal with what the universe is actually made of...when right in front of
 all of them is the perfect way out...start talking about what universes
 must be made of in order that it can implement scientists that have
 perception...to realise that the maths of empirical laws is just a model
 of the stuff, not the stuff.
 
 Cosmologists are the key. They have some sort of mass fantasy going about
 the mathematics they use. Totally unfounded assumptions pervade their
 craft - far worse than any assumption that the universe is not made of
 idealised maths...the thing that gets labeled erroneously 'metaphysics'
 and eschewed.
 
 I have done a cartoon representation of a cosmologist made of stuff in a
 universe of stuff staring at the cosmos wondering where all the stuff is,
 when the fact of being able to stare _at all_ is telling him about the deep
 nature of the cosmos. Poor little deluded cosmologist.
 
 There's nothing wrong with Maxwell's equations. In fact there's nothing
 wrong with any empirical laws. The problem is us...
 
 cheers,
 
 colin
 
 
 
  

_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d





RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Colin,

I think there is a logical contradiction here. You say that the physical models 
do, in fact, explain the 3rd person observable behaviour of a physical system. 
A brain is a physical system with 3rd person observable behaviour. Therefore, 
the models *must* predict *all* of the third person observable behaviour of 
a brain. When a person is handed a complex problem to solve, scratches his head 
and chews his pencil, then writes down his proposed solution to the problem, 
then 
that is definitely 3rd person observable behaviour, and it is definitely due to 
the 
motion of matter which perfectly follows physical laws. So you can in theory 
build 
a model which will predict what the person is going to write down, or at least 
the 
sort of thing a real person might write down, given that classical chaos may 
make 
it impossible to predict what a particular person will do on a particular day.

Now, you would no doubt say that the model will not experience the qualia. 
That's 
OK, but then the model will effectively be a zombie that can behave just like a 
person yet lack phenomenal consciousness, which you don't believe is possible. 
The only way you can retain this belief consistently is if the model would 
*not* be 
able to predict the 3rd person behaviour of a real person. And the only way 
that is 
possible is if there is some aspect of the physics in the brain which is 
inherently 
unpredictable.

Stathis Papaioannou





 Date: Mon, 18 Dec 2006 07:42:38 +1100
 From: [EMAIL PROTECTED]
 Subject: RE: computer pain
 To: everything-list@googlegroups.com
 
 
 Stathis said
 
  I'll let Colin answer, but it seems to me he must say that some aspect of
  brain
  physics deviates from what the equations tell us (and deviates in an
  unpredictable
  way, otherwise it would just mean that different equations are required)
  to be
  consistent. If not, then it should be possible to model the behaviour of a
  brain:
  predict what the brain is going to do in a particular situation, including
  novel situations
  such as those involving scientific research. Now, it is possible that the
  model will
  reproduce the behaviour but not the qualia, because the actual brain
  material is
  required for that, but that would mean that the model will be a
  philosophical zombie,
  and Colin has said that he does not believe that philosophical zombies can
  exist.
  Hence, he has to show not only that the computer model will lack the 1st
  person
  experience, but also lack the 3rd person observable behaviour of the real
  thing;
  and the latter can only be the case if there is some aspect of brain
  physics which
  does not comply with any possible mathematical model.
 
  Stathis Papaioannou
 
  Exactly right...except for the bit where you talk about 'deviation from
 the model'. I expect the EM model to be perfectly right - indeed MUST be
 right or I can't do the experiment because the modelling I do will help me
 design the chips...it must be right or they won't work. It's just that the
 models don't deliver all the result - you have to BE the chips to get
 the whole picture.
 
 What is missing from the model, seamlessly and irrevocably and
  intrinsically... is that it says nothing about the first person
 perspective. You cannot model the first person perspective by definition,
 because every first person perspective is different! The 'fact' of the
 existence of the first person is the invariant, however.
 
  So...All the models are quite right and accurate, but are inherently
 third person descriptions of 'the stuff', not 'the stuff'. When you be
 'the stuff' under the right circumstances there's more to the description.
  And EVERYTHING gets to 'be'...i.e. is forced, implicitly, to uniquely be
 somewhere in the universe and inherits all the properties of that act,
 NONE of which is delivered by empirical laws, which are constructed under
 conditions designed specifically to throw out that
 perspective...and...what's worse...it does it by verifying the laws using
 the FIRST PERSON...to do all scientific measurements...not only that, if
 you don't do it with the first person (measurement/experimental
 observation grounded in the first person of the scientist) you get told
 you are not doing science!
 
 How screwed up is that!
 
 My planned experiment makes chips and on those chips will be probably 4
 intrinsically intermixed 'scientists', all of whom can share each other's
 scientific evidence = first person experiences...whilst they do 'dumb
 science' like test a hypothesis H1 = is the thing there?. By fiddling
 about with the configuration of the scientists you can create
 circumstances where the only way they can agree/disagree is because of the
 first person perspectiveand the whole thing will obey Maxwell's
 equations perfectly well from the outside. Indeed the 'probes' I will
 embed will measure field effects in-situ that are supposed to do what
 Maxwell's equations
