Re: computer pain

2006-12-13 Thread Brent Meeker

James N Rose wrote:
 Stathis,
 
 The reason for lack of responses is that your idea
 goes directly to illuminating why AI systems - as 
 promulgated under current designs of software
 running in hardware matrices - CANNOT emulate living
 systems.  It's an issue that AI advocates intuitively
 and scrupulously AVOID.
 
 Pain in living systems isn't just a self-sensor
 of proper/improper code functioning; it is an embedded
 registration of viable/disrupted matrix state.
 
 And that is something that no current human-contrived 
 system monitors as a CONCURRENT property of software.
 
 For example, we might say that central processors
 regularly 'display pain' .. that we designers/users
 recognize as excess heat .. that burns out motherboards.
 The equipment 'runs a high fever', in other words.
 
 But where living systems are multiple functioning systems
 and have internal ways of gauging and reacting locally and 
 biochemically, vis-à-vis both the variance and the retention
 of sufficient good-operations while bleeding off 'fever',
 hardware systems have no capacity to morph or adapt
 themselves structurally, and so keep on burning up, or wait
 for external aware-structures to command them to stop
 operating for a while and let the equipment cool down.
 
 I maintain that living systems are significantly designed
 such that hardware IS software, and so have a capacity for local
 adaptive self-sensitivity, that human 'contrived' HW/SW systems
 don't and mostly .. can't.
 
 Jamie Rose 
 
 
 Stathis Papaioannou wrote:
 No responses yet to this question. It seems to me a straightforward
 consequence of computationalism that we should be able to write a program
 which, when run, will experience pain, and I suspect that this would be a
 substantially simpler program than one demonstrating general intelligence.
 It would be very easy to program a computer or build a robot that would
 behave just like a living organism in pain, but I'm not sure that this is
 nearly enough to ensure that it is in fact experiencing pain. Any ideas,
 or references to sources that have considered the problem?

 Stathis Papaioannou

I would say that many complex mechanical systems react to pain in a way 
similar to simple animals.  For example, aircraft have automatic shutdowns and 
fire extinguishers.  They can change the flight controls to reduce stress on 
structures.  Whether they feel this pain is a different question.  I think 
they feel it if they incorporate it into a narrative to which values are 
attached for purposes of learning ("Don't do that again, it hurts.").  But 
that's my theory of qualia - a speculative one.
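
The aircraft-style reaction, at least, is easy to sketch as ordinary control 
code (a toy illustration with invented names, not real avionics logic):

  # Toy sketch of the hardwired, reflex-like level: react to damage signals
  # immediately, with no narrative and no learning.
  def reflex_layer(engine_temp_c, fire_detected):
      actions = []
      if fire_detected:
          actions.append("shut down engine")
          actions.append("discharge fire extinguisher")
      if engine_temp_c > 120.0:  # arbitrary overheat threshold
          actions.append("adjust flight controls to reduce stress")
      return actions

  print(reflex_layer(engine_temp_c=150.0, fire_detected=False))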

Brent Meeker




Re: Hello all - My Theory of Everything

2006-12-13 Thread William


Russell Standish wrote:

 On Mon, Dec 11, 2006 at 03:26:59PM -0800, William wrote:
 
  If the universe is computationally simulable, then any universal
   Turing machine will do for a higher hand. In which case, the
   information needed is simply the shortest possible program for
   simulating the universe, the length of which by definition is the
   information content of the universe.
 
  What I meant to compare is 2 situations (I've taken an SAS doing the
  simulations for now, although I do not think it is required):
 
  1) just our universe A consisting of minimal information
  2) An interested SAS in another universe wants to simulate some
  universes; amongst which is also universe A, ours.
 
  Now we live in universe A; but the question we can ask ourselves is
  whether we live in 1) or 2). (Although one can argue there is no actual
  difference).
 
  Nevertheless, my proposition is that we live in 1; since 2 does exist
  but is less probable than 1.
 
  information in 1 = inf(A)
  information in 2 = inf(simulation_A) + inf(SAS) + inf(possible other
  stuff) = inf(A) + inf(SAS) + inf(possible other stuff) > inf(A)
 

 You're still missing the point. If you sum over all SASes and other
 computing devices capable of simulating universe A, the probability of
 being in a simulation of A is identical to simply being in universe A.

 This is actually a theorem of information theory, believe it or not!

I think I'm following your reasoning here; this theorem could also be
used to prove that any probability distribution over universes which
gives a lower or equal probability to a system with less information
must be wrong. Right?

But in this case, could one not argue that there is only a small number
(out of the total) of higher universes containing an SAS, and then
rephrase the statement to "we are not being simulated by another SAS"?





Re: Hello all - My Theory of Everything

2006-12-13 Thread Russell Standish

On Wed, Dec 13, 2006 at 09:14:36AM -, William wrote:
 
 I think I'm following your reasoning here; this theorem could also be
 used to prove that any probability distribution over universes which
 gives a lower or equal probability to a system with less information
 must be wrong. Right?

Essentially that is the Occam razor theorem. Simpler universes have
higher probability.
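
One standard way to make this precise (a sketch in terms of the universal 
prior; the symbols U and K below are standard notation, not something 
specific to this thread): with U a universal prefix machine, the probability 
of a universe A is

\[
P(A) \;=\; \sum_{p \,:\, U(p) = A} 2^{-|p|} \;\approx\; 2^{-K(A)},
\]

where K(A) is the Kolmogorov complexity of A. Simpler universes (smaller 
K(A)) receive higher probability, and a program that first computes a host 
universe containing an SAS who then simulates A is just one of the longer 
programs already counted in the sum - which is why being simulated adds 
nothing extra to the measure of A.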

 
 But in this case, could one not argue that there is only a small number
 (out of the total) of higher universes containing an SAS, and then
 rephrase the statement to "we are not being simulated by another SAS"?
 

By "higher" I gather you mean more complex. But I think you are
implicitly assuming that a more complex universe is needed to simulate
this one, which I think is wrong. All that is needed is Turing
completeness, which even very simple universes have (for instance
Conway's Game of Life).
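
To see how little is required, the entire physics of that universe is one 
update rule. A minimal sketch in Python (illustrative only; names are 
invented):

  # One step of Conway's Game of Life on a set of live (x, y) cells.
  # This update rule is the complete "physics" of a Turing-complete universe.
  from itertools import product

  def life_step(live):
      counts = {}
      for (x, y) in live:
          for dx, dy in product((-1, 0, 1), repeat=2):
              if (dx, dy) != (0, 0):
                  nb = (x + dx, y + dy)
                  counts[nb] = counts.get(nb, 0) + 1
      # Live next step: exactly 3 neighbours, or 2 neighbours and already live.
      return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

  glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
  print(life_step(glider))  # the glider takes its first diagonal step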

Cheers

PS - I'm off tomorrow for the annual family pilgrimage, so I'll be
rather quiet on this list for the next month.


A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au






RE: computer pain

2006-12-13 Thread Stathis Papaioannou


Jamie,

I basically agree with your appraisal of the differences between living
brains and digital computers. However, it should be possible for a general
purpose computer to emulate the behaviour of a biological system in software.
After all, biological systems are just comprised of matter following the
laws of physics, which are well understood and deterministic at the size
scales of interest. When it comes to neural tissue, the emulation should be
able to replace the original, provided that it is run on sufficiently fast
hardware and has appropriate interfaces for input and output.

While it would be extremely difficult to emulate a particular human brain
(as in mind uploading), it should be easier to emulate a simplified generic
brain, and easier again to emulate a single simplified perceptual function,
such as pain. This means that it should be possible to store on a hard disk
lines of code which, when run on a PC, will result in the program
experiencing pain; perhaps excruciating pain beyond what humans can imagine,
if certain parameters in the program are appropriately chosen. What might a
simple example of such code look like? Should we try to determine what the
painful programs are as a matter of urgency, in order to avoid using them in
subroutines in other programs?
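
To make the question concrete, the naive candidate would look something like 
the sketch below (purely illustrative, with invented names; nothing about it 
licenses the claim that anything is actually felt - which is exactly the 
problem):

  # A naive "pain" module: an intensity parameter drives avoidance behaviour.
  class Nociceptor:
      def __init__(self, gain=1.0):
          self.gain = gain   # the "appropriately chosen parameter"
          self.pain = 0.0

      def stimulus(self, damage):
          self.pain = self.gain * damage
          # Behaves "just like a living organism in pain"...
          return "withdraw and avoid" if self.pain > 0.5 else "carry on"

  print(Nociceptor(gain=100.0).stimulus(damage=0.9))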

Stathis Papaioannou


 Date: Tue, 12 Dec 2006 23:19:05 -0800
 From: [EMAIL PROTECTED]
 To: everything-list@googlegroups.com
 Subject: Re: computer pain
 
 
 Stathis,
 
 The reason for lack of responses is that your idea
 goes directly to illuminating why AI systems - as 
 promulgated under current designs of software
 running in hardware matrices - CANNOT emulate living
 systems.  It's an issue that AI advocates intuitively
 and scrupulously AVOID.
 
 Pain in living systems isn't just a self-sensor
 of proper/improper code functioning; it is an embedded
 registration of viable/disrupted matrix state.
 
 And that is something that no current human-contrived 
 system monitors as a CONCURRENT property of software.
 
 For example, we might say that central processors
 regularly 'display pain' .. that we designers/users
 recognize as excess heat .. that burns out motherboards.
 The equipment 'runs a high fever', in other words.
 
 But where living systems are multiple functioning systems
 and have internal ways of gauging and reacting locally and 
 biochemically, vis-à-vis both the variance and the retention
 of sufficient good-operations while bleeding off 'fever',
 hardware systems have no capacity to morph or adapt
 themselves structurally, and so keep on burning up, or wait
 for external aware-structures to command them to stop
 operating for a while and let the equipment cool down.
 
 I maintain that living systems are significantly designed
 such that hardware IS software, and so have a capacity for local
 adaptive self-sensitivity, that human 'contrived' HW/SW systems
 don't and mostly .. can't.
 
 Jamie Rose 
 
 
 Stathis Papaioannou wrote:
  
  No responses yet to this question. It seems to me a straightforward
  consequence of computationalism that we should be able to write a program
  which, when run, will experience pain, and I suspect that this would be a
  substantially simpler program than one demonstrating general intelligence.
  It would be very easy to program a computer or build a robot that would
  behave just like a living organism in pain, but I'm not sure that this is
  nearly enough to ensure that it is in fact experiencing pain. Any ideas,
  or references to sources that have considered the problem?
  
  Stathis Papaioannou
 
 
  






RE: computer pain

2006-12-13 Thread Stathis Papaioannou


Brent Meeker writes:

 I would say that many complex mechanical systems react to pain in a way 
 similar to simple animals.  For example, aircraft have automatic shutdowns 
 and fire extinguishers.  They can change the flight controls to reduce stress 
 on structures.  Whether they feel this pain is a different question.  I 
 think they feel it if they incorporate it into a narrative to which values 
 are attached for purposes of learning ("Don't do that again, it hurts.").  
 But that's my theory of qualia - a speculative one.

Pain mostly comes before learning. Infants are born with the 
ability to experience pain, so they learn to avoid activities which 
cause pain. It seems to be hardwired at a very basic level, which 
makes me think that it ought to be easier to implement in an AI than 
more complex cognitive processes and behaviours. But how would 
a behaviour such as an aircraft's reaction to a fire on board be 
characterised as painful in the way an infant putting its hand in a 
flame is painful? If the aircraft's experience is not painful, what can 
we do to make it more like the baby's?

Stathis Papaioannou




Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-13 Thread 1Z


Stathis Papaioannou wrote:
 Bruno Marchal writes:
 
  On 12 Dec 2006, at 11:16, Stathis Papaioannou wrote:
 
  
  
   Bruno Marchal writes (quoting Tom Caylor):
  
   In my view, your motivation is not large enough.  I am also motivated
   by a problem: the problem of evil.  I don't think the real problem of
   evil is solved or even really addressed with comp.  This is because
   comp cannot define evil correctly.  I will try to explain this more.
  
  
   I agree that the problem of evil (and thus the equivalent problem of
   Good) is interesting. Of course it is not well addressed by the two
   current theories of everything: Loop gravity and String theory. With
   that respect the comp hyp can at least shed some light on it, and of
   course those light are of the platonic-plotinus type where the
   notion
   of goodness necessitates the notion of truth to begin with. I say more
   below.
  
   Surely you have to acknowledge that there is a fundamental difference
   between matters of fact and matters of value.
 
 
  Yes. Sure. And although I think that science is a value by itself, I am
  not sure any scientific proposition can be used in judging those value.
  But then, I also believe that this last sentence can be proved in comp
  theories.
 
 
 
   Science can tell us how to
   make a nuclear bomb and the effects a nuclear explosion will have on
   people
   and the environment, but whether it is good or bad to use such a
   weapon
   is not an empirical question at all.
 
 
  Hmmm. This is not entirely true. We can test pain killers on people,
  and we can see in scientific publications statements like "the drug X
  seems to provide help to patients suffering from disease Y."
  Then it can be said that dropping a nuclear bomb on a city is bad for
  such or such reason, and that it can be good in preventing bigger use
  of nuclear weapons, etc. Again, we don't have to define good and bad
  for reasoning about it once we agree on some primitive proposition
  (that being rich and healthy is better than being poor and sick, for
  example).

 OK, but the point is that the basic definition of bad is arbitrary.

That just isn't true.

  It might seem
 that there would be some consensus, for example that torturing innocent people
 is an example of bad, but it is possible to assert without fear of logical 
 or
 empirical contradiction that torturing innocent people is good.

People don't want to be tortured. Isn't that empirical proof?

 There are people
 in the world who do in fact think there is nothing wrong with torture and 
 although
 they are not very nice people, they are not as a result of having such a 
 belief deluded.

I think they are. Can you prove they are not?

  Recall that even the (although very familiar) notion of natural numbers
  or integers cannot be defined unambiguously in science. Science asks us
  only to be clear on primitive principles so that we can share some
  reasoning on those undefinable entities.

 But there is a big difference between Pythagoras saying "17 is prime" and
 Pythagoras saying "eating beans is bad". You can't say that "prime" and
 "bad" are equivalent in that they both need to be axiomatically defined.

Badness can be axiomatically defined (treating people as means rather
than ends, acting on a maxim you would not wish to be universal law, not
doing as you would be done by, causing unnecessary suffering).

   You could say that "I believe blowing people up is bad" is a statement
   of empirical fact, either true or false depending on whether you are
   accurately reporting your belief. However, "blowing people up is bad" is
   a completely different kind of statement which no amount of empirical
   evidence has any bearing on.
 
 
 
  It really depends on the axioms of your theory. A theory of good and
  bad for a lobian machine can be based on the idea of 3-surviving or
  1-surviving, etc. And then we can reason.
  Now I do agree with you that good and bad can probably not be defined
  intrinsically in a mathematical way. But a richer lobian machine can
  define some notion of self-referential correctness for a less rich
  lobian machine and then reason about it, and then lift the result in
  some interrogative way about herself.
  Some suicide phenomenon with animals could be explained in such a way.
  You have the Parfit book "Reasons and Persons". There are many pieces of
  valid reasoning (and non normative) on ethical points in that book.
  Science can handle values and relation between values as far as it does
  not judge normatively those values.
 
   If you survey a million people and all of them believe that blowing
   up people is bad, you have shown that most people believe that
   blowing up
   people is bad, but you have not shown that blowing up people is bad.
 
 
  Again this depends on your theory. If you have the naive theory that if
  a majority thinks that X is bad for them, then X is bad in the context
  of that majority, then this 

Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-13 Thread Bruno Marchal


On 13 Dec 2006, at 02:01, Stathis Papaioannou wrote:

 OK, but the point is that the basic definition of bad is arbitrary.


Perhaps, but honestly I am not sure. In comp, we can define a (very 
platonist) notion of bad. The simpler and stronger one is just the 
falsity f. Then Bf, BBf, BBBf, BBBBf, etc. give a sequence 
of less and less badness, which, translated in the Z (material) 
hypostases, gives Df, DDf, DDDf, DDDDf, ... which are better 
candidates for that notion of badness.
(Recall that G does not prove Bf -> f, and that G* proves DBf (the 
astonishing Gödelian consistency of being inconsistent).)
(Note also that G* *does* prove Bf -> f.)
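
For readers less familiar with the notation, the same facts in standard 
provability-logic symbols, with B written as the box and D as the diamond 
(this is only a restatement of the parenthetical remarks above, not an 
addition):

\[
G \nvdash \Box\bot \rightarrow \bot, \qquad
G^{*} \vdash \Box\bot \rightarrow \bot, \qquad
G^{*} \vdash \Diamond\Box\bot .
\]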


 It might seem
 that there would be some consensus, for example that torturing 
 innocent people
 is an example of bad, but it is possible to assert without fear of 
 logical or
 empirical contradiction that torturing innocent people is good.

I disagree. Mainly for the reason alluded above. Please note I 
understand that there is no purely logical contradiction (f) in 
asserting that torture is good, but the purely logical operates at 
the third person level, in which there is no pain at all. Once you 
take incompleteness into account this should be much less evident, and 
much more fuzzy. There is nothing illogical with an altimeter (in a 
plane) giving wrong information (like "the plane is at altitude 1000", 
instead of the correct 500), but you can understand this can lead 
to a catastrophe. Any BB...Bf can be seen as a promise for a 
catastrophe.


 There are people
 in the world who do in fact think there is nothing wrong with torture 
 and although
 they are not very nice people, they are not as a result of having such 
 a belief deluded.


Honestly I doubt it. Of course some people can believe that torture can 
be good for their own life, in case torture can prevent the enemy from 
dropping some bomb. Of course some people are cynical and can, like Sade, 
defend torture with the (wrong, imo) idea that nature defends the 
right of those who have the power, and thus that they have the right to 
follow their perverse sexual compulsions, but this could mean that they 
are inconsistent (they have some BBB...Bf as implicit belief). Then 
from the divine (starred, G*) pov, they are (globally) inconsistent 
(although they cannot know it).





 Recall that even the (although very familiar) notion of natural 
 numbers
 or integers cannot be defined unambiguously in science. Science asks 
 us
 only to be clear on primitive principles so that we can share some
 reasoning on those undefinable entities.

  But there is a big difference between Pythagoras saying "17 is prime"
  and Pythagoras saying "eating beans is bad". You can't say that "prime"
  and "bad" are equivalent in that they both need to be axiomatically
  defined.


Hmmm... "prime" and "bad" cannot be equivalent in that sense. But 
"being a natural number" and "bad" can. The nuance is that I grant the 
notion of natural number before defining "prime". But my belief in 
natural numbers (my belief in the standard model of Peano Arithmetic, 
Arithmetical truth) is as hard, even impossible, to define as is the 
notion of truth, good, etc.
Defining "prime" is easy: (~(x = 1) & Ay((y divides x) -> (y = 1 v y = 
x))), where (a divides b) is a macro for Ez(az = b).
Defining "number" is just not possible, actually. Even with a richer 
theory or second-order logic you will have to rely implicitly on the 
standard model of the higher theory, which is less palatable than the 
standard model of PA.



 The problem is that some people think good and bad are on a par 
 with
 descriptive terms that every sentient species, regardless of their 
 psychology,
 could agree on. They are not.


Not in any normative sense. But once we bet on a theory (like comp), 
then we get mathematical tools which can provide a general explanation of 
what is bad, and also explain why such a definition cannot be normative, 
making the bad/good distinction an ideal goal for sufficiently complex 
self-sustaining machine societies.




 Every sentient species would agree that a
 nuclear bomb going off in your face will kill you,


Bad example for this list!  (Cf. quantum immortality or comp 
immortality!). But OK, this is beside the point.



 but some would say this was
 good and others would say it was bad.

Yes, but unless people are insane, most will give or try to give a 
rationale. In such a case it is a question of utility with respect to some 
notion of good and bad. It is not related to the hardness of defining 
completely what is good and what is bad. Like killing: killing can be 
considered as bad but can be accepted in self-defense.



 I think a message spelt out across the sky by stars simultaneously 
 going
 nova would probably do it for me.

I would bet I'm dreaming instead ... :)


 I would at least believe that these were
 beings with godlike powers responsible, but would reserve judgement on
 whether they had also created the universe. Can't 

Re: Hello all - My Theory of Everything

2006-12-13 Thread Bruno Marchal


On 13 Dec 2006, at 02:45, Russell Standish wrote:

 Essentially that is the Occam razor theorem. Simpler universes have
 higher probability.


In the ASSA(*) realm I can give sense to this. I think Hal Finney and 
Wei Dai have defended something like this. But in the comp RSSA(**) 
realm, strictly speaking even the notion of one universe (even 
considered among other universes or in a multiverse à-la Deutsch) does 
not make sense unless the comp substitution level is *very* low. Stable 
appearances of local worlds emerge from *all* computations, making any 
apparent (and thus sufficiently complex) world not Turing-emulable. 
Recall that "I am a machine" entails "the apparent universe cannot be a 
machine" (= cannot be Turing-emulable; cf. UDA(***)).

Bruno

For the new people I recall the acronym:
(*) ASSA = absolute self-sampling assumption
(**) RSSA = relative self-sampling assumption
The SSA idea in the ASSA realm comes from Nick Bostrom, if I 
remember correctly.
(***) UDA: see for example 
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm

http://iridia.ulb.ac.be/~marchal/






Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-13 Thread 1Z


Bruno Marchal wrote:

  It might seem
  that there would be some consensus, for example that torturing
  innocent people
  is an example of bad, but it is possible to assert without fear of
  logical or
  empirical contradiction that torturing innocent people is good.

 I disagree. Mainly for the reason alluded above. Please note I
 understand that there is no purely logical contradiction (f) in
 asserting that torture is good, but the purely logical operates at
 the third person level, in which there is no pain at all. Once you
 take incompleteness into account this should be much less evident, and
 much more fuzzy. There is nothing illogical with an altimeter (in a
 plane) giving a wrong information (like the plane is at altitude =
 1000, instead of the correct 500), but you can understand this can lead
 to a catastrophe.


Assuming catastrophes are bad. But that hardly shows that
falsehood and evil are identical, or even co-extensive.
There can be good falsehoods (comforting illusions) and
ills that have nothing to do with falsehood.





Re: computer pain

2006-12-13 Thread James N Rose

Stathis,

As I was reading your comments this morning, an example
crossed my mind that might fit your description of in-place
code lines that monitor 'dysfunction' and exist in situ as
a 'pain' alert .. that would be error-evaluating 'check-sum'
computations.

In a functional way, parallel check-summing emulates at least
the first part of an 'experience pain' rete .. the initial
establishment of a signal message 'something is wrong'.
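
In code, that bare signal might look like the following (a minimal sketch of 
the check-sum idea, with invented names):

  # In-situ integrity check: a stored checksum is recomputed alongside normal
  # operation; a mismatch raises the bare message "something is wrong".
  import zlib

  def monitor(block, stored_crc):
      if zlib.crc32(block) != stored_crc:
          raise RuntimeError("something is wrong")  # the 'pain' signal

  data = b"viable matrix state"
  crc = zlib.crc32(data)
  monitor(data, crc)            # silent: state intact
  # monitor(data + b"!", crc)   # would raise: state disrupted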

What I'm getting at is that the question could be 
approached best by not retrofitting to 'experiential
qualia' .. where we don't have a reasonable way to
specify the different systems' 'experiencing', but we
do have a way to identify analogs of process.

For example - it's possible to identify in architecture
and materials where 'points of highest stress' occur.

The physicality of structures may indeed internally be
experiencing higher-pressure nodes as 'pain' - where
the only lack in the chain of our interaction with 
'inanimate' structures is OUR lack-of-wisdom in 
recognizing that those stress points are in fact 
'pain-points' for those kinds of systems.

For living systems, the nature of the neural connections is 
that the communication lines are still raw and open - back
to the locus of the problem (pain site).  In non-living
structures, any break or disruption totally shuts down
the back-reporting -- 'pain' disappears when all communication
'about' the pain-source is taken away or simply breaks down.

Jamie





Re: computer pain

2006-12-13 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Brent Meeker writes:
 
 I would say that many complex mechanical systems react to pain in a way 
 similar to simple animals.  For example, aircraft have automatic shutdowns 
 and fire extinguishers.  They can change the flight controls to reduce 
 stress on structures.  Whether they feel this pain is a different 
 question.  I think they feel it if they incorporate it into a narrative to 
 which values are attached for purposes of learning ("Don't do that again, it 
 hurts.").  But that's my theory of qualia - a speculative one.
 
 Pain mostly comes before learning. Infants are born with the 
 ability to experience pain, so they learn to avoid activities which 
 cause pain. 

But the learning is a higher level thing.  The experience has two levels.  One 
is just hardwired reactions, pulling your hand back from the fire.  The 
aircraft already has this, as do some very simple organisms.  The other is part 
of consciousness, which I speculate is creating a narrative in memory with 
attached emotional values.  Babies certainly feel pain in the first sense, but 
they seem to have to learn to cry when hurt.  I've accidentally stuck one of my 
infant children when diapering them and gotten no reaction.

It seems to be hardwired at a very basic level, which 
 makes me think that it ought to be easier to implement in an AI than 
 more complex cognitive processes and behaviours. But how would 
 a behaviour such as an aircraft's reaction to a fire on board be 
 characterised as painful in the way an infant putting its hand in a 
 flame is painful? If the aircraft's experience is not painful, what can 
 we do to make it more like the baby's?

Add the narrative memory with values attached and then the ability to review 
that memory when contemplating future actions.
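
A toy sketch of that recipe (invented names, speculative; it shows the 
bookkeeping, not the feeling):

  # Narrative memory with attached values, reviewed before acting.
  narrative = []  # episodes of (situation, action, value)

  def record(situation, action, value):
      # Negative value = "Don't do that again, it hurts."
      narrative.append((situation, action, value))

  def contemplate(situation, candidate_actions):
      # Review the narrative; prefer actions that did not hurt before.
      def expected(action):
          past = [v for s, a, v in narrative if (s, a) == (situation, action)]
          return sum(past) / len(past) if past else 0.0
      return max(candidate_actions, key=expected)

  record("hand near flame", "reach in", -10.0)
  print(contemplate("hand near flame", ["reach in", "pull back"]))  # pull back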

Brent Meeker






Re: Evil ? (was: Hypostases

2006-12-13 Thread Brent Meeker

1Z wrote:
 
 Stathis Papaioannou wrote:
 Bruno Marchal writes:
 On 12 Dec 2006, at 11:16, Stathis Papaioannou wrote:


 Bruno Marchal writes (quoting Tom Caylor):

 In my view, your motivation is not large enough.  I am also motivated
 by a problem: the problem of evil.  I don't think the real problem of
 evil is solved or even really addressed with comp.  This is because
 comp cannot define evil correctly.  I will try to explain this more.

 I agree that the problem of evil (and thus the equivalent problem of
 Good) is interesting. Of course it is not well addressed by the two
 current theories of everything: Loop gravity and String theory. With
 that respect the comp hyp can at least shed some light on it, and of
 course those light are of the platonic-plotinus type where the
 notion
 of goodness necessitates the notion of truth to begin with. I say more
 below.
  Surely you have to acknowledge that there is a fundamental difference
 between matters of fact and matters of value.

 Yes. Sure. And although I think that science is a value by itself, I am
 not sure any scientific proposition can be used in judging those value.
 But then, I also believe that this last sentence can be proved in comp
 theories.



 Science can tell us how to
 make a nuclear bomb and the effects a nuclear explosion will have on
 people
 and the environment, but whether it is good or bad to use such a
 weapon
 is not an empirical question at all.

 Hmmm. This is not entirely true. We can test pain killer on people,
 and we can see in scientific publication statements like the drugs X
 seem to provide help to patient suffering from disease Y.
 Then it can be said that dropping a nuclear bomb on a city is bad for
 such or such reason, and that it can be good in preventing bigger use
  of nuclear weapons, etc. Again, we don't have to define good and bad
 for reasoning about it once we agree on some primitive proposition
 (that being rich and healthy is better than being poor and sick for
 example).
 OK, but the point is that the basic definition of bad is arbitrary.
 
  That just isn't true.
 
  It might seem
 that there would be some consensus, for example that torturing innocent 
 people
 is an example of bad, but it is possible to assert without fear of logical 
 or
 empirical contradiction that torturing innocent people is good.
 
 People don't want to be tortured. Isn't that empirical proof?
 
 There are people
 in the world who do in fact think there is nothing wrong with torture and 
 although
  they are not very nice people, they are not as a result of having such a 
 belief deluded.
 
 I think they are. Can you prove they are not?
 
 Recall that even the (although very familiar) notion of natural numbers
 or integers cannot be defined unambiguously in science. Science asks us
 only to be clear on primitive principles so that we can share some
 reasoning on those undefinable entities.
 But there is a big difference between Pythagoras saying 17 is prime and 
 Pythagoras
 saying that eating beans is bad. You can't say that prime and bad are 
 equivalent
 in that they both need to be axiomatically defined.
 
 Badness can be axiomatically defined (treating people as means rather
 than ends,
 acting on a maxim you would not wish to be universal law, not
  doing as you would be done by, causing unnecessary suffering).

But such a definition doesn't make it so.

I think discussions of good and evil go astray because they implicitly assume 
there is some objective good and evil.  In fact all values are personal; only 
individuals experience suffering and joy.  Rules such as Kant's (which by the 
way says you shouldn't treat people *only* as means) are attempts to derive 
social, ethical rules that provide for the realization of individual values.  
But individuals differ, and so ethical rules always have exceptions in practice. 
Everybody can agree that *their* suffering is bad; but that doesn't show that 
making other people suffer is bad - it is necessary for society to be able to 
punish people.

Brent Meeker






Re: Evil ? (was: Hypostases

2006-12-13 Thread 1Z


Brent Meeker wrote:
 1Z wrote:
 
  Stathis Papaioannou wrote:
  Bruno Marchal writes:
  On 12 Dec 2006, at 11:16, Stathis Papaioannou wrote:
 
 
  Bruno Marchal writes (quoting Tom Caylor):
 
  In my view, your motivation is not large enough.  I am also motivated
  by a problem: the problem of evil.  I don't think the real problem of
  evil is solved or even really addressed with comp.  This is because
  comp cannot define evil correctly.  I will try to explain this more.
 
  I agree that the problem of evil (and thus the equivalent problem of
  Good) is interesting. Of course it is not well addressed by the two
  current theories of everything: Loop gravity and String theory. With
  that respect the comp hyp can at least shed some light on it, and of
  course those light are of the platonic-plotinus type where the
  notion
  of goodness necessitates the notion of truth to begin with. I say more
  below.
   Surely you have to acknowledge that there is a fundamental difference
  between matters of fact and matters of value.
 
  Yes. Sure. And although I think that science is a value by itself, I am
  not sure any scientific proposition can be used in judging those value.
  But then, I also believe that this last sentence can be proved in comp
  theories.
 
 
 
  Science can tell us how to
  make a nuclear bomb and the effects a nuclear explosion will have on
  people
  and the environment, but whether it is good or bad to use such a
  weapon
  is not an empirical question at all.
 
  Hmmm. This is not entirely true. We can test pain killer on people,
  and we can see in scientific publication statements like the drugs X
  seem to provide help to patient suffering from disease Y.
  Then it can be said that dropping a nuclear bomb on a city is bad for
  such or such reason, and that it can be good in preventing bigger use
   of nuclear weapons, etc. Again, we don't have to define good and bad
  for reasoning about it once we agree on some primitive proposition
  (that being rich and healthy is better than being poor and sick for
  example).
  OK, but the point is that the basic definition of bad is arbitrary.
 
   That just isn't true.
 
   It might seem
  that there would be some consensus, for example that torturing innocent 
  people
  is an example of bad, but it is possible to assert without fear of 
  logical or
  empirical contradiction that torturing innocent people is good.
 
  People don't want to be tortured. Isn't that empirical proof?
 
  There are people
  in the world who do in fact think there is nothing wrong with torture and 
  although
   they are not very nice people, they are not as a result of having such a 
  belief deluded.
 
  I think they are. Can you prove they are not?
 
  Recall that even the (although very familiar) notion of natural numbers
  or integers cannot be defined unambiguously in science. Science asks us
  only to be clear on primitive principles so that we can share some
  reasoning on those undefinable entities.
  But there is a big difference between Pythagoras saying 17 is prime and 
  Pythagoras
  saying that eating beans is bad. You can't say that prime and bad are 
  equivalent
  in that they both need to be axiomatically defined.
 
  Badness can be axiomatically defined (treating people as means rather
  than ends,
  acting on a maxim you would not wish to be universal law, not
   doing as you would be done by, causing unnecessary suffering).

 But such a definition doesn't make it so.

 I think discussions of good and evil go astray because they implicitly assume 
 there is some objective good and evil.  In fact all values are personal, only 
 individuals experience suffering and joy.

Only individuals can add numbers up; that doesn't make maths
subjective.

  Rules such as Kant's (which by the way says you shouldn't treat people 
 *only* as means) are attempts to derive social, ethical rules that provide for 
 the realization of individual values.

Kant's is explicitly  more than that.

  But individuals differ and so ethical rules always have exceptions in 
 practice.

All that means is that you can't have rules along the lines
of "don't tie anyone up and spank them", since some people
enjoy it. It doesn't stop you having more abstract rules. Like
Kant's.

  Everybody can agree that *their* suffering is bad; but that doesn't show 
 that making other people suffer is bad
 - it is necessary for society to be able to punish people.

"X is bad" doesn't mean you shouldn't do it under any
circumstances. The alternative -- in this case letting criminals
go unpunished -- might be worse.


 Brent Meeker



Re: Evil ?

2006-12-13 Thread Brent Meeker

1Z wrote:
 
 Brent Meeker wrote:
 1Z wrote:
 Stathis Papaioannou wrote:
 Bruno Marchal writes:
 On 12 Dec 2006, at 11:16, Stathis Papaioannou wrote:

 Bruno Marchal writes (quoting Tom Caylor):

 In my view, your motivation is not large enough.  I am also motivated
 by a problem: the problem of evil.  I don't think the real problem of
 evil is solved or even really addressed with comp.  This is because
 comp cannot define evil correctly.  I will try to explain this more.
 I agree that the problem of evil (and thus the equivalent problem of
 Good) is interesting. Of course it is not well addressed by the two
 current theories of everything: Loop gravity and String theory. With
 that respect the comp hyp can at least shed some light on it, and of
 course those light are of the platonic-plotinus type where the
 notion
 of goodness necessitates the notion of truth to begin with. I say more
 below.
  Surely you have to acknowledge that there is a fundamental difference
 between matters of fact and matters of value.
 Yes. Sure. And although I think that science is a value by itself, I am
 not sure any scientific proposition can be used in judging those value.
 But then, I also believe that this last sentence can be proved in comp
 theories.



 Science can tell us how to
 make a nuclear bomb and the effects a nuclear explosion will have on
 people
 and the environment, but whether it is good or bad to use such a
 weapon
 is not an empirical question at all.
 Hmmm. This is not entirely true. We can test pain killer on people,
 and we can see in scientific publication statements like the drugs X
 seem to provide help to patient suffering from disease Y.
 Then it can be said that dropping a nuclear bomb on a city is bad for
 such or such reason, and that it can be good in preventing bigger use
  of nuclear weapons, etc. Again, we don't have to define good and bad
 for reasoning about it once we agree on some primitive proposition
 (that being rich and healthy is better than being poor and sick for
 example).
 OK, but the point is that the basic definition of bad is arbitrary.
 That just isn't true.

  It might seem
 that there would be some consensus, for example that torturing innocent 
 people
 is an example of bad, but it is possible to assert without fear of 
 logical or
 empirical contradiction that torturing innocent people is good.
 People don't want to be tortured. Isn't that empirical proof?

 There are people
 in the world who do in fact think there is nothing wrong with torture and 
 although
 they are not very nice people, they are not as a result of having such a 
 belief deluded.
 I think they are. Can you prove they are not?

 Recall that even the (although very familiar) notion of natural numbers
 or integers cannot be defined unambiguously in science. Science asks us
 only to be clear on primitive principles so that we can share some
 reasoning on those undefinable entities.
 But there is a big difference between Pythagoras saying 17 is prime and 
 Pythagoras
 saying that eating beans is bad. You can't say that prime and bad are 
 equivalent
 in that they both need to be axiomatically defined.
 Badness can be axiomatically defined (treating people as means rather
 than ends,
 acting on a maxim you would not wish to be universal law, not
 doing as you would be done by, causing unnecessary suffering).
 But such a definition doesn't make it so.

 I think discussions of good and evil go astray because they implicitly 
 assume there is some objective good and evil.  In fact all values are 
 personal, only individuals experience suffering and joy.
 
 Only individuals can add numbers up, that doesn't make maths
 subjective.

That depends on what you mean by "subjective".  Math is objective in the sense 
that everybody agrees on it.  But it's subjective in the sense that it depends 
on minds (subjects).  Good and evil are not even objective in the sense of 
universal agreement, except possibly in the self-referential form such as "My 
suffering is bad."  So I think concepts of good and evil need to be built on 
the more fundamental personal values.

 
  Rules such as Kant's (which by the way says you shouldn't treat people 
 *only* as means) are attempts to derive social, ethical rules that provide 
 for the realization of individual values.
 
 Kant's is explicitly  more than that.

Sure.  I was just correcting the common misquote.

 
  But individuals differ and so ethical rules always have exceptions in 
 practice.
 
 All that means is that you can't have rules along the lines
 of don't tie anyone up and spank them since some people
 enjoy it. It doesn't stop you having more abstract rules. Like
 Kant's.

But the problem is justifying the rules.  For example, there is a rule here 
that it is wrong to drive your car more than 70 mph.  It's a rule balancing 
risk of accident against time spent traveling.  Yet more than 80% of the 
people break this rule.  Their personal balance of risk and time is different.

 
  

RE: computer pain

2006-12-13 Thread Colin Geoffrey Hales

Hi Stathis/Jamie et al.
I've been busy elsewhere in self-preservation mode, deleting emails
madly... frustrating, with so many threads left hanging... oh well... but
I couldn't go past this particular dialog.

I am having trouble with the idea that you actually believe the below to be
the case! Lines of code that experience pain? Upon what law of physics is
that based?

Which one hurts more:

if (INPUT A) = '1' then {
   OUTPUT "OUCH!"
}
or
if (INPUT A) = '1' then {
   OUTPUT "OUCH!OUCH!OUCH!OUCH!"
}
or
if (INPUT A) = '10' then {
   OUTPUT "OUCH!OUCH!OUCH!OUCH!"
}

Also: in a distributed application... if I put the program on Earth, the
input on Mars and the CPU on the Moon, which bit actually does the hurting,
and when? It's still a program, still running - functionally the same
(time delays, I know - not quite the same... but you get the idea).

The idea is predicated on the proven non-existence of a physical mechanism
for experience - that it somehow equates with manipulation of abstract
symbols as information rather than the fabric of reality as information;
that pretending to be a neuron necessarily results in everything that a
neuron participates in as a chunk of matter.

It also completely ignores the ROLE of the experiences. There's a reason
for them. Unless you know the role you cannot assume that the software
model will inherit that role. With no role, why bother with it? I don't
have to put OUCH!OUCH!OUCH! in the above.

What you are talking about is 'strong-AI' --- its functionalist
assumptions need to be carefully considered.

Another issue: if a life-like artefact visibly behaves like it is in agony,
the only things actually getting hurt are the humans watching it, who have
real experiences and empathy based on real qualia. It might be OK if it
were play. But otherwise? Hmmm.

cheers,

colin



 Jamie,

 I basically agree with your appraisal of the differences
 between living brains and digital computers. However, it
 should be possible for a general purpose computer to
 emulate the behaviour of a biological system in
 software. After all, biological systems are just
 comprised of matter following the laws of
 physics, which are well understood and deterministic
 at the size scales of interest.

 When it comes to neural tissue, the emulation should be
 able to replace the original provided that it is run on
 sufficiently fast hardware and has appropriate
 interfaces for input and output.

 While it would be extremely difficult to emulate a
 particular human brain (as in mind uploading), it should
 be easier to emulate a simplified generic brain, and easier
 again to emulate a single simplified perceptual function,
 such as pain. This means that it should be possible to store
 on a hard disk lines of code which, when
 run on a PC, will result in the program experiencing pain;
 perhaps excruciating pain beyond what
 humans can imagine, if certain parameters in the program
 are appropriately chosen. What might a simple example of
 such code look like? Should we try to determine what
 the painful programs are as a matter of urgency,
 in order to avoid using them in
 subroutines in other programs?

 Stathis Papaioannou







