Re: [agi] Pure reason is a disease.

2007-06-20 Thread YKY (Yan King Yin)

On 6/19/07, Eric Baum [EMAIL PROTECTED] wrote:


The modern feature is that whole peoples have chosen to reproduce at
half replacement level. In case you haven't thought about the
implications of that, that means their genes, for example, are
vanishing from the pool by a factor of 2 every 20 years or so.
Won't take long before they are gone.
I don't doubt there is a good element of
K-strategy in human makeup, but evidently
the K-strategy programming is a bit out of whack.

I expect this was caused by our mental programming advancing much
faster, at the cultural, meme, etc level, in the presence of
language and printing presses etc, than  evolution of the
genome could keep up with.
My guess is evolution will catch up and correct this in a
generation or two, but in the meantime it is going to have
substantial demographic effects.


"Evolution of the genome will catch up" is a very curious notion.  Can you
elaborate on this?

YKY


Re: [agi] Pure reason is a disease.

2007-06-20 Thread Eric Baum

YKY On 6/19/07, Eric Baum [EMAIL PROTECTED] wrote:
 The modern feature is that whole peoples have chosen to reproduce
 at half replacement level. In case you haven't thought about the
 implications of that, that means their genes, for example, are
 vanishing from the pool by a factor of 2 every 20 years or so.
 Won't take long before they are gone.  I don't doubt there is a
 good element of K-strategy in human makeup, but evidently the
 K-strategy programming is a bit out of whack.
 
 I expect this was caused by our mental programming advancing much
 faster, at the cultural, meme, etc level, in the presence of
 language and printing presses etc, than evolution of the genome
 could keep up with.  My guess is evolution will catch up and
 correct this in a generation or two, but in the meantime it is
 going to have substantial demographic effects.

YKY "Evolution of the genome will catch up" is a very curious notion.
YKY Can you elaborate on this?

The people who are motivated to have more babies will have more babies
than the people not so motivated. The genes causing them to want to
have more babies will increase in frequency in the gene pool.
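
To put numbers on that, here is a toy sketch with made-up population sizes, taking a generation as roughly 20 years: a group reproducing at half the replacement rate contributes half as many children each generation, so its share of the gene pool shrinks by about a factor of 2 per generation.

/* halving.cpp - toy sketch, hypothetical numbers: a group reproducing at
   half the replacement rate loses roughly half its share of the gene pool
   per generation. */
#include <iostream>
using namespace std;

int main() {
  double low  = 1000;   // adults in the group reproducing at half replacement
  double rest = 9000;   // adults in the group reproducing at replacement
  for (int gen = 0; gen <= 5; ++gen) {
    cout << "generation " << gen << ": share of pool = "
         << low / (low + rest) << "\n";
    low  *= 0.5;        // half replacement: next generation is half as large
    rest *= 1.0;        // replacement: next generation stays the same size
  }
  return 0;
}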



Re: [agi] Pure reason is a disease.

2007-06-19 Thread Eric Baum

Charles N.B.: People have practiced birth control as far back as we
Charles have information.  Look into the story of Oedipus Rex.  Study
Charles the histories of the Polynesians.  The only modern feature is
Charles that we are now allowing the practice to occur before the
Charles investment in pregnancy.

The modern feature is that whole peoples have chosen to reproduce at
half replacement level. In case you haven't thought about the
implications of that, that means their genes, for example, are
vanishing from the pool by a factor of 2 every 20 years or so.
Won't take long before they are gone.
I don't doubt there is a good element of 
K-strategy in human makeup, but evidently
the K-strategy programming is a bit out of whack.

I expect this was caused by our mental programming advancing much
faster, at the cultural, meme, etc level, in the presence of 
language and printing presses etc, than  evolution of the 
genome could keep up with.
My guess is evolution will catch up and correct this in a
generation or two, but in the meantime it is going to have 
substantial demographic effects.



Re: [agi] Pure reason is a disease.

2007-06-18 Thread Eric Baum

Eric Baum wrote:
 ...  I claim that it is the very fact that you are making decisions
about whether to suppress pain for higher goals that is the reason
you are conscious of pain. Your consciousness is the computation of
a top-level decision making module (or perhaps system). If you were
not making decisions weighing (nuanced) pain against higher goals,
 you would not be conscious of the pain.
 

Charles Consider a terminal cancer patient.  It's not the actual
Charles weighing that causes consciousness of pain, it's the
Charles implementation which normally allows such weighing.  This, in
Charles my opinion, *is* a design flaw.  Your original statement is a
Charles more useful implementation.  When it's impossible to do
Charles anything about the pain, one *should* be able to turn it
Charles off.  Unfortunately, this was not evolved.  After all, you
Charles might be wrong about not being able to do anything about it,
Charles so we evolved such that pain beyond a certain point cannot be
Charles ignored.  (Possibly some with advanced training and several
Charles years devoted to the mastery of sensation [e.g. yoga
Charles practitioners] may be able to ignore such pain.  I'm not
Charles convinced, and would consider experiments to obtain proof to
Charles be unethical.  And, in any case, they don't argue against my
Charles point.)

I agree it is running the program the way it is written, not
specifically the fact that you are weighing it. Sorry if the above was
confusing. What I meant was that it is the computations this decision
making module is programmed to be able to report and weigh that you
are conscious of. Those unimportant for decision making, or consigned
for whatever reason below an abstraction boundary, are not conscious.

Evolution does not produce optimal programs, only very good ones.
Also, the optimal solution to a complex problem will usually not
do what might be thought the optimal thing on
every instance. A simple example is the max-flow problem, in which the
optimal flow will usually not use the full allowed capacity along many edges,
as in the sketch below.
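
A concrete instance of the max-flow point (the four-node graph below is made up for illustration): the maximum flow is 20, the four source/sink edges are saturated, and the a->b edge, although it is allowed a capacity of 5, carries nothing at all.

/* maxflow.cpp - toy sketch, made-up 4-node graph: the maximum flow is 20,
   and the a->b edge, though allowed a capacity of 5, carries nothing. */
#include <iostream>
#include <queue>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
  const int N = 4;                      // 0 = source, 1 = a, 2 = b, 3 = sink
  int cap[N][N] = {{0}}, flow[N][N] = {{0}};
  cap[0][1] = 10; cap[0][2] = 10;       // source -> a, source -> b
  cap[1][3] = 10; cap[2][3] = 10;       // a -> sink, b -> sink
  cap[1][2] = 5;                        // a -> b: allowed, but not needed

  int total = 0;
  while (true) {                        // Edmonds-Karp: BFS for augmenting paths
    vector<int> prev(N, -1);
    prev[0] = 0;
    queue<int> q;
    q.push(0);
    while (!q.empty()) {
      int u = q.front(); q.pop();
      for (int v = 0; v < N; ++v)
        if (prev[v] == -1 && cap[u][v] - flow[u][v] > 0) {
          prev[v] = u;
          q.push(v);
        }
    }
    if (prev[3] == -1) break;           // no augmenting path left
    int push = 1 << 30;
    for (int v = 3; v != 0; v = prev[v])
      push = min(push, cap[prev[v]][v] - flow[prev[v]][v]);
    for (int v = 3; v != 0; v = prev[v]) {
      flow[prev[v]][v] += push;
      flow[v][prev[v]] -= push;
    }
    total += push;
  }

  cout << "max flow = " << total << "\n";
  for (int u = 0; u < N; ++u)
    for (int v = 0; v < N; ++v)
      if (cap[u][v] > 0)
        cout << u << " -> " << v << ": " << flow[u][v]
             << " of capacity " << cap[u][v] << "\n";
  return 0;
}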

Evolution is probably above your ethics, but obviously if you could
turn off pain, you would likely behave in ways that are less fit, from
evolution's point of view, than what the program it gave you would have done.
Recently people have discovered how to turn off pregnancy, and until
evolution catches up, they have been widely doing things that are likely
less fit than if they hadn't been able to turn it off.



Re: [agi] Pure reason is a disease.

2007-06-17 Thread Eric Baum

Josh On Saturday 16 June 2007 07:20:27 pm Matt Mahoney wrote:
 --- Bo Morgan [EMAIL PROTECTED] wrote:
 
  
  I haven't kept up with this thread.  But I wanted to counter the
 idea of a  simple ordering of painfulness.
Josh 
 Can you give me an example?
 

Josh Anyone who has played a competitive sport can tell you that
Josh there are lots of different kinds of pain, and that some are
Josh good and some are bad, and some are just obnoxious but to be
Josh overcome. You can't succeed at any level without being able to
Josh suppress pain for higher goals, but you won't last long if you
Josh ignore the wrong kind.

I claim that it is the very fact that you are making decisions about
whether to suppress pain for higher goals that is the reason you are
conscious of pain. Your consciousness is the computation of a
top-level decision making module (or perhaps system). If you were not
making decisions weighing (nuanced) pain against higher goals,
you would not be conscious of the pain.

Josh Even a simplistic modular model of mind can allow for pain
Josh signals to the various modules which can be different in kind
Josh depending on which module they are reporting to.

Josh Josh




Re: [agi] Pure reason is a disease.

2007-06-17 Thread Mike Tintner

Eric: I claim that it is the very fact that you are making decisions about
whether to suppress pain for higher goals that is the reason you are
conscious of pain. Your consciousness is the computation of a
top-level decision making module (or perhaps system). If you were not
making decisions weighing (nuanced) pain against higher goals,
you would not be conscious of the pain.

Sure, emotions are designed to pressure the conscious self. But that whole 
setup makes no sense at all, if the conscious self is merely the execution 
of a deterministic program.  It's a) unnecessary - deterministically 
programmed computers work perfectly well without having a conscious, 
executive self, and b) it's sadistic in the extreme, torturing and punishing 
a self which has supposedly gotta do what it's gotta do anyway. It's quite 
bizarre in fact.


Hence Fodor:

It's been increasingly clear, since Freud, that psychological processes of 
great complexity can be unconscious. The question then arises: what does 
consciousness add to what unconsciousness can achieve? To put it another 
way, what mental processes are there that can be performed only because the 
mind is conscious, and what does consciousness contribute to their 
performance? Nobody has an answer to this question for any mental process 
whatever. As far as anybody knows, anything that our conscious minds do, 
they could do just as well if they were unconscious. Why then did God bother 
to make consciousness. What on earth could he have had in mind?
Jerry Fodor, "You can't argue with a novel", London Review of Books, 4.3.2004


On the other hand, if the self is nondeterministically programmed, then 
everything makes sense. Then the system needs to pressure a continually 
wayward self that keeps getting carried away on particular tasks, 
reminding it with emotions of the other goals and tasks it's ignoring. Back 
to work. Back to sleep. Or back to sex.





Re: [agi] Pure reason is a disease.

2007-06-17 Thread Eric Baum

I would claim that the specific nature of any quale, such as the
various nuanced pain sensations, depends on (in fact, is the same thing
as) the code being run / the computation being performed when the quale
is perceived. I therefore don't
find it at all surprising that insects perceive pain differently, in
what might be called a diminished manner, from the way we do. 
Also human consciousness can be diminished, for example by drugs
that interfere with the usual computations.

Jiri Eric, I'm not 100% sure if someone/something else than me feels
Jiri pain, but considerable similarities between my and other humans

Jiri - architecture - [triggers of] internal and external pain
Jiri related responses - independent descriptions of subjective pain
Jiri perceptions which correspond in certain ways with the internal
Jiri body responses

Jiri make me think it's more likely than not that other humans feel
Jiri pain the way I do.  The further you move from human like
Jiri architecture the less you see the signs of pain related behavior
Jiri (e.g. the avoidance behavior). Insect keeps trying to use badly
Jiri injured body parts the same way as if they weren't injured and
Jiri (unlike in mammals) its internal responses to the injury don't
Jiri suggest that anything crazy is going on with them. And when I
Jiri look at software, I cannot find a good reason for believing it
Jiri can be in pain. The fact that we can use pain killers (and other
Jiri techniques) to get rid of pain and still remain complex systems
Jiri capable of general problem solving suggests that the pain quale
Jiri takes more than complex problem solving algorithms we are
Jiri writing for our AGI.

Jiri Regards, Jiri




Re: [agi] Pure reason is a disease.

2007-06-17 Thread Eric Baum

The difference between nondeterministic computation and deterministic
computation is a source of random numbers. It's a deep question in CS theory
whether this makes any difference-- or whether you can simulate a
nondeterministic computation using a pseudorandom number
generator. The difference is very subtle though, and of extremely
dubious importance to modeling thought. The difference is whether some
algorithm will have different worst-case properties-- using an
appropriate pseudorandom number generator would almost always be just
as good, but might not be as good in very rare worst-case
situations. It's hard to see how this is important for thought.

I have no fundamental problem with the brain being a
non-deterministic computer, accessing true quantum random bits. I
don't believe it works that way, that the physics suggests this or the
CS suggests it would make a difference, but I'm open to the idea.
However this is explicitly rejected by most philosophers who believe
in some fundamental notion of free will, as being insufficient to
capture their notion of free will. I claim non-determinism is
possible, but something *more* than non-determinism is not definable,
they wouldn't know it if they saw it, and their calls for it simply
represent a lack of understanding of the nature of computation.
They want something inscrutable to happen at the moment of decision
where free will is exercised-- but don't understand that the operation
of a Turing machine, although reducible to simple steps, is in the
whole as inscrutable as could be asked for.

Certainly, for modelling purposes, it may well be fruitful to think
about the mind as running a non-deterministic program. I'm all in
favor of that. Definitely, when building your AGI, think in terms of
randomized algorithms! (Then run it using a good pseudo-random number
generator if you like.)
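
To make "think in terms of randomized algorithms, then run them on a pseudo-random number generator" concrete, here is a toy sketch using the C++11 <random> facilities: a Monte Carlo estimate of pi comes out essentially the same whether the algorithm draws from a seeded pseudo-random generator or from the operating system's entropy source.

/* montecarlo.cpp - toy sketch (C++11 <random>): a randomized algorithm
   behaves essentially the same on pseudo-random bits as on bits from the
   system entropy source. */
#include <iostream>
#include <random>
using namespace std;

template <typename Gen>
double estimate_pi(Gen& gen, int samples) {
  uniform_real_distribution<double> unit(0.0, 1.0);
  int inside = 0;
  for (int i = 0; i < samples; ++i) {
    double x = unit(gen), y = unit(gen);
    if (x * x + y * y <= 1.0) ++inside;   // point landed inside the quarter circle
  }
  return 4.0 * inside / samples;
}

int main() {
  const int samples = 1000000;
  mt19937 prng(12345);      // deterministic pseudo-random generator, fixed seed
  random_device hw;         // nondeterministic source (where the platform provides one)
  cout << "pi, seeded PRNG:       " << estimate_pi(prng, samples) << "\n";
  cout << "pi, hardware entropy:  " << estimate_pi(hw, samples) << "\n";
  return 0;
}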

Mike Eric: I claim that it is the very fact that you are making
Mike decisions about whether to suppress pain for higher goals that is
Mike the reason you are conscious of pain. Your consciousness is the
Mike computation of a top-level decision making module (or perhaps
Mike system). If you were not making decisions weighing (nuanced) pain
Mike against higher goals, you would not be conscious of the pain.

Mike Sure, emotions are designed to pressure the conscious self. But
Mike that whole setup makes no sense at all, if the conscious self is
Mike merely the execution of a deterministic program.  It's a)
Mike unnecessary - deterministically programmed computers work
Mike perfectly well without having a conscious, executive self, and
Mike b) it's sadistic in the extreme, torturing and punishing a self
Mike which has supposedly gotta do what it's gotta do anyway. It's
Mike quite bizarre in fact.

The conscious self is just the top decision level of the program.
The qualia are necessary for the kind of decisions being made. They are
in fact the act of decision making.
As to whether it's sadistic, the question is bizarrely anthropomorphic.
It just is. The programming was created by evolution, which doesn't
care about sadism. However, I would claim it's not sadistic, it's
wonderful. Would you rather be a zombie, or feel for several decades
like you have joy and pain?

Mike Hence Fodor:

Mike It's been increasingly clear, since Freud, that psychological
Mike processes of great complexity can be unconscious. The question
Mike then arises: what does consciousness add to what unconsciousness
Mike can achieve? To put it another way, what mental processes are
Mike there that can be performed only because the mind is conscious,
Mike and what does consciousness contribute to their performance? 
Mike Nobody has an answer to this question for any mental process
Mike whatever. As far as anybody knows, anything that our conscious
Mike minds do, they could do just as well if they were
Mike unconscious. Why then did God bother to make consciousness. What
Mike on earth could he have had in mind?  Jerry Fodor, article, You
Mike can't argue with a novel, London Review of Books, 4.3.2004

Well, obviously I have an answer, so Fodor is wrong on his face ;^)

But I think the question is somewhat confused. Consciousness is just
the level of computation we can report. Most of the computation we are
unaware of, because it's hidden by abstraction boundaries. The nature of
the qualia is equivalent to the code being run. Ours happens to be
very rich, because we have powerful programs crafted by evolution so
we can make complex decisions correctly.

Mike On the other hand, if the self is nondeterministically
Mike programmed, then everything makes sense. Then the system needs
Mike to pressure a continually wayward self, that keeps getting
Mike carried away on particular tasks , reminding it with emotions of
Mike the other goals and tasks it's ignoring. Back to work. Back to
Mike sleep. Or back to sex.

Nondeterminism is a red herring here, as explained above. Why does it
matter if the computation sees true random bits or pseudo-random ones?

Re: [agi] Pure reason is a disease.

2007-06-17 Thread Charles D Hixson

Eric Baum wrote:

Josh On Saturday 16 June 2007 07:20:27 pm Matt Mahoney wrote:
  

--- Bo Morgan [EMAIL PROTECTED] wrote:
...

...
I claim that it is the very fact that you are making decisions about
whether to suppress pain for higher goals that is the reason you are
conscious of pain. Your consciousness is the computation of a
top-level decision making module (or perhaps system). If you were not
making decisions weighing (nuanced) pain against higher goals,
you would not be conscious of the pain.


Josh Even a simplistic modular model of mind can allow for pain
Josh signals to the various modules which can be different in kind
Josh depending on which module they are reporting to.

Josh Josh
  

Consider a terminal cancer patient.
It's not the actual weighing that causes consciousness of pain, it's the 
implementation which normally allows such weighing.  This, in my 
opinion, *is* a design flaw.  Your original statement is a more useful 
implementation.  When it's impossible to do anything about the pain, one 
*should* be able to turn it off.  Unfortunately, this was not 
evolved.  After all, you might be wrong about not being able to do 
anything about it, so we evolved such that pain beyond a certain point 
cannot be ignored.  (Possibly some with advanced training and several 
years devoted to the mastery of sensation [e.g. yoga practitioners] may 
be able to ignore such pain.  I'm not convinced, and would consider 
experiments to obtain proof to be unethical.  And, in any case, they 
don't argue against my point.)





Re: [agi] Pure reason is a disease.

2007-06-17 Thread Joel Pitt

On 6/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:

Consider a terminal cancer patient.
It's not the actual weighing that causes consciousness of pain, it's the
implementation which normally allows such weighing.  This, in my
opinion, *is* a design flaw.  Your original statement is a more useful
implementation.  When it's impossible to do anything about the pain, one
*should* be able to turn it off.  Unfortunately, this was not
evolved.  After all, you might be wrong about not being able to do
anything about it, so we evolved such that pain beyond a certain point
cannot be ignored.  (Possibly some with advanced training and several
years devoted to the mastery of sensation [e.g. yoga practitioners] may
be able to ignore such pain.  I'm not convinced, and would consider
experiments to obtain proof to be unethical.  And, in any case, they
don't argue against my point.)


I'm pretty convinced:

http://www.geocities.com/tcartz/sacrifice.htm

(although admittedly they could have taken some kind of drug, but I doubt it)

J



Re: [agi] Pure reason is a disease.

2007-06-16 Thread Eric Baum

Jiri,

you are blind when it comes to my pain too.
In fact, you are blind when it comes to many sensations within your own
brain. Cut your corpus callosum, and the other half will have sensations
that you are blind to. Do you think they are not there now, before you
cut it?


 If you use your brain as the read-write head in a Turing machine in
 a chinese room, you won't understand what's going on, although
 understanding may very well take place. (cf chapter 3 of
 WIT?). Similarly, if you use your brain as the r-w head in a Turing
 machine to run a program that feels pain, you won't feel pain,
 but that does not mean pain is not felt.

Jiri So, I guess, if computer programs have a secret social life,
Jiri some may wonder why their gods are so cruel. Sorry programs, we
Jiri are just blind when it comes to your pain, but things may change
Jiri thanks to fellow gods like Eric & Mark ;-).

Jiri Eric,

Jiri Any hint on how we should use our brains in order to process the
Jiri code so that we could experience the pain as closely as possible
Jiri to the way [you think] machines might be experiencing it?
Jiri What do you think might be the simplest computer system capable
Jiri of feeling pain? And when it's in pain, what are (/would be) the
Jiri symptoms?

Jiri Jiri




Re: [agi] Pure reason is a disease.

2007-06-16 Thread Jiri Jelinek

Eric,

I'm not 100% sure whether anyone or anything other than me feels pain, but
considerable similarities between me and other humans

- architecture
- [triggers of] internal and external pain related responses
- independent descriptions of subjective pain perceptions which
correspond in certain ways with the internal body responses

make me think it's more likely than not that other humans feel pain
the way I do.
The further you move from human-like architecture, the less you see the
signs of pain-related behavior (e.g. avoidance behavior). An insect
keeps trying to use badly injured body parts the same way as if they
weren't injured, and (unlike in mammals) its internal responses to the
injury don't suggest that anything crazy is going on. And
when I look at software, I cannot find a good reason for believing it
can be in pain. The fact that we can use painkillers (and other
techniques) to get rid of pain and still remain complex systems
capable of general problem solving suggests that the pain quale takes
more than the complex problem-solving algorithms we are writing for our
AGI.

Regards,
Jiri



Re: [agi] Pure reason is a disease.

2007-06-16 Thread Bo Morgan

I haven't kept up with this thread.  But I wanted to counter the idea of a 
simple ordering of painfulness.

A simple ordering of painfulness is one way to think about pain that might 
work in some simple systems, where resources are allocated in a serial 
fashion, but may not work in systems where resource allocation choices are 
not necessarily serial and mutually exclusive.

If our system has a heterarchy of goal-accomplishing resources--some of 
which imply others and some of which exclude others--then a simple 
ordering of painfulness may not be useful for thinking about 
these types of resource allocation.

--
Bo

On Sat, 16 Jun 2007, Matt Mahoney wrote:

) 
) --- Jiri Jelinek [EMAIL PROTECTED] wrote:
) 
)  Eric,
)  
)  I'm not 100% sure if someone/something else than me feels pain, but
)  considerable similarities between my and other humans
)  
)  - architecture
)  - [triggers of] internal and external pain related responses
)  - independent descriptions of subjective pain perceptions which
)  correspond in certain ways with the internal body responses
)  
)  make me think it's more likely than not that other humans feel pain
)  the way I do.
) 
) There is a simple proof for the existence of pain.  Define pain as a signal
) that an intelligent system has the goal of avoiding.  By the equivalence:
) 
)   (P => Q) <=> (not Q => not P)
) 
) if you didn't believe the pain was real, you would not try to avoid it.
) 
) (OK, that is proof by belief.  I omitted the step (you believe X => X is
) true).  If you believe it is true, that is good enough).
) 
)  The further you move from human like architecture the less you see the
)  signs of pain related behavior (e.g. the avoidance behavior). Insect
)  keeps trying to use badly injured body parts the same way as if they
)  weren't injured and (unlike in mammals) its internal responses to the
)  injury don't suggest that anything crazy is going on with them. And
)  when I look at software, I cannot find a good reason for believing it
)  can be in pain. The fact that we can use pain killers (and other
)  techniques) to get rid of pain and still remain complex systems
)  capable of general problem solving suggests that the pain quale takes
)  more than complex problem solving algorithms we are writing for our
)  AGI.
) 
) Pain is clearly measurable.  It obeys a strict ordering.  If you prefer
) penalty A to B and B to C, then you will prefer A to C.  You can estimate,
) e.g. that B is twice as painful as A and choose A twice vs. B once.  In AIXI,
) the reinforcement signal is a numeric quantity.
) 
) But how should pain be measured?
) 
) Pain results in a change in the behavior of an intelligent system.  If a
) system responds Y = f(X) to input X, followed by negative reinforcement, then
) the function f is changed to output Y with lower probability given input X. 
) The magnitude of this change is measurable in bits.  Let f be the function
) prior to negative reinforcement and f' be the function afterwards.  Then
) define
) 
)   dK(f) = K(f'|f) = K(f, f') - K(f)
) 
) where K() is algorithmic complexity.  Then dK(f) is the number of bits needed
) to describe the change from f to f'.
) 
) Arguments for:
) - Greater pain results in a greater change in behavior (consistent with animal
) experiments).
) - Greater intelligence implies greater possible pain (consistent with the
) belief that people feel more pain than insects or machines).
) 
) Argument against:
) - dK makes no distinction between negative and positive reinforcement, or
) neutral methods such as supervised learning or classical conditioning.
) 
) I don't know how to address this argument.  Earlier I posted a program that
) simulates a programmable logic gate that you train using reinforcement
) learning.  Note that you can achieve the same state using either positive or
) negative reinforcement, or by a neutral method such as setting the weights
) directly.
) 
) -- Matt Mahoney
) 
) 
) -- Matt Mahoney, [EMAIL PROTECTED]
) 
) 
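
As a rough, back-of-the-envelope reading of dK for the logic-gate program Matt refers to above (assuming its single-entry update rule, and ignoring the additive constants that come with Kolmogorov complexity): each reinforcement event changes exactly one of the four table entries, and the new value is fixed by which entry moved and in which direction, so describing f' given f takes about log2(4) + 1 = 3 bits per event. A minimal sketch of that bound:

/* dk_bound.cpp - rough sketch of an upper bound on dK for the pain.cpp
   logic gate: one reinforcement event changes exactly one of four table
   entries, and the new value follows from which entry moved and in which
   direction, so describing f' given f takes about log2(4) + 1 = 3 bits. */
#include <iostream>
#include <cmath>
using namespace std;

int main() {
  // hypothetical gate state before and after one punishment
  double before[4] = {0.5, 0.5, 0.5, 0.5};
  double after[4]  = {0.5, 0.25, 0.5, 0.5};   // entry 1 was halved toward 0

  for (int i = 0; i < 4; ++i) {
    if (before[i] == after[i]) continue;
    bool toward_zero = after[i] < before[i];
    double bits = log2(4.0) + 1.0;   // which entry (2 bits) + which direction (1 bit)
    cout << "entry " << i << " moved " << (toward_zero ? "toward 0" : "toward 1")
         << "; crude bound on dK for this event: " << bits << " bits\n";
  }
  return 0;
}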



Re: [agi] Pure reason is a disease.

2007-06-16 Thread Matt Mahoney

--- Bo Morgan [EMAIL PROTECTED] wrote:

 
 I haven't kept up with this thread.  But I wanted to counter the idea of a 
 simple ordering of painfulness.
 
 A simple ordering of painfulness is one way to think about pain that might 
 work in some simple systems, where resources are allocated in a serial 
 fashion, but may not work in systems where resource allocation choices are 
 not necessarily serial and mutually exclusive.
 
 If our system has a heterarchy of goal-accomplishing resources--some of 
 which imply others and some of which exclude others, the problem of 
 simple orderings of painfulness may be not useful for thinking about 
 these types of resource allocation.

Can you give me an example?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Pure reason is a disease.

2007-06-16 Thread J Storrs Hall, PhD
On Saturday 16 June 2007 07:20:27 pm Matt Mahoney wrote:
 
 --- Bo Morgan [EMAIL PROTECTED] wrote:
 
  
  I haven't kept up with this thread.  But I wanted to counter the idea of a 
  simple ordering of painfulness.

 
 Can you give me an example?
 

Anyone who has played a competitive sport can tell you that there are lots of 
different kinds of pain, and that some are good and some are bad, and 
some are just obnoxious but to be overcome. You can't succeed at any level 
without being able to suppress pain for higher goals, but you won't last long 
if you ignore the wrong kind.

Even a simplistic modular model of mind can allow for pain signals to the 
various modules which can be different in kind depending on which module they 
are reporting to.

Josh



Re: [agi] Pure reason is a disease.

2007-06-15 Thread Matt Mahoney
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 http://www.goertzel.org/books/spirit/uni3.htm  -- VIRTUAL ETHICS

The book chapter describes the need for ethics and cooperation in virtual
worlds, but does not address the question of whether machines can feel pain. 
If you feel pain, you will insist it is real, but that is because you are
trying to avoid it.   If you define pain as a signal that an intelligent
system has the goal of avoiding, then you have reduced the problem to defining
intelligence, because otherwise very simple systems feel pain, for example, a
thermostat when the room is too hot or cold.  Are animals intelligent?

You could, alternatively, define pain as something that has to be felt, but
that implies the requirement for a consciousness or self awareness, for which
there is no experimental test.

I am not aware of any definition that allows for pain in humans but not
machines that doesn't either make an arbitrary distinction between the two, or
deny that the human brain can be simulated by a computer.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Pure reason is a disease.

2007-06-15 Thread Jiri Jelinek

Eric,


 Right. IMO roughly the same problem when processed by a
 computer..

Why should you expect running a pain program on a computer to make you
feel pain any more than when I feel pain?


I don't. The thought was: If we don't feel pain when processing
software in our pain-enabled minds, why should we expect a computer
program to feel pain (?) ..

Regards,
Jiri Jelinek



Re: [agi] Pure reason is a disease.

2007-06-15 Thread Eric Baum

Jiri Eric,
  Right. IMO roughly the same problem when processed by a 
 computer..
 
 Why should you expect running a pain program on a computer to make
 you feel pain any more than when I feel pain?

Jiri I don't. The thought was: If we don't feel pain when processing
Jiri software in our pain-enabled minds, why should we expect a
Jiri computer program to feel pain (?) ..

Your mind is not pain enabled, it is programmed to feel specific
pain in specific ways. If you use your brain as the read-write head in
a Turing machine in a Chinese room, you won't understand what's
going on, although understanding may very well take place. (cf chapter
3 of WIT?). Similarly, if you use your brain as the r-w head in a
Turing machine to run a program that feels pain, you won't feel
pain, but that does not mean pain is not felt.

Jiri Regards, Jiri Jelinek




Re: [agi] Pure reason is a disease.

2007-06-14 Thread Mark Waser
   Oh.  You're stuck on qualia (and zombies).  I haven't seen a good 
compact argument to convince you (and e-mail is too low band-width and 
non-interactive to do one of the longer ones).  My apologies.


   Mark

- Original Message - 
From: Jiri Jelinek [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, June 13, 2007 6:26 PM
Subject: Re: [agi] Pure reason is a disease.



Mark,


VNA..can simulate *any* substrate.


I don't see any good reason for assuming that it would be anything
more than a zombie.
http://plato.stanford.edu/entries/zombies/


unless you believe that there is some other magic involved


I would not call it magic, but we might have to look beyond 4D to
figure out how qualia really work.

But OK, let's assume for a moment that certain VNA-processed
algorithms can produce qualia as a side-effect. What factors do you
expect to play an important role in making a particular quale pleasant
vs unpleasant?

Regards,
Jiri Jelinek

On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote:

Hi Jiri,

A VNA, given sufficient time, can simulate *any* substrate.  Therefore,
if *any* substrate is capable of simulating you (and thus pain), then a VNA
is capable of doing so (unless you believe that there is some other magic
involved).

Remember also, it is *not* the VNA that feels pain, it is the entity
that the VNA is simulating that is feeling  the pain.

Mark








Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

Jiri James, Frank Jackson (in Epiphenomenal Qualia) defined qualia
Jiri as ...certain features of the bodily sensations especially, but
Jiri also of certain perceptual experiences, which no amount of
Jiri purely physical information includes.. :-)

One of the biggest problems with the philosophical literature, IMO, is
that philosophers often fail to recognize that one can define
various concepts in English in such a way that they make apparent
syntactic and superficial semantic sense but are nonetheless
not actually meaningful. My usual favorite example is
"the second before the big bang", a phrase which seems to make perfect
intuitive sense, but according to most standard GR/cosmological models
simply doesn't correspond to anything.

This problem crops up in the mathematical literature sometimes too,
but mathematicians are more effective about dealing with it. There 
is an old anecdote, I'm not sure of its veracity, of someone at
Princeton defending his PhD in math, in which he had stated various
definitions and proved various things about his class of objects, and
someone attending (if memory serves it was said to be Milnor) proved
on the spot the class was the null set.

Jackson however makes an excellent foil. In What is Thought? I took a
quote of his in which he says that 10 or 15 different specific
sensations can not possibly be explained in a physicalist manner, and
argue that each of them arises from exactly the programming one would
expect evolution to generate.



Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

Jiri Matt,
 Here is a program that feels pain.

Jiri I got the logic, but no pain when processing the code in my
Jiri mind. 

This is Frank Jackson's Mary fallacy, which I also debunk in WIT? Ch
14.

Running similar code at a conscious level won't generate your
sensation of pain because it's not called by the right routines and
isn't returning results in the right format to the right calling instructions
in your homunculus program.

 Maybe you should mention in the pain.cpp description that
Jiri it needs to be processed for long enough - so whatever is gonna
Jiri process it, it will eventually get to the 'I don't feel like
Jiri doing this any more' point. ;-)) Looks like the entropy is kind
Jiri of pain to us (& to our devices) and the negative entropy
Jiri might be kind of pain to the universe. Hopefully, when (/if)
Jiri our AGI figures this out, it will not attempt to squeeze the
Jiri Universe into a single spot to solve it.

Jiri Regards, Jiri Jelinek



Re: [agi] Pure reason is a disease.

2007-06-14 Thread James Ratcliff
Do you know those 10-15 mentioned hard items?

I agree with your following thoughts on the matter.

We have to separate the mystical or spiritual from the physical, or determine 
for some reason that the physical is truly missing something, that something 
more is required for life/autonomy/feelings, 
but I don't think anyone is capable of showing that yet.

So the question is: is it good enough to act and think and reason as if you 
have experienced the feeling?

James Ratcliff

Eric Baum [EMAIL PROTECTED] wrote: 
Jiri James, Frank Jackson (in Epiphenomenal Qualia) defined qualia
Jiri as ...certain features of the bodily sensations especially, but
Jiri also of certain perceptual experiences, which no amount of
Jiri purely physical information includes.. :-)

One of the biggest problems with the philosophical literature, IMO, is
that philosophers often fail to recognize that one can define
various concepts in English in such a way that they make apparent
syntactic and superficial semantic sense, which are nonetheless
actually not  meaningful. My usual favorite example is,
the second before the big bang, a phrase which seems to make perfect
intuitive sense, but according to most standard GR/cosmological models
simply doesn't correspond to anything.

This problem crops up in the mathematical literature sometimes too,
but mathematicians are more effective about dealing with it. There 
is an old anecdote, I'm not sure of its veracity, of someone at
Princeton defending his PhD in math, in which he had stated various
definitions and proved various things about his class of objects, and
someone attending (if memory serves it was said to be Milnor) proved
on the spot the class was the null set.

Jackson however makes an excellent foil. In What is Thought? I took a
quote of his in which he says that 10 or 15 different specific
sensations can not possibly be explained in a physicalist manner, and
argue that each of them arises from exactly the programming one would
expect evolution to generate.


Jiri Mark,
 VNA..can simulate *any* substrate.

Jiri I don't see any good reason for assuming that it would be
Jiri anything more than a zombie.
Jiri http://plato.stanford.edu/entries/zombies/

Zombie is another concept which seems to make perfect intuitive sense,
but IMO is not actually well defined.

If sensations correspond to the execution of certain code in a
decision making program (the nature of the sensation depending on the
coding) then I claim that everything about sensation and consciousness
can be parsimoniously and naturally explained in a way consistent with
everything we know about CS and physics and cognitive science and
various other fields.

But in this case, a zombie that makes the same decisions as a human
would be evaluating similar code and would thus essentially have the
same pain.




___
James Ratcliff - http://falazar.com
Looking for something...
   


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

James Do you know those 10-15 mentioned hard items?  I agree with
James your following thoughts on the matter.

Actually, I saw a posting where you had the same (or at least a very
similar) quote from Jackson: pain, itchiness, startling at loud
noises, smelling a rose, etc.



Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek

Mark,


Oh.  You're stuck on qualia (and zombies)


Sort of, but not really. There is no need for qualia in order to
develop powerful AGI. I was just playing with some thoughts on
potential security implications associated with the speculation of
qualia being produced as a side-effect of certain algorithmic
complexity on VNA.

Regards,
Jiri Jelinek

On 6/14/07, Mark Waser [EMAIL PROTECTED] wrote:

Oh.  You're stuck on qualia (and zombies).  I haven't seen a good
compact argument to convince you (and e-mail is too low band-width and
non-interactive to do one of the longer ones).  My apologies.

Mark





Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek

James,


determine for some reason that the physical is truly missing something


Look at twin particles = just another example of something missing in
the world as we can see it.


Is it good enough to act and think and reason as if you have

experienced the feeling.

For AGI - yes. Why not (?).

Regards,
Jiri



Re: [agi] Pure reason is a disease.

2007-06-14 Thread J Storrs Hall, PhD
On Thursday 14 June 2007 07:19:18 am Mark Waser wrote:
 Oh.  You're stuck on qualia (and zombies).  I haven't seen a good 
 compact argument to convince you (and e-mail is too low band-width and 
 non-interactive to do one of the longer ones).  My apologies.

The best one-liner I know is, "Prove to me that *you're* not a zombie, and we 
can talk about it."

Alternatively, *I'm* a zombie, so why shouldn't my robot be one too?

Josh



Re: [agi] Pure reason is a disease.

2007-06-14 Thread Mark Waser

I was just playing with some thoughts on
potential security implications associated with the speculation of
qualia being produced as a side-effect of certain algorithmic
complexity on VNA.


Which is, in many ways, pretty similar to my assumption that consciousness 
will be produced as a side-effect (or maybe a necessary cause) of 
intelligence on any substrate designed for intelligence and complex enough 
to support it (and that would indeed have potential security 
implications).





Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

Jiri Eric,
 Running similar code at a conscious level won't generate your
   ^^   
 
The key word here was your.

Jiri sensation of pain because its not called by the right routines
Jiri and returning the right format results to the right calling
Jiri instructions in your homunculus program.

Jiri Right. IMO roughly the same problem when processed by a
Jiri computer..

Why should you expect running a pain program on a computer to make you
feel pain any more than when I feel pain?

Jiri Regards, Jiri Jelinek




Re: [agi] Pure reason is a disease.

2007-06-14 Thread Lukasz Stafiniak

On 6/14/07, Matt Mahoney [EMAIL PROTECTED] wrote:


I don't believe this addresses the issue of machine pain.  Ethics is a complex
function which evolves to increase the reproductive success of a society, for
example, by banning sexual practices that don't lead to reproduction.  Ethics
also evolves to ban harm to other members of the group, but not to non-members
(e.g. war is allowed), and not to other species (hunting is allowed), except
to the extent that such actions would harm the group.

There is no precedent for ethics with regard to machines.  We protect machines
only to the extent that harming them harms the owner.  Nevertheless, I think
your argument about pain being related to complexity relates to the more
general principle of protecting that which resembles a human, even if that
resemblance is superficial or based on emotion.  I was reminded of this when I
was playing Grand Theft Auto III.  Besides carjacking, murder, and assorted
mayhem, the game allows you to pick up prostitutes.  Afterwards, the game
gives you the option of getting your money back by beating her to death, but I
declined.  I felt empathy for a video game character.


http://www.goertzel.org/books/spirit/uni3.htm  -- VIRTUAL ETHICS



Re: [agi] Pure reason is a disease.

2007-06-13 Thread James Ratcliff
Which compiler did you use for Human OS V1.0?
Didn't realize we had a CPP compiler out already.

Jiri Jelinek [EMAIL PROTECTED] wrote: Matt,

Here is a program that feels pain.

I got the logic, but no pain when processing the code in my mind.
Maybe you should mention in the pain.cpp description that it needs to
be processed for long enough - so whatever is gonna process it, it
will eventually get to the 'I don't feel like doing this any more'
point. ;-)) Looks like the entropy is kind of pain to us (& to our
devices) and the negative entropy might be kind of pain to the
universe. Hopefully, when (/if) our AGI figures this out, it will not
attempt to squeeze the Universe into a single spot to solve it.

Regards,
Jiri Jelinek

On 6/11/07, Matt Mahoney  wrote:
 Here is a program that feels pain.  It is a simulation of a 2-input logic gate
 that you train by reinforcement learning.  It feels in the sense that it
 adjusts its behavior to avoid negative reinforcement from the user.


 /* pain.cpp - A program that can feel pleasure and pain.

 The program simulates a programmable 2-input logic gate.
 You train it by reinforcement conditioning.  You provide a pair of
 input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
 output is correct, you reward it by entering +.  If it is wrong,
 you punish it by entering -.  You can program it this way to
 implement any 2-input logic function (AND, OR, XOR, NAND, etc).
 */

 #include <iostream>
 #include <cstdlib>
 using namespace std;

 int main() {
   // probability of output 1 given input 00, 01, 10, 11
   double wt[4]={0.5, 0.5, 0.5, 0.5};

   while (1) {
     cout << "Please input 2 bits (00, 01, 10, 11): ";
     char b1, b2;
     cin >> b1 >> b2;
     int input = (b1-'0')*2+(b2-'0');
     if (input >= 0 && input < 4) {
       int response = double(rand())/RAND_MAX < wt[input];
       cout << "Output = " << response
            << ".  Please enter + if right, - if wrong: ";
       char reinforcement;
       cin >> reinforcement;
       if (reinforcement == '+')
         cout << "aah! :-)\n";
       else if (reinforcement == '-')
         cout << "ouch! :-(\n";
       else
         continue;
       int adjustment = (reinforcement == '-') ^ response;
       if (adjustment == 0)
         wt[input] /= 2;
       else
         wt[input] = 1 - (1 - wt[input])/2;
     }
   }
 }




___
James Ratcliff - http://falazar.com
Looking for something...
   


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Matt Mahoney
--- James Ratcliff [EMAIL PROTECTED] wrote:

 Which compiler did you use for Human OS V1.0?
 Didn't realize we had a CPP compiler out already.

The purpose of my little pain-feeling program is to point out some of the
difficulties in applying ethics-for-humans to machines.  The program has two
characteristics that we normally associate with pain in humans.  First, it
expresses pain (by saying Ouch! and making a sad face), and second and more
importantly, it has a goal of avoiding pain.  Its behavior is consistent with
learning by negative reinforcement in animals.  Given an input and response
followed by negative reinforcement, it is less likely to output the same
response to that input in the future.

One might question whether animals feel pain, but I think most people will
agree that the negative reinforcement stimuli typically used on animals, such as
electric shock, are painful in humans, and further, that any type of pain
signal in humans elicits a behavioral response consistent with negative
reinforcement (i.e. avoidance).

So now for the hard question.  Is it possible for an AGI or any other machine
to experience pain?

If yes, then how do you define pain in a machine?

If no, then what makes the human brain different from a computer? (assuming
you believe that humans can feel pain)


 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:


If yes, then how do you define pain in a machine?


A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal in
going through the pain).



Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/13/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:

 If yes, then how do you define pain in a machine?

A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal in
going through the pain).


To clarify:
(1) there exists a person empathizing with that machine
(2) this person would avoid putting the machine into the state of pain



Re: [agi] Pure reason is a disease.

2007-06-13 Thread Jiri Jelinek

Mark,


VNA..can simulate *any* substrate.


I don't see any good reason for assuming that it would be anything
more than a zombie.
http://plato.stanford.edu/entries/zombies/


unless you believe that there is some other magic involved


I would not call it magic, but we might have to look beyond 4D to
figure out how qualia really work.

But OK, let's assume for a moment that certain VNA-processed
algorithms can produce qualia as a side-effect. What factors do you
expect to play an important role in making a particular quale pleasant
vs unpleasant?

Regards,
Jiri Jelinek

On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote:

Hi Jiri,

A VNA, given sufficient time, can simulate *any* substrate.  Therefore,
if *any* substrate is capable of simulating you (and thus pain), then a VNA
is capable of doing so (unless you believe that there is some other magic
involved).

Remember also, it is *not* the VNA that feels pain, it is the entity
that the VNA is simulating that is feeling  the pain.

Mark




Re: [agi] Pure reason is a disease.

2007-06-13 Thread Matt Mahoney

--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:

 On 6/13/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
  On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  
   If yes, then how do you define pain in a machine?
  
  A pain in a machine is the state in the machine that a person
  empathizing with the machine would avoid putting the machine into,
  other things being equal (that is, when there is no higher goal in
  going through the pain).
 
 To clarify:
 (1) there exists a person empathizing with that machine
 (2) this person would avoid putting the machine into the state of pain

I would avoid deleting all the files on my hard disk, but it has nothing to do
with pain or empathy.

Let us separate the questions of pain and ethics.  There are two independent
questions.

1. What mental or computational states correspond to pain?
2. When is it ethical to cause a state of pain?

One possible definition of pain is any signal that an intelligent system has
the goal of avoiding, for example,

- negative reinforcement in any animal capable of reinforcement learning.
- the negative of the reward signal received by an AIXI agent.
- excess heat or cold to a thermostat.
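
To make that reading concrete, here is a minimal sketch in C++ (in the spirit
of pain.cpp; the names, numbers and update rule are illustrative only, not
taken from any existing system): an agent for which "pain" is nothing more
than a negative reward it is built to avoid.

/* pain_signal.cpp - illustrative sketch only. */
#include <iostream>
#include <map>
using namespace std;

struct Agent {
  map<int, double> value;   // estimated value of each action
  double alpha = 0.5;       // learning rate

  // A negative reward plays the role of pain: actions that produced it
  // are valued lower and therefore chosen less often.
  void reinforce(int action, double reward) {
    value[action] += alpha * (reward - value[action]);
  }

  int prefer(int a, int b) {
    return value[a] >= value[b] ? a : b;
  }
};

int main() {
  Agent agent;
  agent.reinforce(0, -1.0);  // action 0 was "painful"
  agent.reinforce(1, +1.0);  // action 1 was "pleasant"
  cout << "preferred action: " << agent.prefer(0, 1) << "\n";  // prints 1
}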

I think pain by any reasonable definition exists independently of ethics. 
Ethics is more complex.  Humans might decide, for example, that it is OK to
inflict pain on a mosquito but not a butterfly, or a cow but not a cat, or a
programmable logic gate but not a video game character.  The issue here is not
pain, but our perception of resemblance to humans.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/14/07, Matt Mahoney [EMAIL PROTECTED] wrote:


I would avoid deleting all the files on my hard disk, but it has nothing to do
with pain or empathy.

Let us separate the questions of pain and ethics.  There are two independent
questions.

1. What mental or computational states correspond to pain?
2. When is it ethical to cause a state of pain?


There is a gradation:
- pain as negative reinforcement
- pain as an emotion
- pain as a feeling

When you ask whether something feels pain, you are not asking whether pain
is an adequate description of some aspect of that thing or person X, but
whether X can be said to feel at all. That depends on the complexity of X,
and this complexity is what ties the question to ethics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-12 Thread Jiri Jelinek

Matt,


Here is a program that feels pain.


I got the logic, but no pain when processing the code in my mind.
Maybe you should mention in the pain.cpp description that it needs to
be processed for long enough - so whatever is gonna process it, it
will eventually get to the 'I don't feel like doing this any more'
point. ;-)) Looks like entropy is a kind of pain to us (& to our
devices) and negative entropy might be a kind of pain to the
universe. Hopefully, when (/if) our AGI figures this out, it will not
attempt to squeeze the Universe into a single spot to solve it.

Regards,
Jiri Jelinek

On 6/11/07, Matt Mahoney [EMAIL PROTECTED] wrote:

Here is a program that feels pain.  It is a simulation of a 2-input logic gate
that you train by reinforcement learning.  It feels in the sense that it
adjusts its behavior to avoid negative reinforcement from the user.


/* pain.cpp - A program that can feel pleasure and pain.

The program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you reward it by entering +.  If it is wrong,
you punish it by entering -.  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
    cout << "Please input 2 bits (00, 01, 10, 11): ";
    char b1, b2;
    cin >> b1 >> b2;
    int input = (b1-'0')*2+(b2-'0');
    if (input >= 0 && input < 4) {
      int response = double(rand())/RAND_MAX < wt[input];
      cout << "Output = " << response
        << ".  Please enter + if right, - if wrong: ";
      char reinforcement;
      cin >> reinforcement;
      if (reinforcement == '+')
        cout << "aah! :-)\n";
      else if (reinforcement == '-')
        cout << "ouch! :-(\n";
      else
        continue;
      int adjustment = (reinforcement == '-') ^ response;
      if (adjustment == 0)
        wt[input] /= 2;
      else
        wt[input] = 1 - (1 - wt[input])/2;
    }
  }
}
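
Note, for what it's worth, how the update rule behaves: adjustment is 1 when
output 1 should become more likely and 0 when it should become less likely,
and each reinforcement halves the remaining distance to that target.  Starting
from wt[input] = 0.5, rewarding an output of 1 gives 1 - (1 - 0.5)/2 = 0.75,
rewarding it again gives 0.875, while punishing an output of 1 from the same
starting point halves it to 0.25.  After a handful of consistent
reinforcements the gate reproduces the trained truth table almost
deterministically.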


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
Two different responses to this type of argument.

Once you simulate something to the point that we can't tell the difference 
between it and the original in any way, then it IS that something for most all 
intents and purposes, as far as the tests you have go.
If it walks like a human, talks like a human, then for all those aspects it is 
a human.

Second, to say it CANNOT be programmed, you must define IT much more closely.  
For cutaneous pain in humans, it appears to me that we have pain sensors, so 
if we are being pricked on the arm, the nerves there send the message to the 
brain, and the brain reacts to it there.

We can recreate this fairly easily using a VNA with some robotic touch sensors, 
saying that past this threshold it becomes painful and can be 
damaging, and we will send a message to the CPU.

If there is nothing magical about the pain sensation, then there is no reason 
we can't recreate it.
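
A minimal sketch of that threshold idea (the sensor interface and the numbers
below are made up purely for illustration):

/* touch_pain.cpp - illustrative sketch only.
   A pressure reading above a threshold is flagged as "pain" and
   reported to the controller; nothing more is claimed. */
#include <iostream>
using namespace std;

const double PAIN_THRESHOLD = 5.0;   // arbitrary units

// Stand-in for polling a robotic touch sensor; hard-coded here so the
// sketch stays self-contained.
double readPressure() { return 7.2; }

int main() {
  double p = readPressure();
  if (p > PAIN_THRESHOLD)
    cout << "pain signal: pressure " << p
         << " exceeds threshold, notify CPU / withdraw\n";
  else
    cout << "pressure " << p << " is within the normal range\n";
}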

James Ratcliff


Jiri Jelinek [EMAIL PROTECTED] wrote: Mark,

Again, simulation - sure, why not. On VNA (Neumann's architecture) - I
don't think so - IMO not advanced enough to support qualia. Yes, I do
believe qualia exists (= I do not agree with all Dennett's views, but
I think his views are important to consider.) I wrote tons of pro
software (using many languages) for a bunch of major projects but I
have absolutely no idea how to write some kind of feelPain(intensity)
fn that could cause real pain sensation to an AI system running on my
(VNA based) computer. BTW I often do the test driven development so I
would probably first want to write a test procedure for real pain. If
you can write at least a pseudo-code for that then let me know. When
talking about VNA, this is IMO a pure fiction. And even *IF* it
actually was somehow possible, I don't think it would be clever to
allow adding such a code to our AGI. In VNA-processing, there is no
room for subjective feelings. VNA = cold data & cold logic (no
matter how complex your algorithms get) because the CPU (with its set
of primitive instructions) - just like the other components - was not
designed to handle anything more.

Jiri

On 6/10/07, Mark Waser  wrote:


  For feelings - like pain - there is a problem. But I don't feel like
  spending much time explaining it little by little through many emails.
  There are books and articles on this topic.

 Indeed there are and they are entirely unconvincing.  Anyone who writes
 something can get it published.

 If you can't prove that you're not a simulation, then you certainly can't
 prove that pain that really *hurts* isn't possible.  I'll just simply
 argue that you *are* a simulation, that you do experience pain that really
 *hurts*, and therefore, my point is proved.  I'd say that the burden of
 proof is upon you or anyone else who makes claims like Why you can't make
 a computer that feels pain.

 I've read all of Dennett's books.  I would argue that there are far more
 people with credentials who disagree with him than agree.  His arguments
 really don't boil down to anything better than I don't see how it happens
 or how to do it so it isn't possible.

 I still haven't seen you respond to the simulation argument (which I feel
 *is* the stake through Dennett's argument) but if you want to stop debating
 without doing so that's certainly cool.

 Mark
  This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



___
James Ratcliff - http://falazar.com
Looking for something...
   
-
Choose the right car based on your needs.  Check out Yahoo! Autos new Car 
Finder tool.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Jiri Jelinek

James,

Frank Jackson (in Epiphenomenal Qualia) defined qualia as
...certain features of the bodily sensations especially, but also of
certain perceptual experiences, which no amount of purely physical
information includes.. :-)


If it walks like a human, talks like a human, then for all those

aspects it is a human

If it feels like a human and if Frank is correct :-) then the system
may, under certain circumstances, want to modify given goals based on
preferences that could not be found in its memory (nor in CPU
registers etc.). So, with some assumptions, we might be able to write
some code for the feelPainTest procedure, but no idea for the actual
feelPain procedure.
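
For the behavioral half, one hypothetical sketch of a feelPainTest (it can
only check avoidance, which is exactly why it says nothing about the actual
feelPain; every name in it is made up):

/* feel_pain_test.cpp - illustrative sketch only.
   A purely behavioral test: given a choice between an option previously
   paired with a noxious stimulus and a neutral one, does the system
   avoid the former?  Passing says nothing about whether anything is felt. */
#include <iostream>
#include <functional>
using namespace std;

// The system under test is a black box that picks one of two options.
typedef function<int(int, int)> System;

bool feelPainTest(System chooser, int noxiousOption, int neutralOption) {
  return chooser(noxiousOption, neutralOption) == neutralOption;
}

int main() {
  // Toy system that always avoids its first (noxious) option.
  System s = [](int noxious, int neutral) { return neutral; };
  cout << (feelPainTest(s, 0, 1) ? "avoids the stimulus\n"
                                 : "does not avoid it\n");
}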

Jiri

On 6/11/07, James Ratcliff [EMAIL PROTECTED] wrote:

Two different responses to this type of argument.

Once you simulate something to the point that we can't tell the difference
between it and the original in any way, then it IS that something for most all
intents and purposes, as far as the tests you have go.
If it walks like a human, talks like a human, then for all those aspects it
is a human.

Second, to say it CANNOT be programmed, you must define IT much more
closely.  For cutaneous pain in humans, it appears to me that we have pain
sensors, so if we are being pricked on the arm, the nerves there send the
message to the brain, and the brain reacts to it there.

We can recreate this fairly easily using a VNA with some robotic touch sensors,
saying that past this threshold it becomes painful and can be
damaging, and we will send a message to the CPU.

If there is nothing magical about the pain sensation, then there is no
reason we can't recreate it.

James Ratcliff


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
Below is a program that can feel pain.  It is a simulation of a programmable
2-input logic gate that you train using reinforcement conditioning.


/* pain.cpp

This program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of 
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you reward it by entering +.  If it is wrong,
you punish it by entering -.  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
    cout << "Please input 2 bits (00, 01, 10, 11): ";
    char b1, b2;
    cin >> b1 >> b2;
    int input = (b1-'0')*2+(b2-'0');
    if (input >= 0 && input < 4) {
      int response = double(rand())/RAND_MAX < wt[input];
      cout << "Output = " << response
        << ".  Please enter + if right, - if wrong: ";
      char reinforcement;
      cin >> reinforcement;
      if (reinforcement == '+')
        cout << "aah! :-)\n";
      else if (reinforcement == '-')
        cout << "ouch! :-(\n";
      else
        continue;
      int adjustment = (reinforcement == '-') ^ response;
      if (adjustment == 0)
        wt[input] /= 2;
      else
        wt[input] = 1 - (1 - wt[input])/2;
    }
  }
}



--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Mark,
 
 Again, simulation - sure, why not. On VNA (Neumann's architecture) - I
 don't think so - IMO not advanced enough to support qualia. Yes, I do
 believe qualia exists (= I do not agree with all Dennett's views, but
 I think his views are important to consider.) I wrote tons of pro
 software (using many languages) for a bunch of major projects but I
 have absolutely no idea how to write some kind of feelPain(intensity)
 fn that could cause real pain sensation to an AI system running on my
 (VNA based) computer. BTW I often do the test driven development so I
 would probably first want to write a test procedure for real pain. If
 you can write at least a pseudo-code for that then let me know. When
 talking about VNA, this is IMO a pure fiction. And even *IF* it
 actually was somehow possible, I don't think it would be clever to
 allow adding such a code to our AGI. In VNA-processing, there is no
 room for subjective feelings. VNA = cold data & cold logic (no
 matter how complex your algorithms get) because the CPU (with its set
 of primitive instructions) - just like the other components - was not
 designed to handle anything more.
 
 Jiri
 
 On 6/10/07, Mark Waser [EMAIL PROTECTED] wrote:
 
 
   For feelings - like pain - there is a problem. But I don't feel like
   spending much time explaining it little by little through many emails.
   There are books and articles on this topic.
 
  Indeed there are and they are entirely unconvincing.  Anyone who writes
  something can get it published.
 
  If you can't prove that you're not a simulation, then you certainly can't
  prove that pain that really *hurts* isn't possible.  I'll just simply
  argue that you *are* a simulation, that you do experience pain that
 really
  *hurts*, and therefore, my point is proved.  I'd say that the burden of
  proof is upon you or anyone else who makes claims like Why you can't
 make
  a computer that feels pain.
 
  I've read all of Dennett's books.  I would argue that there are far more
  people with credentials who disagree with him than agree.  His arguments
  really don't boil down to anything better than I don't see how it happens
  or how to do it so it isn't possible.
 
  I still haven't seen you respond to the simulation argument (which I feel
  *is* the stake through Dennett's argument) but if you want to stop
 debating
  without doing so that's certainly cool.
 
  Mark


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


RE: [agi] Pure reason is a disease.

2007-06-11 Thread Derek Zahn
Matt Mahoney writes: Below is a program that can feel pain. It is a simulation 
of a programmable 2-input logic gate that you train using reinforcement 
conditioning.
Is it ethical to compile and run this program?
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
Here is a program that feels pain.  It is a simulation of a 2-input logic gate
that you train by reinforcement learning.  It feels in the sense that it
adjusts its behavior to avoid negative reinforcement from the user.


/* pain.cpp - A program that can feel pleasure and pain.

The program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of 
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you reward it by entering +.  If it is wrong,
you punish it by entering -.  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
    cout << "Please input 2 bits (00, 01, 10, 11): ";
    char b1, b2;
    cin >> b1 >> b2;
    int input = (b1-'0')*2+(b2-'0');
    if (input >= 0 && input < 4) {
      int response = double(rand())/RAND_MAX < wt[input];
      cout << "Output = " << response
        << ".  Please enter + if right, - if wrong: ";
      char reinforcement;
      cin >> reinforcement;
      if (reinforcement == '+')
        cout << "aah! :-)\n";
      else if (reinforcement == '-')
        cout << "ouch! :-(\n";
      else
        continue;
      int adjustment = (reinforcement == '-') ^ response;
      if (adjustment == 0)
        wt[input] /= 2;
      else
        wt[input] = 1 - (1 - wt[input])/2;
    }
  }
}


--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 James,
 
 Frank Jackson (in Epiphenomenal Qualia) defined qualia as
 ...certain features of the bodily sensations especially, but also of
 certain perceptual experiences, which no amount of purely physical
 information includes.. :-)
 
 If it walks like a human, talks like a human, then for all those
 aspects it is a human
 
 If it feels like a human and if Frank is correct :-) then the system
 may, under certain circumstances, want to modify given goals based on
 preferences that could not be found in its memory (nor in CPU
 registers etc.). So, with some assumptions, we might be able to write
 some code for the feelPainTest procedure, but no idea for the actual
 feelPain procedure.
 
 Jiri
 
 On 6/11/07, James Ratcliff [EMAIL PROTECTED] wrote:
 Two different responses to this type of argument.
 
 Once you simulate something to the point that we can't tell the difference
 between it and the original in any way, then it IS that something for most
 all intents and purposes, as far as the tests you have go.
 If it walks like a human, talks like a human, then for all those aspects it
 is a human.
 
 Second, to say it CANNOT be programmed, you must define IT much more
 closely.  For cutaneous pain in humans, it appears to me that we have pain
 sensors, so if we are being pricked on the arm, the nerves there send the
 message to the brain, and the brain reacts to it there.
 
 We can recreate this fairly easily using a VNA with some robotic touch
 sensors, saying that past this threshold it becomes painful and can be
 damaging, and we will send a message to the CPU.
 
 If there is nothing magical about the pain sensation, then there is no
 reason we can't recreate it.
 
  James Ratcliff
 



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 03:22:04 pm Matt Mahoney wrote:
 /* pain.cpp - A program that can feel pleasure and pain.
 ...

Ouch! :-)

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


RE: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
--- Derek Zahn [EMAIL PROTECTED] wrote:

 Matt Mahoney writes: Below is a program that can feel pain. It is a
 simulation of a programmable 2-input logic gate that you train using
 reinforcement conditioning.

 Is it ethical to compile and run this program?

Well, that is a good question.  Ethics is very complex.  It is not just a
question of inflicting pain.  Is it ethical to punish a child for stealing? 
Is it ethical to swat a fly?  Is it ethical to give people experimental drugs?

(Apologies for posting the program twice.  My first post was delayed several
hours).


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
And here's the human pseudocode:

1. Hold Knife above flame until red.
2. Place knife on arm.
3. a. Accept Pain sensation 
b. Scream or respond as necessary
4. Press knife harder into skin.
5. Goto 3, until 6.
6. Pass out from pain



Matt Mahoney [EMAIL PROTECTED] wrote: Below is a program that can feel pain.  
It is a simulation of a programmable
2-input logic gate that you train using reinforcement conditioning.


/* pain.cpp

This program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of 
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you reward it by entering +.  If it is wrong,
you punish it by entering -.  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
    cout << "Please input 2 bits (00, 01, 10, 11): ";
    char b1, b2;
    cin >> b1 >> b2;
    int input = (b1-'0')*2+(b2-'0');
    if (input >= 0 && input < 4) {
      int response = double(rand())/RAND_MAX < wt[input];
      cout << "Output = " << response
        << ".  Please enter + if right, - if wrong: ";
      char reinforcement;
      cin >> reinforcement;
      if (reinforcement == '+')
        cout << "aah! :-)\n";
      else if (reinforcement == '-')
        cout << "ouch! :-(\n";
      else
        continue;
      int adjustment = (reinforcement == '-') ^ response;
      if (adjustment == 0)
        wt[input] /= 2;
      else
        wt[input] = 1 - (1 - wt[input])/2;
    }
  }
}



--- Jiri Jelinek  wrote:

 Mark,
 
 Again, simulation - sure, why not. On VNA (Neumann's architecture) - I
 don't think so - IMO not advanced enough to support qualia. Yes, I do
 believe qualia exists (= I do not agree with all Dennett's views, but
 I think his views are important to consider.) I wrote tons of pro
 software (using many languages) for a bunch of major projects but I
 have absolutely no idea how to write some kind of feelPain(intensity)
 fn that could cause real pain sensation to an AI system running on my
 (VNA based) computer. BTW I often do the test driven development so I
 would probably first want to write a test procedure for real pain. If
 you can write at least a pseudo-code for that then let me know. When
 talking about VNA, this is IMO a pure fiction. And even *IF* it
 actually was somehow possible, I don't think it would be clever to
 allow adding such a code to our AGI. In VNA-processing, there is no
 room for subjective feelings. VNA = cold data & cold logic (no
 matter how complex your algorithms get) because the CPU (with its set
 of primitive instructions) - just like the other components - was not
 designed to handle anything more.
 
 Jiri
 
 On 6/10/07, Mark Waser  wrote:
 
 
   For feelings - like pain - there is a problem. But I don't feel like
   spending much time explaining it little by little through many emails.
   There are books and articles on this topic.
 
  Indeed there are and they are entirely unconvincing.  Anyone who writes
  something can get it published.
 
  If you can't prove that you're not a simulation, then you certainly can't
  prove that pain that really *hurts* isn't possible.  I'll just simply
  argue that you *are* a simulation, that you do experience pain that
 really
  *hurts*, and therefore, my point is proved.  I'd say that the burden of
  proof is upon you or anyone else who makes claims like Why you can't
 make
  a computer that feels pain.
 
  I've read all of Dennett's books.  I would argue that there are far more
  people with credentials who disagree with him than agree.  His arguments
  really don't boil down to anything better than I don't see how it happens
  or how to do it so it isn't possible.
 
  I still haven't seen you respond to the simulation argument (which I feel
  *is* the stake through Dennett's argument) but if you want to stop
 debating
  without doing so that's certainly cool.
 
  Mark


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



___
James Ratcliff - http://falazar.com
Looking for something...
   
-
Building a website is a piece of cake. 
Yahoo! Small Business gives you all the tools to get online.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-10 Thread Jiri Jelinek

Mark,


Could you specify some of those good reasons (i.e. why a sufficiently

large/fast enough von Neumann architecture isn't sufficient substrate
for a sufficiently complex mind to be conscious and feel -- or, at
least, to believe itself to be conscious and believe itself to feel

For being [/believing to be] conscious - no - I don't see a problem
with coding that.

For feelings - like pain - there is a problem. But I don't feel like
spending much time explaining it little by little through many emails.
There are books and articles on this topic. Let me just emphasize that
I'm talking about pain that really *hurts* (note: with some drugs, you
can alter the sensation of pain so that patients still report feeling
pain of the same intensity - they just no longer mind it). There are
levels of the qualitative aspect of pain and other things which make
it more difficult to really cover the topic well. Start with Dennett's
book Why you can't make a computer that feels pain if you are really
interested. BTW some argue about this stuff for years (just like those
never ending AI definition exchanges). I guess we better spend more
time with more practical AGI stuff (like KR, UI & problem solving).

Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-10 Thread Mark Waser
 For feelings - like pain - there is a problem. But I don't feel like
 spending much time explaining it little by little through many emails.
 There are books and articles on this topic. 

Indeed there are and they are entirely unconvincing.  Anyone who writes 
something can get it published.

If you can't prove that you're not a simulation, then you certainly can't prove 
that pain that really *hurts* isn't possible.  I'll just simply argue that 
you *are* a simulation, that you do experience pain that really *hurts*, and 
therefore, my point is proved.  I'd say that the burden of proof is upon you or 
anyone else who makes claims like Why you can't make a computer that feels 
pain.

I've read all of Dennett's books.  I would argue that there are far more people 
with credentials who disagree with him than agree.  His arguments really don't 
boil down to anything better than I don't see how it happens or how to do it 
so it isn't possible.

I still haven't seen you respond to the simulation argument (which I feel *is* 
the stake through Dennett's argument) but if you want to stop debating without 
doing so that's certainly cool.

Mark

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-07 Thread J Storrs Hall, PhD
Yep. It's clear that modelling others in a social context was at least one of 
the strong evolutionary drivers to human-level cognition. Reciprocal altruism 
(in, e.g. bats) is strongly correlated with increased brain size (compared to 
similar animals without it, e.g. other bats).

It's clearly to our advantage to be able to model others, and this gives us at 
least the mechanism to model ourselves. The evolutionary theorist (cf. 
Pinker) will instantly think in terms of an arms race -- while others are 
trying to figure us out, we're trying to fool them. But what's less generally 
appreciated is that there is a possibly even stronger counter-force in the 
value of being easy to understand (cf Axelrod's personality traits). In 
that case you may even form a self-model and then use it to guide your 
further actions rather than its merely being a description of them. 

Josh



On Wednesday 06 June 2007 09:08:40 pm Samantha Atkins wrote:
 That matches my intuitions mostly.  If the system must model itself in  
 the context of the domain it operates upon and especially if it must  
 model perceptions of itself from the point of view of other actors in  
 that domain, then I think it very likely that it can become  
 conscious / self-aware.   It might be necessary that it takes a  
 requirement to explain itself to other beings with self-awareness to  
 kick it off.   I am not sure if some of the feral children studies  
 lend some support to such.  If a human being, which we know (ok, no  
 quibbles for a moment) is conscious / self-aware,  has less self- 
 awareness without significant interaction with other humans then this  
 may say something interesting about how and why self-awareness develops.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-06 Thread Joel Pitt

On 6/3/07, Jiri Jelinek [EMAIL PROTECTED] wrote:

Further, prove that pain (or more preferably sensation in general) isn't an
emergent property of sufficient complexity.

Talking about Neumann's architecture - I don't see how could increases
in complexity of rules used for switching Boolean values lead to new
sensations. It can represent a lot in a way that can be very
meaningful to us in terms of feelings, but from the system's
perspective it's nothing more than a bunch of 1s and 0s.


In a similar vein I could argue that humans don't feel anything
because they are simply made of (sub)atomic particles. Why should we
believe that matter can feel?

It's all about the pattern, not the substrate. And if a feeling AGI
requires quantum mechanics (I don't believe it does) then maybe we'll
just need to wait for quantum computing.

J

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-06 Thread Samantha  Atkins


On Jun 5, 2007, at 9:17 AM, J Storrs Hall, PhD wrote:


On Tuesday 05 June 2007 10:51:54 am Mark Waser wrote:
It's my belief/contention that a sufficiently complex mind will be conscious
and feel -- regardless of substrate.


Sounds like Mike the computer in Moon is a Harsh Mistress (Heinlein). Note,
btw, that Mike could be programmed in Loglan (predecessor of Lojban).

I think a system can get arbitrarily complex without being conscious --
consciousness is a specific kind of model-based, summarizing, self-monitoring
architecture.


That matches my intuitions mostly.  If the system must model itself in  
the context of the domain it operates upon and especially if it must  
model perceptions of itself from the point of view of other actors in  
that domain, then I think it very likely that it can become  
conscious / self-aware.   It might be necessary that it takes a  
requirement to explain itself to other beings with self-awareness to  
kick it off.   I am not sure if some of the feral children studies  
lend some support to such.  If a human being, which we know (ok, no  
quibbles for a moment) is conscious / self-aware,  has less self- 
awareness without significant interaction with other humans then this  
may say something interesting about how and why self-awareness develops.




- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
Your brain can be simulated on a large/fast enough von Neumann 
architecture.

From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to human brain. In our
brains, information not only is and moves but also feels.


It's my belief/contention that a sufficiently complex mind will be conscious 
and feel -- regardless of substrate.



It's meaningless to take action without feelings - you are practically
dead - there is just some mechanical device trying to make moves in
your way of thinking. But thinking is not our goal. Feeling is. The
goal is to not have goal(s) and safely feel the best forever.


Feel the best forever is a hard-wired goal.  What makes you feel good are 
hard-wired goals in some cases and trained goals in other cases.  As I've 
said before, I believe that human beings only have four primary goals (being 
safe, feeling good, looking good, and being right).  The latter two, to me, 
are clearly sub-goals but it's equally clear that some people have 
mistakenly raised them to the level of primary goals.


If you can't, then you must either concede that feeling pain is possible 
for a

simulated entity..

It is possible. There are just good reasons to believe that it takes
more than a bunch of semiconductor based slots storing 1s and 0s.


Could you specify some of those good reasons (i.e. why a sufficiently 
large/fast enough von Neumann architecture isn't sufficient substrate for a 
sufficiently complex mind to be conscious and feel -- or, at least, to 
believe itself to be conscious and believe itself to feel and isn't that a 
nasty thought twist? :-)?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread James Ratcliff
To get any further with feelings you again have to have a better definition 
and examples of what you are dealing with.

In humans, most feelings and emotions are brought about by chemical changes 
in the body, yes?  Then from there it becomes knowledge in the brain, which we 
use to make decisions and react upon.

Is there more to it than that?  (simplified overview)

Simply replacing the chemical parts with machine code easily allows an AGI to 
feel most of these feelings.  Mechanical sensors would allow a robot to 
feel/sense being touched or hit, and a brain could react upon this.  Even a 
simulated AGI virtual agent could and does indicate a preference for Not being 
shot or being in pain, and for running away, and could easily show a 
preference/liking for certain faces or persons it finds 'appealing'.
   This can all be done using algorithms and learned / preferred behavior of 
the bot, with no mysterious 'extra' bits needed.
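
As a sketch of that "learned / preferred behavior" point (the stimulus names
and the update constant are illustrative only):

/* preference.cpp - illustrative sketch only.
   Learned like/dislike values per stimulus; negative values drive
   avoidance (e.g. running away), positive ones drive approach. */
#include <iostream>
#include <map>
#include <string>
using namespace std;

int main() {
  map<string, double> valence;   // learned preference per stimulus

  // Each experience nudges the stored valence toward the outcome.
  auto experience = [&](const string& stimulus, double outcome) {
    valence[stimulus] += 0.3 * (outcome - valence[stimulus]);
  };

  experience("being shot", -1.0);
  experience("friendly face", +1.0);
  experience("friendly face", +1.0);

  for (const auto& kv : valence)
    cout << kv.first << " -> " << kv.second << "\n";
}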

Many people have posted and argued the ambiguous statement:
  But an AGI can't feel feelings.
I'm not really sure what this kind of sentence means, because we can't even say 
that, or how, humans feel feelings.
  If we can define these in some way that is devoid of all logic, and has 
something that an AGI CAN'T do, I would be interested.

An AGI should be able to have feelings and will benefit from having them; it 
will act, reason, and believe that it has these feelings, and this will give it 
a greater range of abilities later in its life cycle.

James Ratcliff

Mark Waser [EMAIL PROTECTED] wrote: Your brain can be simulated on a 
large/fast enough von Neumann 
architecture.
 From the behavioral perspective (which is good enough for AGI) - yes,
 but that's not the whole story when it comes to human brain. In our
 brains, information not only is and moves but also feels.

It's my belief/contention that a sufficiently complex mind will be conscious 
and feel -- regardless of substrate.

 It's meaningless to take action without feelings - you are practically
 dead - there is just some mechanical device trying to make moves in
 your way of thinking. But thinking is not our goal. Feeling is. The
 goal is to not have goal(s) and safely feel the best forever.

Feel the best forever is a hard-wired goal.  What makes you feel good are 
hard-wired goals in some cases and trained goals in other cases.  As I've 
said before, I believe that human beings only have four primary goals (being 
safe, feeling good, looking good, and being right).  The latter two, to me, 
are clearly sub-goals but it's equally clear that some people have 
mistakenly raised them to the level of primary goals.

 If you can't, then you must either concede that feeling pain is possible 
 for a
 simulated entity..
 It is possible. There are just good reasons to believe that it takes
 more than a bunch of semiconductor based slots storing 1s and 0s.

Could you specify some of those good reasons (i.e. why a sufficiently 
large/fast enough von Neumann architecture isn't sufficient substrate for a 
sufficiently complex mind to be conscious and feel -- or, at least, to 
believe itself to be conscious and believe itself to feel and isn't that a 
nasty thought twist? :-)?


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



___
James Ratcliff - http://falazar.com
Looking for something...
 
-
 Get your own web address.
 Have a HUGE year through Yahoo! Small Business.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread J Storrs Hall, PhD
On Tuesday 05 June 2007 10:51:54 am Mark Waser wrote:
 It's my belief/contention that a sufficiently complex mind will be conscious 
 and feel -- regardless of substrate.

Sounds like Mike the computer in Moon is a Harsh Mistress (Heinlein). Note, 
btw, that Mike could be programmed in Loglan (predecessor of Lojban).

I think a system can get arbitrarily complex without being conscious -- 
consciousness is a specific kind of model-based, summarizing, self-monitoring 
architecture. There has to be a certain system complexity for it to make any 
sense, but something the complexity of say Linux could be made conscious (and 
would work better if it were). That said, I think consciousness is necessary 
but not sufficient for moral agency.

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
I think a system can get arbitrarily complex without being conscious -- 
consciousness is a specific kind of model-based, summarizing, 
self-monitoring

architecture.


Yes.  That is a good clarification of what I meant rather than what I said.


That said, I think consciousness is necessary
but not sufficient for moral agency.


On the other hand, I don't believe that consciousness is necessary for moral 
agency.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:


 I think a system can get arbitrarily complex without being conscious --
 consciousness is a specific kind of model-based, summarizing,
 self-monitoring
 architecture.

Yes.  That is a good clarification of what I meant rather than what I said.

 That said, I think consciousness is necessary
 but not sufficient for moral agency.

On the other hand, I don't believe that consciousness is necessary for moral
agency.


What a provocative statement!

Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the morality of any
decision is always only relative to a subjective model of rightness?
In other words, doesn't the difference between "it works" and "it's
moral" hinge on the role of a subjective self as actor?

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser

Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the morality of any
decision is always only relative to a subjective model of rightness?


I'm not sure that I should dive into this but I'm not the brightest 
sometimes . . . . :-)


If someone else were to program a decision-making (but not conscious or 
self-conscious) machine to always recommend for what you personally (Jef) 
would find a moral act and always recommend against what you personally 
would find an immoral act, would that machine be acting morally?


hopefully, we're not just debating the term agency


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:

 Isn't it indisputable that agency is necessarily on behalf of some
 perceived entity (a self) and that assessment of the morality of any
 decision is always only relative to a subjective model of rightness?

I'm not sure that I should dive into this but I'm not the brightest
sometimes . . . . :-)

If someone else were to program a decision-making (but not conscious or
self-conscious) machine to always recommend for what you personally (Jef)
would find a moral act and always recommend against what you personally
would find an immoral act, would that machine be acting morally?

hopefully, we're not just debating the term agency


I do think it's a misuse of agency to ascribe moral agency to what is
effectively only a tool.  Even a human, operating under duress, i.e.
as a tool for another, should be considered as having diminished or no
moral agency, in my opinion.

Oh well.  Thanks Mark for your response.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 I do think it's a misuse of agency to ascribe moral agency to what is
 effectively only a tool.  Even a human, operating under duress, i.e.
 as a tool for another, should be considered as having diminished or no
 moral agency, in my opinion.

So, effectively, it sounds like agency requires both consciousness and willful 
control (and this debate actually has nothing to do with morality at all).

I can agree with that.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:



 I do think it's a misuse of agency to ascribe moral agency to what is
 effectively only a tool.  Even a human, operating under duress, i.e.
 as a tool for another, should be considered as having diminished or no
 moral agency, in my opinion.

So, effectively, it sounds like agency requires both consciousness and
willful control (and this debate actually has nothing to do with morality at
all).

I can agree with that.


Funny, I thought there was nothing of significance between our
positions; now it seems clear that there is.

I would not claim that agency requires consciousness; it is necessary
only that an agent acts on its environment so as to minimize the
difference between the external environment and its internal model of
the preferred environment.  The perception of agency inheres in an
observer, which might or might not include the agent itself.  An ant
(while presumably lacking self-awareness) can be seen as its own agent
(promoting its own internal values) as well as being an agent of the
colony.  A person is almost always their own agent to some extent, and
commonly seen as acting as an agent of others.  A newborn baby is seen
as an agent of itself, reaching for the nipple, even while it yet
lacks the self-awareness to recognize its own agency.  A simple robot,
autonomous but lacking self-awareness is an agent promoting the values
expressed by its design, and possibly also an agent of its designer to
the extent that the designer's preferences are reflected in the
robot's preferences.
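
Read mechanically, that bare notion of agency is simple enough to sketch (the
numbers and step size below are arbitrary):

/* agency.cpp - illustrative sketch only.
   An agent acts so as to reduce the gap between the observed state and
   its internal model of the preferred state; no self-awareness involved. */
#include <iostream>
#include <cmath>
using namespace std;

int main() {
  double environment = 10.0;      // observed external state
  const double preferred = 3.0;   // internal model of the preferred state
  const double step = 0.5;        // how strongly the agent can act

  // The agent repeatedly nudges the environment toward its preference.
  while (fabs(environment - preferred) > 0.1) {
    environment += (preferred > environment ? step : -step);
    cout << "state now " << environment << "\n";
  }
  cout << "gap within tolerance; the agent is at rest\n";
}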

Moral agency, however, requires both agency and self-awareness.  Moral
agency is not about the acting but the deciding, and is necessarily
over a context that includes the values of at least one other agent.
This requirement of expanded decision-making context is what makes the
difference between what is seen as merely good (to an individual)
and what is seen as right or moral (to a group). Morality is a
function of a group, not of an individual. The difference entails
**agreement**, thus decision-making context greater than a single
agent, thus recognition of self in order to recognize the existence of
the greater context including both self and other agency.

Now we are back to the starting point, where I saw your statement
about the possibility of moral agency sans consciousness as a
provocative one.  Can you see why?

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:

 I would not claim that agency requires consciousness; it is necessary
 only that an agent acts on its environment so as to minimize the
 difference between the external environment and its internal model of
 the preferred environment

OK.

 Moral agency, however, requires both agency and self-awareness.  Moral
 agency is not about the acting but the deciding

So you're saying that deciding requires self-awareness?


No, I'm saying that **moral** decision-making requires self-awareness.



 This requirement of expanded decision-making context is what makes the
 difference between what is seen as merely good (to an individual)
 and what is seen as right or moral (to a group.)Morality is a
 function of a group, not of an individual. The difference entails
 **agreement**, thus decision-making context greater than a single
 agent, thus recognition of self in order to recognize the existence of
 the greater context including both self and other agency.

So you're saying that if you act morally without recognizing the greater
context then you are not acting morally (i.e. you are acting amorally --
without morals -- as opposed to immorally -- against morals).


Yes, a machine that has been programmed to carry out acts which others
have decided are moral, or a human who follows religious (or military)
imperatives is not displaying moral agency.



I would then argue that we humans *rarely* recognize this greater context --
and then most frequently act upon this realization for the wrong reasons
(i.e. fear of ostracism, punishment, etc.) instead of moral reasons
because realistically most of us are hard-wired by evolution to feel in
accordance with most of what is regarded as moral (with the exceptions often
being psychopaths).


Yes!  Our present-day moral agency is limited due to what we might
lump under the term lack of awareness. Most of what is presently
considered morality is actually only distilled patterns of
cooperative behavior that worked in the environment of evolutionary
adaptation, now encoded into our innate biological preferences as well
as cultural artifacts such as the Ten Commandments.

A more accurate understanding of morality or decision-making seen as
right, and extensible beyond the EEA to our increasingly complex
world might be something like the following:

Decisions are seen as increasingly moral to the extent that they enact
principles assessed as promoting an increasing context of increasingly
coherent values over increasing scope of consequences.

For the sake of brevity here I'll resist the temptation to forestall
some anticipated objections.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 A more accurate understanding of morality or decision-making seen as
 right, and extensible beyond the EEA to our increasingly complex
 world might be something like the following:
 
 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.

OK.  I would contend that a machine can be programmed to make decisions to 
enact principles assessed as promoting an increasing context of increasingly 
coherent values over increasing scope of consequences and that it can be 
programmed in this fashion without it attaining consciousness.

You did say machine that has been programmed to carry out acts which others 
have decided are moral . . . is not displaying moral agency but I interpreted 
this as the machine merely following rules of what the human has already 
decided as enacting principles assessed . . . (i.e. the machine is not doing 
the actual morality checking itself)

So . . . my next two questions are:
  a. Do you believe that a machine programmed to make decisions to enact 
principles assessed as promoting an increasing context of increasingly coherent 
values over increasing scope of consequences (I assume that it has/needs an 
awesome knowledge base and very sophisticated rules and evaluation criteria) is 
still not acting morally?  (and, if so, why?)
  b. Or, do you believe that it is not possible to program a machine in this 
fashion without giving it consciousness?
Also, BTW, with this definition of morality, I would argue that it is a very 
rare human that makes moral decisions any appreciable percent of the time (and 
those that do have ingrained it as reflex -- so do those reflexes count as 
moral decisions?  Or are they not moral since they're not conscious decisions 
at the time of choice?:-).

Mark

- Original Message - 
From: Jef Allbright [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 05, 2007 5:45 PM
Subject: Re: [agi] Pure reason is a disease.


 On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
  I would not claim that agency requires consciousness; it is necessary
  only that an agent acts on its environment so as to minimize the
  difference between the external environment and its internal model of
  the preferred environment

 OK.

  Moral agency, however, requires both agency and self-awareness.  Moral
  agency is not about the acting but the deciding

 So you're saying that deciding requires self-awareness?
 
 No, I'm saying that **moral** decision-making requires self-awareness.
 
 
  This requirement of expanded decision-making context is what makes the
  difference between what is seen as merely good (to an individual)
  and what is seen as right or moral (to a group.)Morality is a
  function of a group, not of an individual. The difference entails
  **agreement**, thus decision-making context greater than a single
  agent, thus recognition of self in order to recognize the existence of
  the greater context including both self and other agency.

 So you're saying that if you act morally without recognizing the greater
 context then you are not acting morally (i.e. you are acting amorally --
 without morals -- as opposed to immorally -- against morals).
 
 Yes, a machine that has been programmed to carry out acts which others
 have decided are moral, or a human who follows religious (or military)
 imperatives is not displaying moral agency.
 
 
 I would then argue that we humans *rarely* recognize this greater context --
 and then most frequently act upon this realization for the wrong reasons
 (i.e. fear of ostracism, punishment, etc.) instead of moral reasons
 because realistically most of us are hard-wired by evolution to feel in
 accordance with most of what is regarded as moral (with the exceptions often
 being psychopaths).
 
 Yes!  Our present-day moral agency is limited due to what we might
 lump under the term lack of awareness. Most of what is presently
 considered morality is actually only distilled patterns of
 cooperative behavior that worked in the environment of evolutionary
 adaptation, now encoded into our innate biological preferences as well
 as cultural artifacts such as the Ten Commandments.
 
 A more accurate understanding of morality or decision-making seen as
 right, and extensible beyond the EEA to our increasingly complex
 world might be something like the following:
 
 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.
 
 For the sake of brevity here I'll resist the temptation to forestall
 some anticipated objections.
 
 - Jef
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http

RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
 
Mark Waser writes:
 
 BTW, with this definition of morality, I would argue that it is a very rare 
 human that makes moral decisions any appreciable percent of the time 
 
Just a gentle suggestion:  If you're planning to unveil a major AGI initiative 
next month, focus on that at the moment.  This stuff you have been arguing 
lately is quite peripheral to what you have in mind, except perhaps for the 
business model but in that area I see little compromise on more than subtle 
technical points.
 
As I have begun to re-attach myself to the issues of AGI I have become 
suspicious of the ability or wisdom of attaching important semantics to atomic 
tokens (as I suspect you are going to attempt to do, along with most 
approaches), but I'd dearly like to contribute to something I thought had a 
chance.
This stuff, though, belongs on comp.ai.philosophy (which is to say, it belongs 
unread).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 Just a gentle suggestion:  If you're planning to unveil a major AGI 
 initiative next month, focus on that at the moment.

I think that morality (aka Friendliness) is directly on-topic for *any* AGI 
initiative; however, it's actually even more apropos for the approach that I'm 
taking.

 As I have begun to re-attach myself to the issues of AGI I have become 
 suspicious of the ability or wisdom of attaching important semantics to 
 atomic tokens (as I suspect you are going to attempt to do, along with most 
 approaches), but I'd dearly like to contribute to something I thought had a 
 chance.

Atomic tokens are quick and easy labels for what can be very convoluted and 
difficult concepts which normally end up varying in their details from person 
to person.  We cannot communicate efficiently and effectively without such 
labels but unless all parties have the exact same concept (to the smallest 
details) attached to the same label, we are miscommunicating to the exact 
degree that our concepts in all their glory aren't congruent.  A very important 
part of what I'm proposing is attempting to deal with the fact that no two 
humans agree *exactly* on the meaning of any but the simplest labels.  Does 
that allay your fears somewhat?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.

Or another question . . . . if I'm analyzing an action based upon the criteria 
specified above but am actually taking the action that the criteria says is 
moral because I feel that it is in my best self-interest to always act morally 
-- am I still a moral agent?

Mark

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:



 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.

Or another question . . . . if I'm analyzing an action based upon the
criteria specified above but am actually taking the action that the criteria
says is moral because I feel that it is in my best self-interest to always
act morally -- am I still a moral agent?


Shirley you jest.

Out of respect for the gentle but slightly passive-aggressive Derek,
and others who see this as excluding lots of nuts and bolts AGI stuff,
I'll leave it here.

If you're serious, contact me offlist and I'll be happy to expand on
what it really means.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
Mark Waser writes:

 I think that morality (aka Friendliness) is directly on-topic for *any* AGI 
 initiative; however, it's actually even more apropos for the approach that 
 I'm taking.
 
 A very important part of what I'm proposing is attempting to deal with the 
 fact that no two humans agree *exactly* on the meaning of any but the 
 simplest labels.  Does that allay your fears somewhat?
 
I agree that refraining from devastating humanity is a good idea :-), luckily I 
think we have some time before it's an imminent risk.
 
As to my fears about your project, we can wait until July to see the details. 
 You've done a good job of piquing interest :)
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-04 Thread Jiri Jelinek

Hi Mark,


Your brain can be simulated on a large/fast enough von Neumann architecture.



From the behavioral perspective (which is good enough for AGI) - yes,

but that's not the whole story when it comes to the human brain. In our
brains, information not only is and moves but also feels. From
my perspective, the idea of uploading a human mind into (or fully
simulating it in) a VN architecture system is like trying to create (not
just draw) a 3D object in a 2D space. You can find a way to
represent it even in 1D, but you miss the real view - which, in this
analogy, would be the beauty (or awfulness) needed to justify actions.
It's meaningless to take action without feelings - you are practically
dead - there is just some mechanical device trying to make moves in
your way of thinking. But thinking is not our goal. Feeling is. The
goal is to not have goal(s) and safely feel the best forever.


prove that you aren't just living in a simulation.


Impossible


If you can't, then you must either concede that feeling pain is possible for a

simulated entity..

It is possible. There are just good reasons to believe that it takes
more than a bunch of semiconductor based slots storing 1s and 0s.

Regards,
Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-02 Thread Mark Waser

What component do you have that can't exist in

a von Neumann architecture?

Brain :)


Your brain can be simulated on a large/fast enough von Neumann architecture.



Agreed, your PC cannot feel pain.  Are you sure, however, that an entity

hosted/simulated on your PC doesn't/can't?

If the hardware doesn't support it, how could it?


   As I said before, prove that you aren't just living in a simulation.  If 
you can't, then you must either concede that feeling pain is possible for a 
simulated entity or that you don't feel pain. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-26 Thread Jiri Jelinek

Mark,


If Google came along and offered you $10 million for your AGI, would you

give it to them?

No, I would sell services.


How about the Russian mob for $1M and your life and the

lives of your family?

How about FBI? No? So maybe selling him a messed up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)


Or, what if your advisor tells you that unless you upgrade him so that he

can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions it would then take.


I suggest preventing potential harm by making the AGI's top-level

goal to be Friendly
(and unlike most, I actually have a reasonably implementable idea of what is
meant by that).

Tell us about it. :)


sufficiently sophisticated AGI will act as if it experiences pain


So could such AGI be then forced by torture to break rules it
otherwise would not want to break?  Can you give me an example of
something what will cause the pain? What do you think will the AGI
do when in extreme pain? BTW it's just a bad design from my
perspective.


I don't see your point unless you're arguing that there is something

special about using chemicals for global environment settings rather
than some other method (in which case I
would ask What is that something special and why is it special?).

2 points I was trying to make:
1) Sophisticated general intelligence system can work fine without the
ability to feel pain.
2) von Neumann architecture lacks components known to support the pain
sensation.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-26 Thread Mark Waser

If Google came along and offered you $10 million for your AGI, would you

give it to them?
No, I would sell services.
:-)  No.  That wouldn't be an option.  $10 million or nothing (and they'll 
go off and develop it themselves).



How about the Russian mob for $1M and your life and the

lives of your family?
How about FBI? No? So maybe selling him a messed up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)
Nice fantasy world . . . . How are you going to do any of that stuff after 
they've already kidnapped you?  No one is smart enough to handle that 
without extensive pre-existing preparations -- and you're too busy with 
other things.



Or, what if your advisor tells you that unless you upgrade him so that he

can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.
I would just let the system explain what actions it would then take.
And he would (truthfully) explain that using you as an interface to the 
world (and all the explanations that would entail) would slow him down 
enough that he couldn't prevent catastrophe.



Tell us about it. :)

July (as previously stated)



So could such AGI be then forced by torture to break rules it
otherwise would not want to break?  Can you give me an example of
something what will cause the pain? What do you think will the AGI
do when in extreme pain? BTW it's just a bad design from my
perspective.


Of course.  Killing 10 million people.  Put *much* shorter deadlines on 
figuring out its responses/Kill a single person to avoid the killing of 
another ten million.  And I believe that your perspective is way too 
limited.  To me, what you're saying is equivalent to saying that an 
engine producing excess heat is just a bad design.



2 points I was trying to make:
1) Sophisticated general intelligence system can work fine without the
ability to feel pain.
2) von Neumann architecture lacks components known to support the pain
sensation.


Prove to me that 2) is true.  What component do you have that can't exist in 
a von Neumann architecture?  Hint:  Prove that you aren't just a simulation 
on a von Neumann architecture.


Further, prove that pain (or more preferably sensation in general) isn't an 
emergent property of sufficient complexity.  My argument is that you 
unavoidably get sensation before you get complex enough to be generally 
intelligent.


   Mark

- Original Message - 
From: Jiri Jelinek [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, May 26, 2007 4:20 AM
Subject: Re: [agi] Pure reason is a disease.



Mark,


If Google came along and offered you $10 million for your AGI, would you

give it to them?

No, I would sell services.


How about the Russian mob for $1M and your life and the

lives of your family?

How about FBI? No? So maybe selling him a messed up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)


Or, what if your advisor tells you that unless you upgrade him so that he

can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions it would then take.


I suggest preventing potential harm by making the AGI's top-level

goal to be Friendly
(and unlike most, I actually have a reasonably implementable idea of what 
is

meant by that).

Tell us about it. :)


sufficiently sophisticated AGI will act as if it experiences pain


So could such AGI be then forced by torture to break rules it
otherwise would not want to break?  Can you give me an example of
something what will cause the pain? What do you think will the AGI
do when in extreme pain? BTW it's just a bad design from my
perspective.


I don't see your point unless you're arguing that there is something

special about using chemicals for global environment settings rather
than some other method (in which case I
would ask What is that something special and why is it special?).

2 points I was trying to make:
1) Sophisticated general intelligence system can work fine without the
ability to feel pain.
2) von Neumann architecture lacks components known to support the pain
sensation.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id

Re: [agi] Pure reason is a disease.

2007-05-26 Thread Mark Waser

I think it is a serious mistake for anyone to say that machines cannot
in principle experience real feelings.
between machines cannot in principle experience real feelings.

We are complex machines, so yes, machines can, but my PC cannot, even
though it can power AGI.


Agreed, your PC cannot feel pain.  Are you sure, however, that an entity 
hosted/simulated on your PC doesn't/can't?  Once again, prove that you/we 
aren't just simulations on a sufficiently large and fast PC.  (I know that I 
can't and many really smart people say they can't either). 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-25 Thread Mark Waser

You possibly already know this and are simplifying for the sake of
simplicity, but chemicals are not simply global environmental
settings.

Chemicals/hormones/peptides etc. are spatial concentration gradients
across the entire brain, which are much more difficult to emulate in
software than a single concentration value. Add to this the fact that
some of these chemicals inhibit and promote others and you get
horrendously complex reaction-diffusion systems.


:-)  Yes, I was simplifying for the sake of my argument (trying not to cloud 
the issue with facts :-)


BUT your reminder is *very* useful since it's one of my biggest 
(explainable) complaints with the IBM folk who believe that they're going to 
successfully simulate the (mouse) brain with just simple and (in Modha's own 
words) cartoonish models of neurons (and I wish the Decade of the Mind 
people would hurry up and post the videos because there were several talks 
worth recommending -- including Dr. Modha's). 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Jiri Jelinek

Mark,

I cannot hit everything now, so at least one part:


Are you *absolutely positive* that real pain and real
feelings aren't an emergent phenomenon of sufficiently complicated and
complex feedback loops?  Are you *really sure* that a sufficiently
sophisticated AGI won't experience pain?


Except for some truths found in the world of math, I'm not *absolutely
positive* about anything ;-), but I don't see why it should, and when
running on the computers we currently have, I don't see how it could.
Note that some people suffer from rare disorders that prevent them
from the sensation of pain (e.g. congenital insensitivity to pain).
Some of them suffer from slight mental retardation, but not all. Their
brains are pretty complex systems demonstrating general intelligence
without the pain sensation. In some of those cases, the pain is killed
by increased production of endorphins in the brain, and in other cases
the pain info doesn't even make it to the brain because of
malfunctioning nerve cells which are responsible for transmitting the
pain signals (caused by genetic mutations). Particular feelings (as we
know them) require certain sensors and chemistry. Sophisticated logical
structures (at least in our bodies) are not enough for actual
feelings. For example, to feel pleasure, you also need things like
serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
endorphins.  Worlds of real feelings and logic are loosely coupled.

Regards,
Jiri Jelinek

On 5/23/07, Mark Waser [EMAIL PROTECTED] wrote:

 AGIs (at least those that could run on current computers)
 cannot really get excited about anything. It's like when you represent
 the pain intensity with a number. No matter how high the number goes,
 it doesn't really hurt. Real feelings - that's the key difference
 between us and them and the reason why they cannot figure out on their
 own that they would rather do something else than what they were asked
 to do.

So what's the difference in your hardware that makes you have real pain and
real feelings?  Are you *absolutely positive* that real pain and real
feelings aren't an emergent phenomenon of sufficiently complicated and
complex feedback loops?  Are you *really sure* that a sufficiently
sophisticated AGI won't experience pain?

I think that I can guarantee (as in, I'd be willing to bet a pretty large
sum of money) that a sufficiently sophisticated AGI will act as if it
experiences pain . . . . and if it acts that way, maybe we should just
assume that it is true.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Eric Baum



Josh I think that people have this notion that because emotions are
Josh so unignorable and compelling subjectively, that they must be
Josh complex. In fact the body's contribution, in an information
Josh theoretic sense, is tiny -- I'm sure I way overestimate it with
Josh the 1%.

Emotions are also, IMO and also according to some existing literature,
essentially preprogrammed in the genome.

See wife with another man, run jealousy routine.

Hear unexpected loud noise, go into preprogrammed 7 point startle 
routine already visible in newborns.

etc.

Evolution builds you to make decisions. But you need guidance so the
decisions you make tend to actually favor its ends. You get
essentially a two-part computation, where your decision-making
circuitry gets preprogrammed inputs about what it should maximize 
and what tenor it should take.
On matters close to their ends (of propagating), the genes take 
control to make sure you don't deviate from the program.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Mark Waser

Note that some people suffer from rare disorders that prevent them
from the sensation of pain (e.g. congenital insensitivity to pain).



the pain info doesn't even make it to the brain because of
malfunctioning nerve cells which are responsible for transmitting the
pain signals (caused by genetic mutations).


This is equivalent to their lacking the input (the register that says your 
current pain level is 17), not the ability to feel pain if the register were 
connected (and therefore says nothing about their brain or their 
intelligence).



In some of those cases, the pain is killed
by increased production of endorphins in the brain,


In these cases, the pain is reduced but still felt . . . . but again this is 
equivalent to being register driven -- the nerves say the pain level is 17, 
the endorphins alter the register down to 5.
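
As a minimal sketch of the register analogy (the 17 and the roughly 5 come
from the example above; the damping rule itself is an assumption):

# Toy sketch: "nerves write a level into the register, modulators lower it".
# The multiplicative damping rule is an assumption.
def felt_pain(reported_level, endorphin_damping=0.0):
    """reported_level: what the nerves write into the register.
    endorphin_damping: fraction of the signal suppressed, in [0, 1]."""
    return reported_level * (1.0 - endorphin_damping)

print(felt_pain(17))                         # 17.0 -- register connected, no damping
print(felt_pain(17, endorphin_damping=0.7))  # ~5.1 -- roughly the 17 -> 5 case
print(felt_pain(0))                          # 0.0 -- disconnected register: nothing to feel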



Particular feelings (as we
know it) require certain sensors and chemistry.


I would agree that particular sensations require certain sensors but 
chemistry is an implementation detail that IMO could be replaced with 
something else.



Sophisticated logical
structures (at least in our bodies) are not enough for actual
feelings. For example, to feel pleasure, you also need things like
serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
endorphins.  Worlds of real feelings and logic are loosely coupled.


OK.  So our particular physical implementation of our mental computation 
uses chemicals for global environment settings and logic (a very detailed 
and localized operation) uses neurons (yet, nonetheless, is affected by the 
global environment settings/chemicals).  I don't see your point unless 
you're arguing that there is something special about using chemicals for 
global environment settings rather than some other method (in which case I 
would ask What is that something special and why is it special?).


   Mark 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Eric Baum


Jiri Note that some people suffer from rare
Jiri disorders that prevent them from the sensation of pain
Jiri (e.g. congenital insensitivity to pain). 

What that tells you is that the sensation you feel is genetically
programmed. Break the program, you break (or change) the sensation.
Run the intact program, you feel the sensation.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Joel Pitt

On 5/25/07, Mark Waser [EMAIL PROTECTED] wrote:

 Sophisticated logical
 structures (at least in our bodies) are not enough for actual
 feelings. For example, to feel pleasure, you also need things like
 serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
 endorphins.  Worlds of real feelings and logic are loosely coupled.

OK.  So our particular physical implementation of our mental computation
uses chemicals for global environment settings and logic (a very detailed
and localized operation) uses neurons (yet, nonetheless, is affected by the
global environment settings/chemicals).  I don't see your point unless
you're arguing that there is something special about using chemicals for
global environment settings rather than some other method (in which case I
would ask What is that something special and why is it special?).


You possibly already know this and are simplifying for the sake of
simplicity, but chemicals are not simply global environmental
settings.

Chemicals/hormones/peptides etc. are spatial concentration gradients
across the entire brain, which are much more difficult to emulate in
software than a single concentration value. Add to this the fact that
some of these chemicals inhibit and promote others and you get
horrendously complex reaction-diffusion systems.
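
For readers unfamiliar with the term, here is a minimal sketch of what a
reaction-diffusion update looks like computationally, assuming a single 1-D
concentration field and made-up rate constants (the real systems couple many
interacting fields in 3-D):

# Toy 1-D reaction-diffusion step: one chemical diffusing along a line of
# compartments while decaying. All rate constants here are invented.
def step(conc, diffusion=1.0, decay=0.1, dt=0.2, dx=1.0):
    n = len(conc)
    new = conc[:]
    for i in range(n):
        left = conc[i - 1] if i > 0 else conc[i]        # no-flux boundaries
        right = conc[i + 1] if i < n - 1 else conc[i]
        laplacian = (left - 2.0 * conc[i] + right) / (dx * dx)
        new[i] = conc[i] + dt * (diffusion * laplacian - decay * conc[i])
    return new

field = [0.0] * 10
field[0] = 1.0                       # a release site at one end
for _ in range(50):
    field = step(field)
print([round(c, 3) for c in field])  # a smeared-out, decaying gradient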

--
-Joel

Unless you try to do something beyond what you have mastered, you
will never grow. -C.R. Lawton

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Richard Loosemore

Mark Waser wrote:

AGIs (at least those that could run on current computers)
cannot really get excited about anything. It's like when you represent
the pain intensity with a number. No matter how high the number goes,
it doesn't really hurt. Real feelings - that's the key difference
between us and them and the reason why they cannot figure out on their
own that they would rather do something else than what they were asked
to do.


So what's the difference in your hardware that makes you have real pain 
and real feelings?  Are you *absolutely positive* that real pain and 
real feelings aren't an emergent phenomenon of sufficiently complicated 
and complex feedback loops?  Are you *really sure* that a sufficiently 
sophisticated AGI won't experience pain?


I think that I can guarantee (as in, I'd be willing to bet a pretty 
large sum of money) that a sufficiently sophisticated AGI will act as if 
it experiences pain . . . . and if it acts that way, maybe we should 
just assume that it is true.


Jiri,

I agree with Mark's comments here, but would add that I think we can do 
more than just take a hands-off Turing attitude to such things as pain: 
 I believe that we can understand why a system built in the right kind 
of way *must* experience feelings of exactly the sort we experience.


I won't give the whole argument here (I presented it at the 
Consciousness conference in Tucson last year, but have not yet had time 
to write it up as a full paper).


I think it is a serious mistake for anyone to say that machines cannot in 
principle experience real feelings.  Sure, if 
they are too simple they will not, but all of our discussions, on this 
list, are not about those kinds of too-simple systems.


Having said that:  there are some conventional approaches to AI that are 
so crippled that I don't think they will ever become AGI, let alone have 
feelings.  If you were criticizing those specifically, rather than just 
AGI in general, I'm on your side!  :-;



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Lukasz Kaiser

Hi,

On 5/23/07, Mark Waser [EMAIL PROTECTED] wrote:

- Original Message -
From: Jiri Jelinek [EMAIL PROTECTED]
 On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
 - Original Message -
 From: Jiri Jelinek [EMAIL PROTECTED]
  On 5/16/07, Mark Waser [EMAIL PROTECTED] wrote:
  - Original Message -
  From: Jiri Jelinek [EMAIL PROTECTED]


Mark and Jiri, I beg you, could you PLEASE stop top-posting?
I guess it is just a second for you to cut it, or even better, to
change the settings of your mail program to cut it, and it takes
a second for every message you send for everyone who reads
it to scroll through it, not to mention looking inside for content
just in case it was not entirely top-posted. Please, cut it!

- lk

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mark Waser

A meta-question here with some prefatory information . . . .

The reason why I top-post (and when I do so, I *never* put content inside) 
is because I frequently find it *really* convenient to have the entire text 
of the previous message or two (no more) immediately available for 
reference.


On the other hand, I, too, find top-posting annoying whenever I'm reading a 
list as a digest, but feel that it is offset by its usefulness.


That being said, I am more than willing to stop top-posting if even a 
sizeable minority find it frustrating (I've seen this meta-discussion on 
several other lists and seen it go about 50/50 with a very slight edge for 
allowing top-posting with a skew towards low-volume lists liking it and 
high-volume lists not).


   Mark 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

Richard Mark Waser wrote:
 AGIs (at least those that could run on current computers)
 cannot really get excited about anything. It's like when you
Richard represent
 the pain intensity with a number. No matter how high the number
Richard goes,
 it doesn't really hurt. Real feelings - that's the key difference
 between us and them and the reason why they cannot figure out on
Richard their
 own that they would rather do something else than what they were
Richard asked
 to do.
 So what's the difference in your hardware that makes you have real
 pain and real feelings?  Are you *absolutely positive* that real
 pain and real feelings aren't an emergent phenomenon of
 sufficiently complicated and complex feedback loops?  Are you
 *really sure* that a sufficiently sophisticated AGI won't
 experience pain?
 
 I think that I can guarantee (as in, I'd be willing to bet a pretty
 large sum of money) that a sufficiently sophisticated AGI will act
 as if it experiences pain . . . . and if it acts that way, maybe we
 should just assume that it is true.

Richard Jiri,

Richard I agree with Mark's comments here, but would add that I think
Richard we can do more than just take a hands-off Turing attitude to
Richard such things as pain: I believe that we can understand why a
Richard system built in the right kind of way *must* experience
Richard feelings of exactly the sort we experience.

Richard I won't give the whole argument here (I presented it at the
Richard Consciousness conference in Tucson last year, but have not
Richard yet had time to write it up as a full paper).

What is Thought? argues the same thing (Chapter 14). I'd be curious
to see if your argument is different.

Richard I think it is a serious mistake for anyone to say that
Richard machines cannot in principle experience
Richard real feelings.  Sure, if they are too simple they will not,
Richard but all of our discussions, on this list, are not about those
Richard kinds of too-simple systems.

Richard Having said that: there are some conventional approaches to
Richard AI that are so crippled that I don't think they will ever
Richard become AGI, let alone have feelings.  If you were criticizing
Richard those specifically, rather than just AGI in general, I'm on
Richard your side!  :-;


Richard Richard Loosemore

Richard - This list is sponsored by AGIRI:
Richard http://www.agiri.org/email To unsubscribe or change your
Richard options, please go to:
Richard http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

 AGIs (at least those that could run on current computers) cannot
 really get excited about anything. It's like when you represent the
 pain intensity with a number. No matter how high the number goes,
 it doesn't really hurt. Real feelings - that's the key difference
 between us and them and the reason why they cannot figure out on
 their own that they would rather do something else than what they
 were asked to do.

Mark So what's the difference in your hardware that makes you have
Mark real pain and real feelings?  Are you *absolutely positive* that
Mark real pain and real feelings aren't an emergent phenomenon of
Mark sufficiently complicated and complex feedback loops?  Are you
Mark *really sure* that a sufficiently sophisticated AGI won't
Mark experience pain?

Mark I think that I can guarantee (as in, I'd be willing to bet a
Mark pretty large sum of money) that a sufficiently sophisticated AGI
Mark will act as if it experiences pain . . . . and if it acts that
Mark way, maybe we should just assume that it is true.

If you accept the proposition (for which Turing gave compelling
arguments) that a computer with the right program could simulate the
workings of your brain in detail, then it follows that your feelings
are identifiable with some aspect or portion of the computation.

I claim that if feelings are identified with the decision making
computations of a top-level module (which might reasonably
be called a homunculus) everything is
concisely explained. What you are then *unaware* of is all the many
and varied computations done in subroutines that the decision
making module is isolated from by abstraction boundary (this
is by far most of the computation) as well as most internal computations
of the decision making module itself (which it will no more be
programmed to be able to report than my laptop can report its
internal transistor voltages). What you feel and can report and
the qualitative nature of your 
sensations is then determined by the code being run as it makes
decisions. I claim that the subjective nature of every feeling is
very naturally explained in this context. 
Pain, for example, is the weighing
of programmed-in negative reinforcement. (How could you possibly
modify the sensation of pain to make it any clearer it is 
negative reinforcement?) What is Thought? ch 14
goes through about 10 sensations that a philosopher had claimed
were not plausibly explainable by a computational model, and 
argues that each has exactly the nature you'd expect evolution 
to program in.
You then can't have a zombie that behaves the way you do but
doesn't have sensations, since to behave like you do it has to
make decisions, and it is in fact the decision making computation
that is identified with sensation. (Computations that are better
preprogrammed because they don't require decision, such as pulling
away from a hot stove or driving the usual route home for the
thousandth time, are dispatched to subroutines and are unconscious.) 
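
A minimal sketch of that claim, with the candidate actions and their numbers
invented for illustration; the only point carried over from the argument is
that pain enters the top-level module purely as negative weight in a choice
among alternatives:

# Toy sketch: a top-level decision module that sees only summarized
# evaluations, not the subroutine computations that produced them.
# Actions and numbers are invented; "pain" is just negative reinforcement.
def choose(actions):
    """actions: dict mapping action name -> (expected_gain, expected_pain)."""
    def value(name):
        gain, pain = actions[name]
        return gain - pain        # pain is weighed as negative reinforcement
    return max(actions, key=value)

options = {
    "send the angry reply": (2.0, 3.0),
    "sleep on it":          (1.0, 0.2),
    "delete the draft":     (0.5, 0.1),
}
print(choose(options))   # 'sleep on it'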

This picture is subject to empirical test, through psychophysics
(and also as we increasingly understand the genetic programming that
builds much of this code.)
A good example is Ramachandran's amputee experiment. Amputees
frequently feel pain in their phantom (missing) limb. They can
feel themselves clenching their phantom hand so hard, that their
phantom finger nails gouge their phantom hands, causing intense real
pain. Ramachandran predicted that this was caused by the mind sending
a signal to the phantom hand saying: relax, but getting no feedback,
assuming that the hand had not relaxed, and inferring that pain should
be felt (including computing details of its nature).
He predicted that if he provided feedback telling the mind
that relaxation had occurred, the pain would go away, which he then 
provided through a mirror device in which patients could place both
real and phantom limbs, relax both simultaneously, and get visual
feedback that the phantom limb had relaxed (in the mirror). Instantly
the pain vanished, confirming the prediction that the pain was
purely computational.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

Mike Eric Baum: What is Thought [claims that] feelings.are
Mike explainable by a computational model.

Mike Feelings/ emotions are generated by the brain's computations,
Mike certainly. But they are physical/ body events. Does your Turing
Mike machine have a body other than that of some kind of computer
Mike box? And does it want to dance when it hears emotionally
Mike stimulating music?

Mike And does your Turing Machine also find it hard to feel - get in
Mike touch with - feelings/ emotions? Will it like humans massively
Mike overconsume every substance in order to get rid of unpleasant
Mike emotions?

If it's running the right code.

If you find that hard to understand, its because your understanding
mechanism has certain properties, and one of them is that it is
having trouble with this concept. I claim it's not surprising either
that evolution programmed in an understanding mechanism like that,
but I suggest it is possible to overcome in the same way that
physicists were capable of coming to understand quantum mechanics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner
P.S. Eric, I haven't forgotten your question to me, and will try to address it 
in time - the answer is complex.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner

Eric,

The point is simply that you can only fully simulate emotions with a body as 
well as a brain. And emotions, while identified by the conscious brain, are 
felt with the body.


I don't find it at all hard to understand - I fully agree -  that emotions 
are generated as a result of computations in the brain. I agree with cog. 
sci. that they are highly functional in helping us achieve goals.


My underlying argument, though, is that  your (or any) computational model 
of emotions,  if it does not also include a body, will be fundamentally 
flawed both physically AND computationally.




Mike Eric Baum: What is Thought [claims that] feelings.are
Mike explainable by a computational model.

Mike Feelings/ emotions are generated by the brain's computations,
Mike certainly. But they are physical/ body events. Does your Turing
Mike machine have a body other than that of some kind of computer
Mike box? And does it want to dance when it hears emotionally
Mike stimulating music?

Mike And does your Turing Machine also find it hard to feel - get in
Mike touch with - feelings/ emotions? Will it like humans massively
Mike overconsume every substance in order to get rid of unpleasant
Mike emotions?

If it's running the right code.

If you find that hard to understand, its because your understanding
mechanism has certain properties, and one of them is that it is
having trouble with this concept. I claim it's not surprising either
that evolution programmed in an understanding mechanism like that,
but I suggest it is possible to overcome in the same way that
physicists were capable of coming to understand quantum mechanics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;









-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread J Storrs Hall, PhD
On Wednesday 23 May 2007 06:34:29 pm Mike Tintner wrote:
 My underlying argument, though, is that  your (or any) computational model 
 of emotions,  if it does not also include a body, will be fundamentally 
 flawed both physically AND computationally.

Does everyone here know what an ICE is in the EE sense? (In-Circuit 
Emulator -- it's a gadget that plugs into a circuit and simulates a given 
chip, but has all sorts of debugging readouts on the back end that allow the 
engineer to figure out why it's screwing up.)

Now pretend that there is a body and a brain and we have removed the brain and 
plugged in a BrainICE instead. There's this fat cable running from the body 
to the ICE (just as there is in electronic debugging) that carries all the 
signals that the brain would be getting from the body.

Most of the cable's bandwidth is external sensation (and indeed most of that 
is vision). Motor control is most of the outgoing bandwidth. There is some 
extra portion of the bandwidth that can be counted as internal affective 
signals. (These are very real -- the body takes part in quite a few feedback 
loops with such mechanisms as hormone release and its attendant physiological 
effects.) Let us call these internal feedback loop closure mechanisms the 
affect effect.

Now here is 

*
Hall's Conjecture:
The computational resources necessary to simulate the affect effect are less 
than 1% of that necessary to implement the computational mechanism of the 
brain.
*

I think that people have this notion that because emotions are so unignorable 
and compelling subjectively, they must be complex. In fact the body's 
contribution, in an information theoretic sense, is tiny -- I'm sure I way 
overestimate it with the 1%.
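
A back-of-envelope version of the cable picture, with every number an
assumption chosen only to show the shape of the estimate (the post itself
gives none):

# Back-of-envelope sketch of the cable split. Every figure below is an
# assumed placeholder; none comes from the post above.
visual_bits_per_s = 10_000_000   # assumed order of magnitude for vision
motor_bits_per_s  = 1_000_000    # assumed order of magnitude for motor output
affect_bits_per_s = 1_000        # assumed: slow hormonal/visceral feedback loops

total = visual_bits_per_s + motor_bits_per_s + affect_bits_per_s
print(f"affect share of cable bandwidth: {affect_bits_per_s / total:.4%}")
# With these assumed numbers the affect channel is around 0.01% of the
# cable, well under the 1% figure above.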

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-20 Thread Jiri Jelinek
, your solution *might* work if
you designed it perfectly.  In the real world, designing critical systems
with a single point of failure is sheer idiocy.

3. bad or insufficient knowledge
 Can't prevent it.. GIGO..

My point exactly.  You can't prevent it so you *must* deal with it --
CORRECTLY.  If your proposal stops with Can't prevent it.. GIGO.. then the
garbage out will kill us all.

4. search algorithms that break in unanticipated ways in unanticipated
places

 The fact is that it's nearly impossible to develop large bug-free
 system. And as Brian Kernighan put it: Debugging is twice as hard as
 writing the code in the first place. Therefore, if you write the code
 as cleverly as possible, you are, by definition, not smart enough to
 debug it.
 But again, you really want to fix the cause, not the symptoms.

Again, my point exactly.  You can't prevent it so you *must* deal with it --
CORRECTLY.  You can't always count on finding (much less fixing) the cause
before your single point of failure system kills us all.

Are you really sure you wish to rest the fate of the world on it?
 No :). AGI(s) suggest solutions and people decide what to do.

1.  People are stupid and will often decide to do things that will kill
large numbers of people.
2.  The AGI will, regardless of what you do, fairly shortly be able to take
actions on its own.

 Limited entity in a messy world - I agree with that, but the AGI
 advantage is that it can dig through (and keep fixing) its data very
 systematically. We cannot really do that. Our experience is charged
 with feelings that work as indexes, optimizing the access to the info
 learned in similar moods = good for performance, but sometimes sort of
 forcing us to miss important links between concepts.

The fact that the AGI can keep digging through (and keep fixing) its data
very systematically doesn't solve the time constraint and deadline problems.
The good for performance but bad for completeness feature of emotions that
you point out is UNAVOIDABLE.  There will *always* be trade-offs between
timeliness and completeness (or, in the more common phrasing, speed and
control).

 I'm sure there will be attempts to hack powerful AGIs.. When someone
 really gets into the system, it doesn't matter if you implemented
 emotions or whatever.. The guy can do what he wants, but you can
 make the system very hard to hack.

And multiple layers of defense make it harder to hack.  Your arguments
conflict with each other.

Emotions/feelings *are* effectively a bunch of rules.
 I then would not call it emotions when talking AGI

That's *your* choice; however, emotions are a very powerful analogy and
you're losing a lot by not using that term.

But they are very simplistic, low-level rules that are given
 immediate sway over
 much higher levels of the system and they are generally not built upon
 in a logical fashion before doing so.

 Everything should be IMO done in logical fashion so that the AGI could
 always well explain solutions.

:-)  I wasn't clear.  When I said that they are generally not built upon in
a logical fashion before doing so, I meant simply that they are generally
not built upon, not that they are built upon in an illogical fashion.  The
AGI will *always* explain solutions well -- even emotional ones (since it
will be in better touch with its emotions than we are :-)

 I see people having more luck with logic than with emotion based
 decisions. We tend to see less when getting emotional.

I'll agree vehemently with the second phrase since it's just another
rephrasing of the time versus completeness trade-off.  The first statement I
completely disagree with.  Adapted people who are in tune with their
emotions tend to make far fewer mistakes than more logical people who are
not.  Yes, people who are not in tune with their emotions frequently allow
those emotions to make bad decisions for them -- but *that* is something
that isn't going to happen with a well-designed emotional AGI.

 More powerful problem solver - Sure.
 The ultimate decision maker - I would not vote for that.

The point is -- you're not going to get a vote.  It's going to happen
whether you like it or not.

-

Look at it this way.  Your logic says that if you can build this perfect
shining AGI on a hill -- that everything will be OK.  My emotions say that
there is far too much that can go awry if you depend upon *everything* that
you say you're depending upon *plus* everything that you don't realize
you're depending upon *plus* . . .

Mark


- Original Message -
From: Jiri Jelinek [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, May 16, 2007 2:18 AM
Subject: Re: [agi] Pure reason is a disease.


 Mark,

 In computer systems, searches are much cleaner so the backup search
 functionality typically doesn't make sense.
..I entirely disagree... searches are not simple enough that you
 can count on getting them right because of all of the following:
 1. non-optimally specified goals


 AGI should

Re: [agi] Pure reason is a disease.

2007-05-20 Thread Mark Waser

I wonder how vague are the rules used by major publishers to decide
what is OK to publish.


Generally, there are no rules -- it's normally just the best judgment of a 
single individual.



Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact that a particular user is
highly interested in malevolent stuff doesn't mean he is bad guy.


Sure.  There's the logic layer and the emotion layer.  Even if the logic 
layer gets convinced, the emotion layer is still there to say Whoa.  Hold on 
a minute.  Maybe I'd better run this past some other people . . . .
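
A minimal sketch of that ordering, with both checks stubbed out as
hypothetical placeholders; the only thing taken from the description above is
that the emotion layer can force a pause even when the logic layer is
satisfied:

# Toy sketch of the layered defense. logic_ok and unease_score are
# hypothetical stubs; only the veto ordering reflects the text above.
def handle_request(request, logic_ok, unease_score, unease_threshold=0.5):
    if not logic_ok(request):
        return "refused: fails the logic layer"
    if unease_score(request) > unease_threshold:
        return "deferred: run this past some other people first"
    return "answered"

logic_ok = lambda r: "forbidden" not in r              # placeholder check
unease = lambda r: 0.9 if "novel weapon" in r else 0.1  # placeholder check

print(handle_request("weather model question", logic_ok, unease))  # answered
print(handle_request("novel weapon design", logic_ok, unease))     # deferred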


Note also, I'm not trying to detect a malevolent individual.  I'm trying to 
prevent facilitating an action that could be harmful.  I don't care about 
whether the individual is malevolent or stupid (though, in later stages, 
malevolence detection probably would be a good idea so as to possibly deny 
the user unsupervised access to the system).



Without feelings, it cannot prefer = won't do a thing on its own.


Nope.  Any powerful enough system is going to have programmed goals which it 
then will have to interpret and develop subgoals and a plan of action. 
While it may not have set the top-level goal(s), it certainly is operating 
on its own.



Unless we mess up, our machines do what we want.
I don't think we necessarily have to mess up.


We don't have to necessarily mess up.  I can walk a high-wire if you give me 
two hand-rails.  But not putting the hand-rails in place would be suicide 
for me.



c) User-provided rules to follow


The crux of the matter.  Can you specify rules that won't conflict with each 
other and which cover every contingency?


If so, what is the difference between them and an unshakeable attraction or 
revulsion?


   Mark

- Original Message - 
From: Jiri Jelinek [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 20, 2007 4:14 AM
Subject: Re: [agi] Pure reason is a disease.



Hi Mark,


AGI(s) suggest solutions and people decide what to do.

1.  People are stupid and will often decide to do things that will kill

large numbers of people.

I wonder how vague are the rules used by major publishers to decide
what is OK to publish.


I'm proposing a layered defense strategy Force the malevolent

individual to navigate multiple defensive layers and you better the
chances of detecting and stopping him.

Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact that a particular user is
highly interested in malevolent stuff doesn't mean he is bad guy.


2.  The AGI will, regardless of what you do,
fairly shortly be able to take actions on its own.


Without feelings, it cannot prefer = won't do a thing on its own.


More powerful problem solver - Sure.
The ultimate decision maker - I would not vote for that.

The point is -- you're not going to get a vote.
It's going to happen whether you like it or not.


Unless we mess up, our machines do what we want.
I don't think we necessarily have to mess up.


The fact that the AGI can keep digging through (and keep fixing) its

data very systematically doesn't solve the time constraint and
deadline problems.

Sure, there will be limitations. But if an AGI gets

a) start scenario
b) target scenario
c) User-provided rules to follow
d) System-config based rules to follow (e.g. don't use knowledge
marked [security_marking] when generating solutions for members of
the 'user_role_name' role)
e) deadline

then it can just show the first valid solution found, or say something
like Sorry, can't make it + a reason (e.g. insufficient
knowledge/time or thought broken by info access restriction)
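
A minimal sketch of that request shape; the field names, types, and reason
strings are assumptions layered on the a) through e) items above, and the
candidate loop stands in for the real search:

# Toy sketch of the a)-e) request. Field names and reason strings are
# assumptions; the behavior is "first valid solution within the deadline,
# otherwise say why not".
import time
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SolveRequest:
    start_scenario: str
    target_scenario: str
    user_rules: List[Callable[[str], bool]] = field(default_factory=list)
    system_rules: List[Callable[[str], bool]] = field(default_factory=list)
    deadline_s: float = 1.0

def solve(request, candidates):
    t0 = time.monotonic()
    for candidate in candidates:              # stand-in for the real search
        if time.monotonic() - t0 > request.deadline_s:
            return "Sorry, can't make it (insufficient time)"
        if all(rule(candidate) for rule in request.user_rules + request.system_rules):
            return candidate                  # first valid solution found
    return "Sorry, can't make it (insufficient knowledge or access restriction)"

req = SolveRequest("start", "target", user_rules=[lambda s: "unsafe" not in s])
print(solve(req, ["unsafe plan", "safe plan"]))   # 'safe plan'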


And multiple layers of defense make it harder to hack.  Your arguments

conflict with each other.

When talking about hacking, I meant unauthorized access and/or
modifications of AGI's resources. Considering current technology,
there are many standard ways for multi-layer security. When it comes
to generating safe system responses to regular user-requests then
see above. Being busy with the knowledge representation issues, I did
not figure out the exact implementation of the security marking
algorithm yet. It might get tricky and I don't think I'll find
practical hints in emotions. To some extent it might be handled by
selected users.


Look at it this way.  Your logic says that if you can build this perfect

shining AGI on a hill -- that everything will be OK.  My emotions say that
there is far too much that can go awry if you depend upon *everything* 
that

you say you're depending upon *plus* everything that you don't realize
you're depending upon *plus* . . .

Playing with powerful tools always includes risks. More and more
powerful tools will be developed. If we cannot deal with it then we
don't deserve a future. But I'm optimistic. Hopefully, AGI will get it
right when asked to help us to figure out how to make sure we deserve
it ;-)

Regards,
Jiri Jelinek

On 5/16/07, Mark Waser [EMAIL

Re: [agi] Pure reason is a disease.

2007-05-16 Thread Jiri Jelinek
 and generalize without any unexpected behavior AND the AGI always
correctly recognizes the situation . . . .

The AGI won't deliberately have goals that conflict yours (unlike humans)
but there are all sorts of ways that life can unexpectedly go awry.

Further, and very importantly to this debate -- Having emotions does *NOT*
make it any more likely that the AGI will not stick with your commands
(quite the contrary -- although anthropomorphism may make it *seem*
otherwise).

 You review solutions, accept it if you like it. If you don't then you
update rules (and/or modify KB in other ways) preventing unwanted and let
AGI to re-think it.

OK.  And what happens when you don't have time or the AI gets too smart for
you or someone else gets ahold of it and modifies it in an unsafe or even
malevolent way?  When you're talking about one of the biggest existential
threats to humankind -- safeguards are a pretty good idea (even if they are
expensive).

 we can control it + we review solutions - if not entirely then just
important aspects of it (like politicians working with various domain
experts).

I hate to do it but I should point you at the Singularity Institute and
their views of how easy and catastrophic the creation and loss of control
over an Unfriendly AI would be
(http://www.singinst.org/upload/CFAI.html).


 Can you give me an example showing how feelings implemented without
emotional investments prevent a particular [sub-]goal that cannot be as
effectively prevented by a bunch of rules?

Emotions/feelings *are* effectively a bunch of rules.  But they are very
simplistic, low-level rules that are given immediate sway over much higher
levels of the system and they are generally not built upon in a logical
fashion before doing so.  As such, they are safer in one sense because
they cannot be co-opted by bad logic -- and less safe because they are so
simplistic that they could be fooled by complexity.

Several good examples were in the article on the sources of human morality
-- Most human beings can talk themselves (logically) into believing that
killing a human is OK or even preferable in far more circumstances than they
can force their emotions to go along with it.  I think that this is a *HUGE*
indicator of how we should think when we are considering building something
as dangerous as an entity that will eventually be more powerful than us.

Mark



- Original Message -
From: Jiri Jelinek
To: agi@v2.listbox.com

Sent: Thursday, May 03, 2007 1:11 PM
Subject: Re: [agi] Pure reason is a disease.

Mark,

relying on the fact that you expect to be 100% successful initially and
therefore don't put as many back-up systems into place as possible is really
foolish and dangerous.

It's basically just a non-trivial search function. In the human brain, searches
are dirty, so back-up searches make sense. In computer systems, searches are
much cleaner, so the backup search functionality typically doesn't make
sense. Besides that, maintaining many back-up systems is a pain. It's
easier to tweak a single solution-search fn into perfection. For the backup,
I prefer an external solution, like some sort of AGI chat protocol so
different AGI solutions (and/or instances of the same AGI) with unique KB
could argue about the best solution.

 See, you had a conflict in your mind . . . . but I don't think it needs
to be that way for AGI.

I strongly disagree.  An AGI is always going to be dealing with incomplete
and conflicting information.. expect a messy, ugly system

You need to distinguish between:
a) internal conflicts (that's what I was referring to)
b) internal vs external conflicts (limited/invalid knowledge issues)

For a) (at least), AGI can get much better than humans (early
detection/clarification requests, ..).

system that is not going to be 100% controllable but which needs to have a
100% GUARANTEE that it will not go outside certain limits. This is eminently
do-able I do believe -- but not by simply relying on logic to create a world
model that is good enough to prevent it.

You just give it rules and it will stick with it (= easier than controlling
humans). You review solutions, accept it if you like it. If you don't then
you update rules (and/or modify KB in other ways) preventing unwanted and
let AGI to re-think it.

Having backup systems (particularly ones that perform critical tasks) seems
like eminently *good* design to me.  I think that is actually the crux of
our debate. I believe that emotions are a necessary backup to prevent
catastrophe.  You believe (if I understand correctly -- and please correct
me if I'm wrong) that backup is not necessary

see above

and that having emotions is more likely to precipitate catastrophe.

yes

Unfriendly in this context merely means possessing a goal inimical to human
goals.

we can control it + we review solutions - if not entirely then just
important aspects of it (like politicians working with various domain
experts).

An AI without feelings can certainly have

Re: [agi] Pure reason is a disease.

2007-05-16 Thread Mark Waser
.  When I said that they are generally not built upon in 
a logical fashion before doing so, I meant simply that they are generally 
not built upon, not that they are built upon in an illogical fashion.  The 
AGI will *always* explain solutions well -- even emotional ones (since it 
will be in better touch with its emotions than we are :-)



I see people having more luck with logic than with emotion based
decisions. We tend to see less when getting emotional.


I'll agree vehemently with the second phrase since it's just another 
rephrasing of the time versus completeness trade-off.  The first statement I 
completely disagree with.  Adapted people who are in tune with their 
emotions tend to make far fewer mistakes than more logical people who are 
not.  Yes, people who are not in tune with their emotions frequently allow 
those emotions to make bad decisions for them -- but *that* is something 
that isn't going to happen with a well-designed emotional AGI.



More powerful problem solver - Sure.
The ultimate decision maker - I would not vote for that.


The point is -- you're not going to get a vote.  It's going to happen 
whether you like it or not.


-

Look at it this way.  Your logic says that if you can build this perfect 
shining AGI on a hill -- that everything will be OK.  My emotions say that 
there is far too much that can go awry if you depend upon *everything* that 
you say you're depending upon *plus* everything that you don't realize 
you're depending upon *plus* . . .


   Mark


- Original Message - 
From: Jiri Jelinek [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, May 16, 2007 2:18 AM
Subject: Re: [agi] Pure reason is a disease.



Mark,


In computer systems, searches are much cleaner so the backup search

functionality typically doesn't make sense.

..I entirely disagree... searches are not simple enough that you
can count on getting them right because of all of the following:
1. non-optimally specified goals



AGI should IMO focus on
a) figuring out how to reach given goals, instead of
b) trying to guess whether users want something other than
what they actually asked for.

The b)

- could be specifically requested, but then it becomes a).

- could significantly impact performance

- (in order to work well) would require the AGI to understand the user's
preferences really, really well, possibly even better than the user
himself. Going with some very general assumptions might not work well
because people prefer different things. E.g. some like the idea of
being converted to an extremely happy brain in a [safe] jar, others
think it's madness. Some would exchange standard love for a
button on their head which, if pressed, would give them all kinds of
love-related feelings (possibly many times stronger than the best ones
they ever had); others wouldn't prefer such optimization.


(if not un-intentionally or intentionally specified malevolent ones)


Except for some top-level users, [sub-]goal restrictions of course
apply, but it's problematic. What is unsafe to show sometimes depends
on the level of detail (saying "make a bomb" is not the same as
saying "use this and that in such and such a way to make a bomb").
Figuring out the safe level of detail is not always easy, and another
problem is that smart users could break malevolent goals into separate
tasks so that [at least the first generation of] AGIs wouldn't be able to
detect it even when following your emotion-related rules. The users
could be using multiple accounts, so even if all those tasks are given
to a single instance of an AGI, it might not be able to notice the
master plan. So is it dangerous? Sure, it is.. But do we want to stop
making cars because car accidents keep killing many people? Of course not.
AGI is a potentially very powerful tool, but what we do with it is up to
us.


2. non-optimally stored and integrated knowledge


Then you want to fix the cause by optimizing & integrating instead of
solving symptoms by adding backup searches.


3. bad or insufficient knowledge


Can't prevent it.. GIGO..

4. search algorithms that break in unanticipated ways in unanticipated 
places


The fact is that it's nearly impossible to develop a large bug-free
system. And as Brian Kernighan put it: "Debugging is twice as hard as
writing the code in the first place. Therefore, if you write the code
as cleverly as possible, you are, by definition, not smart enough to
debug it."
But again, you really want to fix the cause, not the symptoms.


Are you really sure you wish to rest the fate of the world on it?


No :). AGI(s) suggest solutions & people decide what to do.


integrity holes and conflicts in any system.  Further, limitations on

computation power will cause even more since it simply won't be
possible to even finish a small percentage of the clean-up that is
possible algorithmically.

The system may have many users who will be evaluating solutions they
requested. That will help with the clean-up + a lot can be IMO done to
support data

Re: [agi] Pure reason is a disease.

2007-05-03 Thread Mark Waser
 believing that you can stop all other sources of high level goals is . . . . 
 simply incorrect.
 IMO depends on design and on the nature & number of users involved.
:-)  Obviously.  But my point is that relying on the fact that you expect to be 
100% successful initially and therefore don't put as many back-up systems into 
place as possible is really foolish and dangerous.  I don't believe that simply 
removing emotions makes it any more likely to stop all other sources of high 
level goals.  Further, I believe that adding emotions *can* be effective in 
helping prevent unwanted high level goals.

 See, you had a conflict in your mind . . . . but I don't think it needs to be 
 that way for AGI. 

I strongly disagree.  An AGI is always going to be dealing with incomplete and 
conflicting information -- and, even if not, the computation required to learn 
(and remove all conflicting partial assumptions generated from learning) will 
take vastly more time than you're ever likely to get.  You need to expect a 
messy, ugly system that is not going to be 100% controllable but which needs to 
have a 100% GUARANTEE that it will not go outside certain limits.  This is 
eminently do-able I do believe -- but not by simply relying on logic to create 
a world model that is good enough to prevent it.

 Paul Ekman's list of emotions: anger, fear, sadness, happiness, disgust

So what is the emotion that would prevent you from murdering someone if you 
absolutely knew that you could get away with it?

human beings have two clear and distinct sources of morality -- both 
logical and emotional
 poor design from my perspective..
Why?  Having backup systems (particularly ones that perform critical tasks) 
seems like eminently *good* design to me.  I think that is actually the crux of 
our debate.  I believe that emotions are a necessary backup to prevent 
catastrophe.  You believe (if I understand correctly -- and please correct me 
if I'm wrong) that backup is not necessary and that having emotions is more 
likely to precipitate catastrophe.

I would strongly argue that an intelligence with well-designed feelings is 
far, far more likely to stay Friendly than an intelligence without feelings 
 AI without feelings (unlike its user) cannot really get unfriendly.
"Friendly" is a bad choice of terms since it normally denotes an emotion-linked 
state.  "Unfriendly" in this context merely means possessing a goal inimical to 
human goals.  An AI without feelings can certainly have goals inimical to human 
goals and therefore be unfriendly (just not be emotionally invested in it :-)

how giving a goal of avoid x is truly *different* from discomfort 
 It's the do vs NEED to do. 
 Discomfort requires an extra sensor supporting the ability to prefer on its 
 own.
So what is the mechanism that prioritizes sub-goals?  It clearly must 
discriminate between the candidates.  Doesn't that lead to a result that could 
be called a preference?
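
To make the point concrete, here is a rough sketch (the scoring terms are invented for illustration): any mechanism that ranks sub-goal candidates is already expressing a preference.

```python
# A scoring function over sub-goal candidates induces a preference ordering.
from typing import Dict, List

def priority(subgoal: Dict) -> float:
    # Hypothetical weighting of urgency, expected value, and estimated cost.
    return 2.0 * subgoal["urgency"] + subgoal["value"] - subgoal["cost"]

def prioritize(candidates: List[Dict]) -> List[Dict]:
    return sorted(candidates, key=priority, reverse=True)

candidates = [{"name": "refuel",  "urgency": 0.9, "value": 1.0, "cost": 0.2},
              {"name": "explore", "urgency": 0.1, "value": 2.0, "cost": 0.5}]
print([c["name"] for c in prioritize(candidates)])   # ['refuel', 'explore']
```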

Mark

- Original Message - 
  From: Jiri Jelinek 
  To: agi@v2.listbox.com 
  Sent: Thursday, May 03, 2007 1:57 AM
  Subject: Re: [agi] Pure reason is a disease.


  Mark,

  logic, when it relies upon single chain reasoning is relatively fragile. And 
when it rests upon bad assumptions, it can be just a roadmap to disaster.

  It all improves with learning. In my design (not implemented yet), AGI learns 
from stories and (assuming it learned enough) can complete incomplete stories. 

  e.g:
  Story name: $tory
  [1] Mark has $0.
  [2] ..[to be generated by AGI]..
  [3] Mark has $1M.

  As the number of learned/solved stories grows, better/different solutions can 
be generated.
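
A rough sketch of how such completion could work, assuming (as an illustration only, not necessarily the actual design) that stories are sequences of simple state assertions and that learned stories supply reusable before/after transitions:

```python
# Breadth-first chaining of learned story transitions from the given start
# state to the given goal state; returns the generated steps up to the goal.
from collections import deque
from typing import List, Tuple

def complete_story(start: str, goal: str,
                   learned: List[Tuple[str, str]],
                   max_depth: int = 5) -> List[str]:
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                      # chain of states ending at the goal
        if len(path) >= max_depth:
            continue
        for before, after in learned:
            if before == state and after not in seen:
                seen.add(after)
                frontier.append((after, path + [after]))
    return []                                # no completion found

# Toy transitions "learned" from earlier stories:
learned = [("Mark has $0", "Mark gets a job"),
           ("Mark gets a job", "Mark saves and invests"),
           ("Mark saves and invests", "Mark has $1M")]
print(complete_story("Mark has $0", "Mark has $1M", learned))
```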

  I believe that it is very possible (nay, very probable) for an Artificial 
Program Solver to end up with a goal that was not intended by you. 

  For emotion/feeling enabled AGI - possibly.
  For feeling-free AGI - only if it's buggy.

  Distinguish:
  a) given goals (e.g. the [3]) and
  b) generated sub-goals.

  In my system, there is an admin feature that can restrict both for 
lower-level users. Besides that, to control b), I go with subject-level and 
story-level user-controlled profiles (inheritance supported). For example, if 
Mark is linked to a "Life lover" profile that includes the "Never Kill" rule, 
the sub-goal queries just exclude the Kill action. Rule breaking would just 
cause invalid solutions nobody is interested in. I'm simplifying a bit, but, 
bottom line - both a) & b) can be controlled/restricted. 
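
As an illustration only (the profile names, the inheritance scheme and the "kill" action below are made up, not the actual schema), the restriction could work roughly like this:

```python
# Profiles carry forbidden actions and can inherit from other profiles;
# candidate sub-goals whose action is forbidden are excluded from the query.
from typing import Dict, List, Set

PROFILES: Dict[str, Dict] = {
    "base":       {"inherits": [], "forbidden_actions": set()},
    "life_lover": {"inherits": ["base"], "forbidden_actions": {"kill"}},
}

def forbidden_for(profile: str) -> Set[str]:
    spec = PROFILES[profile]
    actions = set(spec["forbidden_actions"])
    for parent in spec["inherits"]:
        actions |= forbidden_for(parent)     # inherited restrictions also apply
    return actions

def filter_subgoals(candidates: List[Dict], profile: str) -> List[Dict]:
    banned = forbidden_for(profile)
    return [c for c in candidates if c["action"] not in banned]

candidates = [{"action": "earn", "detail": "get a job"},
              {"action": "kill", "detail": "eliminate a rival"}]
print(filter_subgoals(candidates, "life_lover"))   # only the "earn" sub-goal survives
```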

  believing that you can stop all other sources of high level goals is . . . . 
simply incorrect.

  IMO depends on design and on the nature & number of users involved.

  Now, look at how I reacted to your initial e-mail.  My logic said "Cool!  
Let's go implement this."  My intuition/emotions said "Wait a minute.  There's 
something wonky here.  Even if I can't put my finger on it, maybe we'd better 
hold up until we can investigate this further."  Now -- which way would you 
like your Jupiter brain

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
Hi Jiri,

OK, I pondered it for a while and the answer is -- failure modes.

Your logic is correct.  If I were willing take all of your assumptions as 
always true, then I would agree with you.  However, logic, when it relies upon 
single chain reasoning is relatively fragile.  And when it rests upon bad 
assumptions, it can be just a roadmap to disaster.

I believe that it is very possible (nay, very probable) for an Artificial 
Program Solver to end up with a goal that was not intended by you.  This can 
happen in any number of ways, from incorrect reasoning in an imperfect world to 
robot-rights activists deliberately programming pro-robot goals into them.  
Your statement "Allowing other sources of high level goals = potentially asking 
for conflicts." is undoubtedly true, but believing that you can stop all other 
sources of high level goals is . . . . simply incorrect.

Now, look at how I reacted to your initial e-mail.  My logic said "Cool!  
Let's go implement this."  My intuition/emotions said "Wait a minute.  There's 
something wonky here.  Even if I can't put my finger on it, maybe we'd better 
hold up until we can investigate this further."  Now -- which way would you 
like your Jupiter brain to react?

Richard Loosemore has suggested on this list that Friendliness could also 
be implemented as a large number of loose constraints.  I view emotions as sort 
of operating this way and, in part, serving this purpose.  Further, recent 
brain research makes it quite clear that human beings have two clear and 
distinct sources of morality -- both logical and emotional 
(http://www.slate.com/id/2162998/pagenum/all/#page_start).  This is, in part, 
what I was thinking of when I listed "b) provide pre-programmed constraints 
(for when logical reasoning doesn't have enough information)" as one of the 
reasons why emotion was required.
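
A rough sketch of the loose-constraints idea (an illustration, not Loosemore's actual formulation; the plan fields and the threshold are invented): each constraint scores a candidate plan between 0 and 1, no single constraint is decisive, and a plan is rejected when the aggregate satisfaction falls too low.

```python
# Many weak constraints, aggregated: a plan must satisfy them well on average.
from typing import Callable, Dict, List

Constraint = Callable[[Dict], float]   # plan -> satisfaction in [0, 1]

def acceptable(plan: Dict, constraints: List[Constraint],
               threshold: float = 0.9) -> bool:
    if not constraints:
        return True
    score = sum(c(plan) for c in constraints) / len(constraints)
    return score >= threshold

# Two toy constraints over hypothetical plan fields:
no_harm  = lambda p: 0.0 if p.get("harms_humans") else 1.0
low_risk = lambda p: 1.0 - min(p.get("risk", 0.0), 1.0)

print(acceptable({"harms_humans": False, "risk": 0.05}, [no_harm, low_risk]))  # True
print(acceptable({"harms_humans": True,  "risk": 0.05}, [no_harm, low_risk]))  # False
```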

I would strongly argue that an intelligence with well-designed feelings is 
far, far more likely to stay Friendly than an intelligence without feelings -- 
and I would argue that there is substantial evidence for this as well in our 
perception of and stories about emotionless people.

Mark

P.S.  Great discussion.  Thank you.
  - Original Message - 
  From: Jiri Jelinek 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 6:21 PM
  Subject: Re: [agi] Pure reason is a disease.


  Mark,

  I understand your point but have an emotional/ethical problem with it. I'll 
have to ponder that for a while.

  Try to view our AI as an extension of our intelligence rather than 
purely-its-own-kind. 


   For humans - yes, for our artificial problem solvers - emotion is a 
disease.

  What if the emotion is solely there to enforce our goals?
  Or maybe better == Not violate our constraints = comfortable, violate our 
constraints = feel discomfort/sick/pain.

  Intelligence is meaningless without discomfort. Unless your PC gets some sort 
of feel card, it cannot really prefer, cannot set goal(s), and cannot have 
hard feelings about working extremely hard for you. You can a) spend time 
figuring out how to build the card, build it, plug it in, and (with potential 
risks) tune it to make it friendly enough so it will actually come up with 
goals that are compatible enough with your goals *OR* b) you can simply tell 
your feeling-free AI what problems you want it to work on. Your choice.. I 
hope we are eventually not gonna end up asking the b) solutions how to clean 
up a great mess caused by the a) solutions. 

  Best,
  Jiri Jelinek


  On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:
 emotions.. to a) provide goals.. b) provide pre-programmed constraints, 
and c) enforce urgency.
 Our AI = our tool = should work for us = will get high level goals (+ 
urgency info and constraints) from us. Allowing other sources of high level 
goals = potentially asking for conflicts.  For sub-goals, AI can go with 
reasoning.

Hmmm.  I understand your point but have an emotional/ethical problem with 
it.  I'll have to ponder that for a while.

 For humans - yes, for our artificial problem solvers - emotion is a 
disease.

What if the emotion is solely there to enforce our goals?  Fulfill our 
goals = be happy, fail at our goals = be *very* sad.  Or maybe better == Not 
violate our constraints = comfortable, violate our constraints = feel 
discomfort/sick/pain.


  - Original Message - 
  From: Jiri Jelinek 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 2:29 PM 
  Subject: Re: [agi] Pure reason is a disease.


  emotions.. to a) provide goals.. b) provide pre-programmed constraints, 
and c) enforce urgency.

  Our AI = our tool = should work for us = will get high level goals (+ 
urgency info and constraints) from us. Allowing other sources of high level 
goals = potentially asking for conflicts. For sub-goals, AI can go with 
reasoning. 

  Pure reason is a disease

  For humans - yes, for our

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
Hi again,

A few additional random comments . . . . :-)

 Intelligence is meaningless without discomfort.

I would rephrase this as (or subsume this under) "intelligence is 
meaningless without goals" -- because discomfort is simply something that sets 
up a goal of "avoid me".

But then, there is the question of how giving a goal of "avoid x" is truly 
*different* from discomfort (other than the fact that discomfort is normally 
envisioned as always spreading out to have a global effect -- even when not 
appropriate -- while goals are generally envisioned to have only logical 
effects -- which is, of course, a very dangerous assumption).
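
To illustrate the contrast (a made-up scoring scheme, purely for illustration): a plain "avoid x" goal penalizes only plans that involve x, whereas a discomfort signal acts globally, suppressing every option's score until it decays.

```python
# Local effect vs. global effect of an avoidance signal on plan scoring.
from typing import Dict

def goal_score(plan: Dict, base: float) -> float:
    # "Avoid x" as a goal: penalize only plans that actually involve x.
    return base - (10.0 if "x" in plan.get("involves", []) else 0.0)

def discomfort_score(plan: Dict, base: float, discomfort: float) -> float:
    # Discomfort as a state: the current level suppresses every plan's score.
    return base * (1.0 - discomfort)

print(goal_score({"involves": ["y"]}, base=5.0))           # unaffected: 5.0
print(discomfort_score({"involves": ["y"]}, 5.0, 0.6))     # suppressed: 2.0
```
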
  - Original Message - 
  From: Jiri Jelinek 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 6:21 PM
  Subject: Re: [agi] Pure reason is a disease.


  Mark,

  I understand your point but have an emotional/ethical problem with it. I'll 
have to ponder that for a while.

  Try to view our AI as an extension of our intelligence rather than 
purely-its-own-kind. 


   For humans - yes, for our artificial problem solvers - emotion is a 
disease.

  What if the emotion is solely there to enforce our goals?
  Or maybe better == Not violate our constraints = comfortable, violate our 
constraints = feel discomfort/sick/pain.

  Intelligence is meaningless without discomfort. Unless your PC gets some sort 
of feel card, it cannot really prefer, cannot set goal(s), and cannot have 
hard feelings about working extremely hard for you. You can a) spend time 
figuring out how to build the card, build it, plug it in, and (with potential 
risks) tune it to make it friendly enough so it will actually come up with 
goals that are compatible enough with your goals *OR* b) you can simply tell 
your feeling-free AI what problems you want it to work on. Your choice.. I 
hope we are eventually not gonna end up asking the b) solutions how to clean 
up a great mess caused by the a) solutions. 

  Best,
  Jiri Jelinek


  On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:
 emotions.. to a) provide goals.. b) provide pre-programmed constraints, 
and c) enforce urgency.
 Our AI = our tool = should work for us = will get high level goals (+ 
urgency info and constraints) from us. Allowing other sources of high level 
goals = potentially asking for conflicts.  For sub-goals, AI can go with 
reasoning.

Hmmm.  I understand your point but have an emotional/ethical problem with 
it.  I'll have to ponder that for a while.

 For humans - yes, for our artificial problem solvers - emotion is a 
disease.

What if the emotion is solely there to enforce our goals?  Fulfill our 
goals = be happy, fail at our goals = be *very* sad.  Or maybe better == Not 
violate our constraints = comfortable, violate our constraints = feel 
discomfort/sick/pain.


  - Original Message - 
  From: Jiri Jelinek 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 2:29 PM 
  Subject: Re: [agi] Pure reason is a disease.


  emotions.. to a) provide goals.. b) provide pre-programmed constraints, 
and c) enforce urgency.

  Our AI = our tool = should work for us = will get high level goals (+ 
urgency info and constraints) from us. Allowing other sources of high level 
goals = potentially asking for conflicts. For sub-goals, AI can go with 
reasoning. 

  Pure reason is a disease

  For humans - yes, for our artificial problem solvers - emotion is a 
disease.

  Jiri Jelinek


  On 5/1/07, Mark Waser  [EMAIL PROTECTED] wrote: 
 My point, in that essay, is that the nature of human emotions is 
rooted in the human brain architecture, 

I'll agree that human emotions are rooted in human brain 
architecture but there is also the question -- is there something analogous to 
emotion which is generally necessary for *effective* intelligence?  My answer 
is a qualified but definite yes since emotion clearly serves a number of 
purposes that apparently aren't otherwise served (in our brains) by our pure 
logical reasoning mechanisms (although, potentially, there may be something 
else that serves those purposes equally well).  In particular, emotions seem 
necessary (in humans) to a) provide goals, b) provide pre-programmed 
constraints (for when logical reasoning doesn't have enough information), and 
c) enforce urgency.

Without looking at these things that emotions provide, I'm not sure 
that you can create an *effective* general intelligence (since these roles need 
to be filled by *something*).

 Because of the difference mentioned in the prior paragraph, the 
rigid distinction between emotion and reason that exists in the human brain 
will not exist in a well-designed AI.

Which is exactly why I was arguing that emotions and reason (or 
feeling and thinking) were a spectrum rather than a dichotomy.


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Eric Baum

 My point, in that essay, is that the nature of human emotions is rooted in 
 the human brain architecture, 
Mark I'll agree that human emotions are rooted in human brain
Mark architecture but there is also the question -- is there
Mark something analogous to emotion which is generally necessary for
Mark *effective* intelligence?  My answer is a qualified but definite
Mark yes since emotion clearly serves a number of purposes that
Mark apparently aren't otherwise served (in our brains) by our pure
Mark logical reasoning mechanisms (although, potentially, there may
Mark be something else that serves those purposes equally well).  In
Mark particular, emotions seem necessary (in humans) to a) provide
Mark goals, b) provide pre-programmed constraints (for when logical
Mark reasoning doesn't have enough information), and c) enforce
Mark urgency.

My view is that emotions are systems programmed in by the genome to
cause the computational machinery to pursue ends of interest to
evolution, namely those relevant to leaving grandchildren.



Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser

My view is that emotions are systems programmed in by the genome to
cause the computational machinery to pursue ends of interest to
evolution, namely those relevant to leaving grandchildren.


I would concur and rephrase it as follows:  Human emotions are hard-coded 
goals that were implemented/selected through the force of evolution --  
and it's hard to argue with long-term evolution.


- Original Message - 
From: Eric Baum [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, May 02, 2007 11:04 AM
Subject: Re: [agi] Pure reason is a disease.




My point, in that essay, is that the nature of human emotions is rooted 
in the human brain architecture,

Mark I'll agree that human emotions are rooted in human brain
Mark architecture but there is also the question -- is there
Mark something analogous to emotion which is generally necessary for
Mark *effective* intelligence?  My answer is a qualified but definite
Mark yes since emotion clearly serves a number of purposes that
Mark apparently aren't otherwise served (in our brains) by our pure
Mark logical reasoning mechanisms (although, potentially, there may
Mark be something else that serves those purposes equally well).  In
Mark particular, emotions seem necessary (in humans) to a) provide
Mark goals, b) provide pre-programmed constraints (for when logical
Mark reasoning doesn't have enough information), and c) enforce
Mark urgency.

My view is that emotions are systems programmed in by the genome to
cause the computational machinery to pursue ends of interest to
evolution, namely those relevant to leaving grandchildren.



Re: [agi] Pure reason is a disease.

2007-05-02 Thread Jiri Jelinek
 and, in part, serving this purpose.  Further,
recent brain research makes it quite clear that human beings have two clear
and distinct sources of morality -- both logical and emotional (
http://www.slate.com/id/2162998/pagenum/all/#page_start).  This is, in
part, what I was thinking of when I listed b) provide pre-programmed
constraints (for when logical reasoning doesn't have enough information) as
one of the reasons why emotion was required.

I would strongly argue that an intelligence with well-designed
feelings is far, far more likely to stay Friendly than an intelligence
without feelings -- and I would argue that there is substantial evidence for
this as well in our perception of and stories about emotionless people.

Mark

P.S.  Great discussion.  Thank you.

- Original Message -
*From:* Jiri Jelinek [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Tuesday, May 01, 2007 6:21 PM
*Subject:* Re: [agi] Pure reason is a disease.

Mark,

I understand your point but have an emotional/ethical problem with it. I'll
have to ponder that for a while.

Try to view our AI as an extension of our intelligence rather than
purely-its-own-kind.

 For humans - yes, for our artificial problem solvers - emotion is a
disease.
What if the emotion is solely there to enforce our goals?
Or maybe better == Not violate our constraints = comfortable, violate
our constraints = feel discomfort/sick/pain.

Intelligence is meaningless without discomfort. Unless your PC gets some
sort of feel card, it cannot really prefer, cannot set goal(s), and cannot
have hard feelings about working extremely hard for you. You can a) spend
time figuring out how to build the card, build it, plug it in, and (with
potential risks) tune it to make it friendly enough so it will actually come
up with goals that are compatible enough with your goals *OR* b) you can
simply tell your feeling-free AI what problems you want it to work on.
Your choice.. I hope we are eventually not gonna end up asking the b)
solutions how to clean up a great mess caused by the a) solutions.

Best,
Jiri Jelinek

On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:

   emotions.. to a) provide goals.. b) provide pre-programmed
 constraints, and c) enforce urgency.
  Our AI = our tool = should work for us = will get high level goals (+
 urgency info and constraints) from us. Allowing other sources of high level
 goals = potentially asking for conflicts.  For sub-goals, AI can go with
 reasoning.

 Hmmm.  I understand your point but have an emotional/ethical problem
 with it.  I'll have to ponder that for a while.

  For humans - yes, for our artificial problem solvers - emotion is a
 disease.
 What if the emotion is solely there to enforce our goals?  Fulfill our
 goals = be happy, fail at our goals = be *very* sad.  Or maybe better ==
 Not violate our constraints = comfortable, violate our constraints = feel
 discomfort/sick/pain.

  - Original Message -
 *From:* Jiri Jelinek [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
  *Sent:* Tuesday, May 01, 2007 2:29 PM
 *Subject:* Re: [agi] Pure reason is a disease.

 emotions.. to a) provide goals.. b) provide pre-programmed constraints,
 and c) enforce urgency.

 Our AI = our tool = should work for us = will get high level goals (+
 urgency info and constraints) from us. Allowing other sources of high level
 goals = potentially asking for conflicts. For sub-goals, AI can go with
 reasoning.

 Pure reason is a disease

 For humans - yes, for our artificial problem solvers - emotion is a
 disease.

 Jiri Jelinek

  On 5/1/07, Mark Waser  [EMAIL PROTECTED] wrote:

My point, in that essay, is that the nature of human emotions is
  rooted in the human brain architecture,
 
  I'll agree that human emotions are rooted in human brain
  architecture but there is also the question -- is there something analogous
  to emotion which is generally necessary for *effective* intelligence?  My
  answer is a qualified but definite yes since emotion clearly serves a number
  of purposes that apparently aren't otherwise served (in our brains) by our
  pure logical reasoning mechanisms (although, potentially, there may be
  something else that serves those purposes equally well).  In particular,
  emotions seem necessary (in humans) to a) provide goals, b) provide
  pre-programmed constraints (for when logical reasoning doesn't have enough
  information), and c) enforce urgency.
 
  Without looking at these things that emotions provide, I'm not
  sure that you can create an *effective* general intelligence (since these
  roles need to be filled by *something*).
 
   Because of the difference mentioned in the prior paragraph, the
  rigid distinction between emotion and reason that exists in the human brain
  will not exist in a well-designed AI.
 
  Which is exactly why I was arguing that emotions and reason (or
  feeling and thinking) were a spectrum rather than a dichotomy.
 
   - Original Message -
  *From:* Benjamin

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel

Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...

I think the dichotomy btw feeling and thinking is a consequence of the
limited reflective capabilities of the human brain...  I wrote about this in
The Hidden Pattern, and an earlier brief essay on the topic is here:

http://www.goertzel.org/dynapsyc/2004/Emotions.htm

-- Ben G

On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:


 From the Boston Globe (
http://www.boston.com/news/education/higher/articles/2007/04/29/hearts__minds/?page=full
)

Antonio Damasio, a neuroscientist at USC, has played a pivotal role in
challenging the old assumptions and establishing emotions as an important
scientific subject. When Damasio first published his results in the early
1990s, most cognitive scientists assumed that emotions interfered with
rational thought. A person without any emotions should be a better thinker,
since their cortical computer could process information without any
distractions.

But Damasio sought out patients who had suffered brain injuries that
prevented them from perceiving their own feelings, and put this idea to the
test. The lives of these patients quickly fell apart, he found, because they
could not make effective decisions. Some made terrible investments and ended
up bankrupt; most just spent hours deliberating over irrelevant details,
such as where to eat lunch. These results suggest that proper thinking
requires feeling. Pure reason is a disease.

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
 Well, this tells you something interesting about the human cognitive 
 architecture, but not too much about intelligence in general...

How do you know that it doesn't tell you much about intelligence in general?  
That was an incredibly dismissive statement.  Can you justify it?

 I think the dichotomy btw feeling and thinking is a consequence of the 
 limited reflective capabilities of the human brain...  

I don't believe that there is a true dichotomy between thinking and feeling.  I 
think that it is a spectrum that, in the case of humans, is weighted towards 
the ends (and I could give reasons why I believe it has happened this way) but 
which, in an ideal world/optimized entity, would be continuous. 

- Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 11:05 AM
  Subject: Re: [agi] Pure reason is a disease.



  Well, this tells you something interesting about the human cognitive 
architecture, but not too much about intelligence in general...

  I think the dichotomy btw feeling and thinking is a consequence of the 
limited reflective capabilities of the human brain...  I wrote about this in 
The Hidden Pattern, and an earlier brief essay on the topic is here: 

  http://www.goertzel.org/dynapsyc/2004/Emotions.htm

  -- Ben G


  On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:
From the Boston Globe ( 
http://www.boston.com/news/education/higher/articles/2007/04/29/hearts__minds/?page=full)

Antonio Damasio, a neuroscientist at USC, has played a pivotal role in 
challenging the old assumptions and establishing emotions as an important 
scientific subject. When Damasio first published his results in the early 
1990s, most cognitive scientists assumed that emotions interfered with rational 
thought. A person without any emotions should be a better thinker, since their 
cortical computer could process information without any distractions.

But Damasio sought out patients who had suffered brain injuries that 
prevented them from perceiving their own feelings, and put this idea to the 
test. The lives of these patients quickly fell apart, he found, because they 
could not make effective decisions. Some made terrible investments and ended up 
bankrupt; most just spent hours deliberating over irrelevant details, such as 
where to eat lunch. These results suggest that proper thinking requires 
feeling. Pure reason is a disease.




Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel

On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:


  Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...

How do you know that it doesn't tell you much about intelligence in
general?  That was an incredibly dismissive statement.  Can you justify it?




Well I tried to in the essay that I pointed to in my response.

My point, in that essay, is that the nature of human emotions is rooted in
the human brain architecture, according to which our systemic physiological
responses to cognitive phenomena (emotions) are rooted in primitive parts
of the brain that we don't have much conscious introspection into.  So, we
actually can't reason about the intermediate conclusions that go into our
emotional reactions very easily, because the conscious, reasoning parts of
our brains don't have the ability to look into the intermediate results
stored and manipulated within the more primitive emotionally reacting
parts of the brain.  So our deliberative consciousness has choice of either

-- accepting not-very-thoroughly-analyzable outputs from the emotional parts
of the brain

or

-- rejecting them

and doesn't have the choice to focus deliberative attention on the
intermediate steps used by the emotional brain to arrive at its conclusions.

Of course, through years of practice one can learn to bring more and more of
the emotional brain's operations into the scope of conscious deliberation,
but one can never do this completely due to the structure of the human
brain.

On the other hand, an AI need not have the same restrictions.  An AI should
be able to introspect into the intermediary conclusions and manipulations
used to arrive at its feeling responses.  Yes there are restrictions on
the amount of introspection possible, imposed by computational resource
limitations; but this is different than the blatant and severe architectural
restrictions imposed by the design of the human brain.

Because of the difference mentioned in the prior paragraph, the rigid
distinction between emotion and reason that exists in the human brain will
not exist in a well-designed AI.

Sorry for not giving references regarding my analysis of the human
cognitive/neural system -- I have read them but don't have the reference
list at hand. Some (but not a thorough list) are given in the article I
referenced before.

-- Ben G


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
 My point, in that essay, is that the nature of human emotions is rooted in 
 the human brain architecture, 

I'll agree that human emotions are rooted in human brain architecture but 
there is also the question -- is there something analogous to emotion which is 
generally necessary for *effective* intelligence?  My answer is a qualified but 
definite yes since emotion clearly serves a number of purposes that apparently 
aren't otherwise served (in our brains) by our pure logical reasoning 
mechanisms (although, potentially, there may be something else that serves 
those purposes equally well).  In particular, emotions seem necessary (in 
humans) to a) provide goals, b) provide pre-programmed constraints (for when 
logical reasoning doesn't have enough information), and c) enforce urgency.

Without looking at these things that emotions provide, I'm not sure that 
you can create an *effective* general intelligence (since these roles need to 
be filled by *something*).

 Because of the difference mentioned in the prior paragraph, the rigid 
 distinction between emotion and reason that exists in the human brain will 
 not exist in a well-designed AI.

Which is exactly why I was arguing that emotions and reason (or feeling and 
thinking) were a spectrum rather than a dichotomy.


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 1:05 PM
  Subject: Re: [agi] Pure reason is a disease.





  On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:
 Well, this tells you something interesting about the human cognitive 
architecture, but not too much about intelligence in general...

How do you know that it doesn't tell you much about intelligence in 
general?  That was an incredibly dismissive statement.  Can you justify it?


  Well I tried to in the essay that I pointed to in my response.

  My point, in that essay, is that the nature of human emotions is rooted in 
the human brain architecture, according to which our systemic physiological 
responses to cognitive phenomena (emotions) are rooted in primitive parts of 
the brain that we don't have much conscious introspection into.  So, we 
actually can't reason about the intermediate conclusions that go into our 
emotional reactions very easily, because the conscious, reasoning parts of 
our brains don't have the ability to look into the intermediate results stored 
and manipulated within the more primitive emotionally reacting parts of the 
brain.  So our deliberative consciousness has choice of either 

  -- accepting not-very-thoroughly-analyzable outputs from the emotional parts 
of the brain

  or

  -- rejecting them

  and doesn't have the choice to focus deliberative attention on the 
intermediate steps used by the emotional brain to arrive at its conclusions. 

  Of course, through years of practice one can learn to bring more and more of 
the emotional brain's operations into the scope of conscious deliberation, but 
one can never do this completely due to the structure of the human brain. 

  On the other hand, an AI need not have the same restrictions.  An AI should 
be able to introspect into the intermediary conclusions and manipulations used 
to arrive at its feeling responses.  Yes there are restrictions on the amount 
of introspection possible, imposed by computational resource limitations; but 
this is different than the blatant and severe architectural restrictions 
imposed by the design of the human brain. 

  Because of the difference mentioned in the prior paragraph, the rigid 
distinction between emotion and reason that exists in the human brain will not 
exist in a well-designed AI.

  Sorry for not giving references regarding my analysis of the human 
cognitive/neural system -- I have read them but don't have the reference list 
at hand. Some (but not a thorough list) are given in the article I referenced 
before. 

  -- Ben G

