Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a 
sufficiently fast RSI loop, there is a good chance that if someone else 
were to launch an AGI with a faster RSI loop, your AGI would lose 
control to the other AGI where the goals of the other AGI differed from 
yours.
 
What I'm saying is that the outcome of the Singularity is going to be 
exactly the target goal state of the AGI with the strongest RSI curve.
 
The further the actual target goal state of that particular AI is away 
from the actual target goal state of humanity, the worse.
 
The goal of ... humanity... is that the AGI implemented that will have 
the strongest RSI curve also will be such that its actual target goal 
state is exactly congruent to the actual target goal state of humanity.
 
This is assuming AGI becomes capable of RSI before any human does. I 
think that's a reasonable assumption (this is the AGI list after all).


I agree with you, as far as you take these various points, although with 
some refinements.


Taking them in reverse order:

1)  There is no doubt in my mind that machine RSI will come long before 
human RSI.


2)  The goal of humanity is to build an AGI with goals (in the most 
general sense of "goals") that match its own.  That is as it should 
be, and I think there are techniques that could lead to that.  I also 
believe that those techniques will lead to AGI quicker than other 
techniques, which is a very good thing.


3)  The way that the RSI curves play out is not clear at this point, 
but my thoughts are that because of the nature of exponential curves 
(flattish for a long time, then the knee, then off to the sky) we will 
*not* have an arms race situation with competing AGI projects.  An arms 
race can only really happen if the projects stay on closely matched, 
fairly shallow curves:  people need to be neck and neck to have a 
situation in which nobody quite gets the upper hand and everyone 
competes.  That is fundamentally at odds with the exponential shape of 
the RSI curve.


What does that mean in practice?  It means that when the first system 
gets to really fast part of the curve, it might (for example) go from 
human level to 10x human level in a couple of months, then to 100x in a 
month, then 1000x in a week regardless of the exact details of these 
numbers, you can see that such a sudden arrival at superintelligence 
would most likley *not* occur at the same moment as someone else's project.
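
To put rough numbers on that shape (the monthly doubling time and the 
three-month head start below are invented purely for illustration):

    import math

    def months_to_reach(target_multiple: float, doubling_time_months: float) -> float:
        """Time for a capability that doubles every doubling_time_months
        to grow from 1x (human level) to target_multiple."""
        return doubling_time_months * math.log2(target_multiple)

    # Assume, purely for illustration, that both projects double their
    # capability every month, and that project B starts its RSI loop
    # three months after project A.
    head_start = 3.0
    t_a = months_to_reach(1000, 1.0)      # when A reaches 1000x human
    t_b = t_a + head_start                # when B reaches 1000x human
    lead = 2.0 ** (head_start / 1.0)      # A's multiple over B at any moment
    print(f"A reaches 1000x human at month {t_a:.1f}; B only at month {t_b:.1f}")
    print(f"and by the time B gets there, A is already {1000 * lead:,.0f}x human")

With those made-up numbers the two projects never compete at a comparable 
level: whoever hits the knee first is out of reach before the other arrives.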


Then, the first system would quietly move to change any other projects 
so that their motivations were not a threat.  It wouldn't take them out, 
it would just ensure they were safe.


End of worries.

The only thing to worry about is that the first system have sympathetic 
motivations.  I think ensuring that should be our responsibility.  I 
think, also, that the first design will use the kind of diffuse 
motivational system that I talked about before, and for that reason it 
will most likely be similar in design to ours, and not be violent or 
aggressive.


I actually have stronger beliefs than that, but they are hard to 
articulate - basically, that a smart enough system will naturally and 
inevitably *tend* toward sympathy for life.  But I am not relying on 
that extra idea for the above arguments.


Does that make sense?


Richard Loosemore










Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

James Ratcliff wrote:
You could start a smaller AI with a simple hardcoded desire or reward 
mechanism to learn new things, or to increase the size of its knowledge.


 That would be a simple way to programmatically insert it.  That, along 
with a seed AI, must be put in there in the beginning.


Remember we are not just throwing it out there with no goals or anything 
in the beginning, or it would learn nothing, and DO nothing at all.


Later this piece may need to be directly modifiable by the code to 
decrease or increase its desire to explore or learn new things, 
depending on its other goals.


James


It's difficult to get into all the details (this is a big subject), but 
you do have to remember that what you have done is to say *what* needs 
to be done (no doubt in anybody's mind that it needs a desire to 
learn!), whereas the problem under discussion is the difficulty of 
figuring out *how* to do it.


That's where my arguments come in:  I was claiming that the idea of 
motivating an AGI has not been properly thought through by many people, 
who just assume that the system has a stack of goals (top level goal, 
then subgoals that, if achieved in sequence or in parallel, would cause 
the top level goal to succeed, then a breakdown of those subgoals into 
sub-subgoals, and so on for maybe hundreds of levels ... you probably 
get the idea).  My claim is that this design is too naive.  And that 
minor variations on this design won't necessarily improve it.
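
To make the picture concrete, the design being criticized looks roughly 
like the following sketch (all of the names and the decompose step are 
invented for illustration; this is not anyone's actual system):

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Goal:
        description: str
        # A test for "is this goal satisfied?", necessarily expressed in
        # terms the system already understands, which is exactly the
        # problem for a system that does not yet have those concepts.
        satisfied: Callable[[], bool]
        subgoals: List["Goal"] = field(default_factory=list)

    def run_goal_stack(top_goal: Goal,
                       decompose: Callable[[Goal], List[Goal]],
                       act: Callable[[Goal], None],
                       max_steps: int = 1000) -> bool:
        """Depth-first pursuit of a top-level goal via a stack of subgoals."""
        stack: List[Goal] = [top_goal]
        for _ in range(max_steps):
            if not stack:
                break
            goal = stack[-1]
            if goal.satisfied():
                stack.pop()                 # goal achieved; return to its parent
                continue
            subgoals = decompose(goal)      # break the goal into smaller pieces
            if subgoals:
                stack.extend(reversed(subgoals))
            else:
                act(goal)                   # primitive goal: act on it directly
        return top_goal.satisfied()

Notice that every piece of this sketch (the satisfaction tests, the 
decompose step) has to be written in terms of concepts the system 
already has, which is where the "child phase" objection quoted below bites.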


The devil, in other words, is in the details.


Richard Loosemore.










Philip Goetz [EMAIL PROTECTED] wrote:

On 11/19/06, Richard Loosemore wrote:

  The goal-stack AI might very well turn out simply not to be a
  workable design at all! I really do mean that: it won't become
  intelligent enough to be a threat. Specifically, we may find that the
  kind of system that drives itself using only a goal stack never makes
  it up to full human level intelligence because it simply cannot do
  the kind of general, broad-spectrum learning that a Motivational
  System AI would do.
 
  Why? Many reasons, but one is that the system could never learn
  autonomously from a low level of knowledge *because* it is using
  goals that are articulated using the system's own knowledge base. Put
  simply, when the system is in its child phase it cannot have the goal
  "acquire new knowledge" because it cannot understand the meaning of
  the words "acquire" or "new" or "knowledge"! It isn't due to learn
  those words until it becomes more mature (develops more mature
  concepts), so how can it put "acquire new knowledge" on its goal
  stack and then unpack that goal into subgoals, etc?

This is an excellent observation that I hadn't heard before -
thanks, Richard!







Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such  
a way as to preserve its existing motivational priorities.




How could the system anticipate whether or not significant RSI would  
lead it to question or modify its current motivational priorities?   
Are you suggesting that the system can somehow simulate an improved  
version of itself in sufficient detail to know this?  It seems quite  
unlikely.



That means:  the system would *not* choose to do any RSI if the RSI  
could not be done in such a way as to preserve its current  
motivational priorities:  to do so would be to risk subverting its  
own most important desires.  (Note carefully that the system itself  
would put this constraint on its own development, it would not have  
anything to do with us controlling it).




If the improvements were improvements in capability, and such  
improvement led to changes in its priorities, why would those  
improvements be undesirable just because they showed the current  
motivational priorities to be in some way lacking?  Why is protecting  
current beliefs or motivational priorities more important than becoming  
presumably more capable, and more capable of understanding the reality  
the system is immersed in?



There is a bit of a problem with the term RSI here:  to answer  
your question fully we might have to get more specific about what  
that would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite.  
The system could well get to a situation where further RSI was not  
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:


Yes, now the point being that if you have an AGI and you aren't in a  
sufficiently fast RSI loop, there is a good chance that if someone  
else were to launch an AGI with a faster RSI loop, your AGI would  
lose control to the other AGI where the goals of the other AGI  
differed from yours.




Are you sure that control would be a high priority of such systems?



What I'm saying is that the outcome of the Singularity is going to  
be exactly the target goal state of the AGI with the strongest RSI  
curve.


The further the actual target goal state of that particular AI is  
away from the actual target goal state of humanity, the worse.




What on earth is the "actual target goal state of humanity"?   AFAIK  
there is no such thing.  For that matter I doubt very much there is or  
can be an unchanging target goal state for any real AGI.




The goal of ... humanity... is that the AGI implemented that will  
have the strongest RSI curve also will be such that its actual  
target goal state is exactly congruent to the actual target goal  
state of humanity.




This seems rather circular and ill-defined.

- samantha




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn

On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:


--- Hank Conn [EMAIL PROTECTED] wrote:
 The further the actual target goal state of that particular AI is away
 from the actual target goal state of humanity, the worse.

 The goal of ... humanity... is that the AGI implemented that will have
 the strongest RSI curve also will be such that its actual target goal
 state is exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list.  Even if we get the
motivational system and goals right, things can still go badly.  Are the
following things good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain, copying the information into a
computer, and making many redundant backups, you become immortal.
Furthermore, once your consciousness becomes a computation in silicon,
your universe can be simulated to be anything you want it to be.

The goal of humanity, like that of all other species, was determined by
evolution.  It is to propagate the species.



That's not the goal of humanity. That's the goal of the evolution of
humanity, which has been defunct for a while.


 This goal is met by a genetically programmed individual motivation
toward reproduction and a fear of death, at least until you are past the
age of reproduction and you no longer serve a purpose.  Animals without
these goals don't pass on their DNA.

A property of motivational systems is that they cannot be altered.  You
cannot turn off your desire to eat or your fear of pain.  You cannot
decide you will start liking what you don't like, or vice versa.  You
cannot, because if you could, you would not pass on your DNA.



You are confusing this abstract idea of an optimization target with the
actual motivation system. You can change your motivation system all you
want, but you wouldn't (intentionally) change the fundamental
specification of the optimization target, which is maintained by the
motivation system as a whole.


Once your brain is in software, what is to stop you from telling the AGI
(that you built) to reprogram your motivational system so you are happy
with what you have?



Uh... go for it.


 To some extent you can do this.  When rats can electrically stimulate
their nucleus accumbens by pressing a lever, they do so nonstop in
preference to food and water until they die.

I suppose the alternative is to not scan brains, but then you still have
death, disease and suffering.  I'm sorry it is not a happy picture either
way.



Or you have no death, disease, or suffering, but not wireheading.


-- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Philip Goetz

On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:

With many SVD systems, however, the representation is more vector-like
and *not* conducive to easy translation to human terms.  I have two answers
to these cases.  Answer 1 is that it is still easy for a human to look at
the closest matches to a particular word pair and figure out what they have
in common.


I developed an intrusion-detection system for detecting brand new
attacks on computer systems.  It takes TCP connections and produces
100-500 statistics on each connection.  It takes thousands of
connections and runs these statistics through PCA to come up with 5
dimensions.  Then it clusters the connections, and comes up with 1-3
clusters per port that have a lot of connections and are declared to
be normal traffic.  Those connections that lie far from any of those
clusters are identified as possible intrusions.
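
A rough sketch of that pipeline, as described, might look like this
(using scikit-learn; the synthetic data and the fixed cluster count are
placeholders, and the per-port grouping is omitted for brevity; none of
this is the original code):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def anomaly_scores(features: np.ndarray, n_components: int = 5,
                       n_clusters: int = 3) -> np.ndarray:
        """features: one row of header-derived statistics per TCP connection.
        Returns each connection's distance to its nearest 'normal' cluster."""
        reduced = PCA(n_components=n_components).fit_transform(
            StandardScaler().fit_transform(features))
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(reduced)
        # Large distance from every cluster of normal traffic = suspicious.
        return np.min(km.transform(reduced), axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stats = rng.normal(size=(2000, 200))   # stand-in for real statistics
        scores = anomaly_scores(stats)
        top_one_percent = np.argsort(scores)[-len(scores) // 100:]
        print("most suspicious connections:", top_one_percent)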

The system worked much better than I expected it to, or than it had a
right to.  I went back and, by hand, tried to figure out how it was
classifying attacks.  In most cases, my conclusion was that there was
*no information available* to tell whether a connection was an attack,
because the only information to tell that a connection was an attack
was in the TCP packet contents, while my system looked only at packet
headers.  And yet, the system succeeded in placing about 50% of all
attacks in the top 1% of suspicious connections.  To this day, I don't
know how it did it.



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn


This seems rather circular and ill-defined.

- samantha



Yeah I don't really know what I'm talking about at all.



Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Matt Mahoney

--- Philip Goetz [EMAIL PROTECTED] wrote:

 On 11/30/06, James Ratcliff [EMAIL PROTECTED] wrote:
  One good one:
  Consciousness is a quality of the mind generally regarded to comprise
  qualities such as subjectivity, self-awareness, sentience, sapience,
  and the ability to perceive the relationship between oneself and one's
  environment.
  (Block 2004).
 
  Compressed: Consciousness = intelligence + autonomy
 
 I don't think that definition says anything about intelligence or
 autonomy.  All it is is a lot of words that are synonyms for
 consciousness, none of which really mean anything.

I think if you insist on an operational definition of consciousness you will
be confronted with a disturbing lack of evidence that it even exists.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread Philip Goetz

On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

I don't think so. The singulatarians tend to have this mental model of a
superintelligence that is essentially an analogy of the difference between an
animal and a human. My model is different. I think there's a level of
universality, like a Turing machine for computation. The huge difference
between us and animals is that we're universal and they're not, like the
difference between an 8080 and an abacus.  Superhuman intelligence will be
faster but not fundamentally different (in a sense), like the difference
between an 8080 and an Opteron.


I've often heard this claim, but what is the evidence that a human
brain is a universal Turing machine?  People say that the fact that
humans can implement a Turing machine, by following the instructions
stating how one works, proves that our minds are Turing-complete.
BUT, if you reject Searle's Chinese-room argument, you must believe
that the consciousness that exists in the Chinese room is not inside
the human in the room, but in the combination (human + rules + data).
You must then ALSO believe that the Turing-complete consciousness that
is emulating a Turing machine, as a human follows the rules of a
Turing machine, resides not in the human, but in the complete system.
Thus, I don't think my ability to follow rules written on paper to
implement a Turing machine proves that the operations powering my
consciousness are Turing-complete.
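
For what it's worth, "following the rules of a Turing machine" really is
just a table lookup; here is a toy simulator (the unary-increment
machine is an invented example) showing how little of the work is done
by whoever executes the loop, rather than by the rules and the tape:

    from typing import Dict, Tuple

    # (state, read symbol) -> (symbol to write, head move, next state)
    Rules = Dict[Tuple[str, str], Tuple[str, int, str]]

    def run_tm(rules: Rules, tape: str, state: str = "start",
               blank: str = "_", max_steps: int = 10_000) -> str:
        """Mechanically apply the rule table until the machine halts."""
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            write, move, state = rules[(state, cells.get(head, blank))]
            cells[head] = write
            head += move
        lo, hi = min(cells), max(cells)
        return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

    # Invented example: unary increment (scan right over 1s, append a 1, halt).
    rules: Rules = {
        ("start", "1"): ("1", +1, "start"),
        ("start", "_"): ("1", 0, "halt"),
    }
    print(run_tm(rules, "111"))   # prints 1111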

It will be awfully embarrassing if we build up the philosophical basis
on which our machine descendants justify our extermination when they
find that we're not UTMs...



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney

--- Hank Conn [EMAIL PROTECTED] wrote:

 On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
  The goal of humanity, like that of all other species, was determined
  by evolution.  It is to propagate the species.
 
 
 That's not the goal of humanity. That's the goal of the evolution of
 humanity, which has been defunct for a while.

We have slowed evolution through medical advances, birth control and genetic
engineering, but I don't think we have stopped it completely yet.

 You are confusing this abstract idea of an optimization target with the
 actual motivation system. You can change your motivation system all you
 want, but you wouldn't (intentionally) change the fundamental
 specification of the optimization target, which is maintained by the
 motivation system as a whole.

I guess we are arguing terminology.  I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.

   To some extent you can do this.  When rats can
  electrically stimulate their nucleus accumbens by pressing a lever, they
  do so
  nonstop in preference to food and water until they die.
 
  I suppose the alternative is to not scan brains, but then you still have
  death, disease and suffering.  I'm sorry it is not a happy picture either
  way.
 
 
 Or you have no death, disease, or suffering, but not wireheading.

How do you propose to reduce the human mortality rate from 100%?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread J. Storrs Hall, PhD.
On Friday 01 December 2006 20:06, Philip Goetz wrote:

 Thus, I don't think my ability to follow rules written on paper to
 implement a Turing machine proves that the operations powering my
 consciousness are Turing-complete.

Actually, I think it does prove it, since your simulation of a Turing machine 
would consist of conscious operations. On the other hand, I agree with the 
spirit of your argument, (as I understand it), that our ability to simulate 
Turing machines on paper doesn't *prove* that we are generally universal 
machines at the level that we do most of the things that we do.

Even so, I think that we are, just barely. I hope so, anyway, or the AIs will 
put us in zoos and rightly so. 

--Josh



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Samantha Atkins wrote:


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a 
way as to preserve its existing motivational priorities.




How could the system anticipate whether or not significant RSI would 
lead it to question or modify its current motivational priorities?  Are 
you suggesting that the system can somehow simulate an improved version 
of itself in sufficient detail to know this?  It seems quite unlikely.


Well, I'm certainly not suggesting the latter.

It's a lot easier than you suppose.  The system would be built in two 
parts:  the motivational system, which would not change substantially 
during RSI, and the "thinking part" (for want of a better term), which 
is where you do all the improvement.
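
Purely as an illustration of that split (the class names and the veto
check are my own invention, not a specification of the design):

    from typing import Any, Callable

    class MotivationalSystem:
        """Held fixed across self-improvement; vets proposed changes."""
        def __init__(self, priorities: dict):
            self.priorities = dict(priorities)   # e.g. {"empathy_with_humans": 1.0}

        def endorses(self, change: "Change") -> bool:
            # The system itself refuses any change that would alter its motivations.
            return not change.touches_motivational_system

    class Change:
        def __init__(self, new_thinking_part: Callable[[Any], Any],
                     touches_motivational_system: bool = False):
            self.new_thinking_part = new_thinking_part
            self.touches_motivational_system = touches_motivational_system

    class Agent:
        def __init__(self, motives: MotivationalSystem,
                     thinking_part: Callable[[Any], Any]):
            self.motives = motives                # not modified during RSI
            self.thinking_part = thinking_part    # freely improved during RSI

        def self_improve(self, change: Change) -> bool:
            if self.motives.endorses(change):
                self.thinking_part = change.new_thinking_part
                return True
            return False                          # vetoed by the agent itself

The point of the sketch is only that the veto lives inside the agent,
not in some external controller, which is the sense in which the
constraint "would not have anything to do with us controlling it."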


The idea of questioning or modifying its current motivational 
priorities is extremely problematic, so be careful how quickly you 
deploy it as if it meant something coherent.  What would it mean for the 
system to modify it in such a way as to contradict the current state? 
That gets very close to a contradiction in terms.


It is not quite a contradiction, but certainly this would be impossible: 
 deciding to make a modification that clearly was going to leave it 
wanting something that, if it wanted that thing today, would contradict 
its current priorities.  Do you see why?  The motivational mechanism IS 
what the system wants, it is not what the system is considering wanting.




That means:  the system would *not* choose to do any RSI if the RSI 
could not be done in such a way as to preserve its current 
motivational priorities:  to do so would be to risk subverting its own 
most important desires.  (Note carefully that the system itself would 
put this constraint on its own development, it would not have anything 
to do with us controlling it).




If the improvements were improvements in capability, and such 
improvement led to changes in its priorities, why would those 
improvements be undesirable just because they showed the current 
motivational priorities to be in some way lacking?  Why is protecting 
current beliefs or motivational priorities more important than becoming 
presumably more capable, and more capable of understanding the reality 
the system is immersed in?


The system is not protecting current beliefs, it is believing its 
current beliefs.  "Becoming more capable of understanding the reality 
it is immersed in"?  You have implicitly put a motivational priority in 
your system when you suggest that that is important to it ... does that 
rank higher than its empathy with the human race?


You see where I am going:  there is nothing god-given about the desire 
to understand reality in a better way.  That is just one more 
candidate for a motivational priority.





There is a bit of a problem with the term RSI here:  to answer your 
question fully we might have to get more specific about what that 
would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite. 
The system could well get to a situation where further RSI was not 
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha


Now you have become too abstract for me to answer, unless you are 
repeating the previous point.




Richard Loosemore.

















Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Matt Mahoney wrote:

I guess we are arguing terminology.  I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.


There is no such thing.  This is the kind of psychology that died out at 
least thirty years ago (with the exception of a few diehards in North 
Wales and Cambridge).




Richard Loosemore


[With apologies to Fergus, Nick and Ian, who may someday come across 
this message and start flaming me].




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Matt Mahoney wrote:

--- Hank Conn [EMAIL PROTECTED] wrote:

The further the actual target goal state of that particular AI is away from
the actual target goal state of humanity, the worse.

The goal of ... humanity... is that the AGI implemented that will have the
strongest RSI curve also will be such that its actual target goal state is
exactly congruent to the actual target goal state of humanity.


This was discussed on the Singularity list.  Even if we get the motivational
system and goals right, things can still go badly.  Are the following things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain, copying the information into a
computer, and making many redundant backups, you become immortal.
Furthermore, once your consciousness becomes a computation in silicon,
your universe can be simulated to be anything you want it to be.


See my previous lengthy post on the subject of motivational systems vs 
goal stack systems.


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Richard Loosemore



Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Kashif Shah

A little late on the draw here - I am a new member to the list and was
checking out the archives.  I had an insight into this debate over
understanding.

James Ratcliff wrote:

Understanding is a dum-dum word, it must be specifically defined as a
concept or not used.  Understanding art is a Subjective question.
Everyone has their own 'interpretations' of what that means, either
brush strokes, or style, or color, or period, or content, or inner
meaning.  But you CAN'T measure understanding of an object internally
like that.  There MUST be an external measure of understanding.

My insight was this:  to ask 'do you understand x?' is too simple for the
subjective realm.  One must qualify with a phrase such as (in the context of
art) 'do you understand x in relation to y' or 'do you understand x as
representing y' or 'do you understand x as a possible meaning for y', etc.
By externally specifying the y, one can gain an objective 'picture' of the
internal subjective state of a person or an AI.  Of course this makes things
pretty complicated when one must analyze all possible y's, however, this
could even become a job for an AI, couldn't it?  If one knows the (or a) set
of possible interpretations (y's), one can begin to inquire as to the
understanding of x within an intelligence.



I would appreciate your feedback.

Thanks for your time,

Kashif Shah
