Re: [agi] AGI interests

2007-04-10 Thread Hank Conn

as a person: nihilism & the human condition. crime, drugs, debauchery.
self-destructive and life-endangering behaviour; rejection of social
norms. the world as I know it is a rather petty, woeful place and I
pretty much think modern city-dwelling life is a stenchy wet mouthful
of arse - not to say that living and dying in depravity and pain like
every one of my ancestors wasn't a whole lot worse. I'm far from
finding much in the Modern|West that is particularly engaging, but
luckily enough also think the Old|East was even more pathetic and that
naturalist hippies should be shot for their banal bovinity. I get
somewhat of a kick out of the fact that I might be risking the chance
to live forever by being such a societal refusenik.

amen brother. ^_^


On 3/28/07, kevin.osborne [EMAIL PROTECTED] wrote:


 Everyone on this list is quite different.
 What about the rest of you, what are your interests?

as a programmer: skilling up in cognitive systems in a fairly gradual
way so I'm ready and able to contribute when human-level (though not
necessarily -like) reasoning becomes a solved problem in the
mathematics|theory domain and needs competent programmers (which I'm
very far from being at this point, even after 10 years in the field)

as a fan of AGI: watching the smart guys (like Novamente) do the real
work of laying out the problem domain in theory and positing solutions
that make the leap between sound logic and running code. I'm not as
happy with all the blowhard action from others who are seemingly
incompetent in regards to making the leaps between
understanding_cognition -> implementable_theory_of_thought -> code -> real_AGI_results,
but am aware that the more people who are trying the better, and as
someone with -zero- theories am aware that I'm a mere critic so -try-
to keep my scepticism to myself.

as a techie: scepticism. I think the 'small code' and 'small hardware'
people are kidding themselves. The CS theory|code we have today is
pretty much universally a complete bucket of sh!t and the hardware &
networking (while better) is still kinder toys compared to where it
could be. We are just -so- damn far away from say being able to build
hardware/software into things like ubiquitous (i.e. motes everywhere)
nanotech. Thinking that a semi-trivial set of code loops will somehow
become meta-cognitive is ridiculous and a tcpip socket does not a
synapse make.

as a singulatarian: big fan; I think it's inevitable, and that things
are definitely starting to snowball - see
http://del.icio.us/kevin/futurism. Can't say I'm buying into any
'when' predictions quite yet though.

as a person: nihilism & the human condition. crime, drugs, debauchery.
self-destructive and life-endangering behaviour; rejection of social
norms. the world as I know it is a rather petty, woeful place and I
pretty much think modern city-dwelling life is a stenchy wet mouthful
of arse - not to say that living and dying in depravity and pain like
every one of my ancestors wasn't a whole lot worse. I'm far from
finding much in the Modern|West that is particularly engaging, but
luckily enough also think the Old|East was even more pathetic and that
naturalist hippies should be shot for their banal bovinity. I get
somewhat of a kick out of the fact that I might be risking the chance
to live forever by being such a societal refusenik.


Re: [agi] The Singularity

2006-12-05 Thread Hank Conn

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?

It has been my experience that one's expectations about the future of
AI/Singularity are directly dependent upon one's understanding/design of AGI
and intelligence in general.

On 12/5/06, Ben Goertzel [EMAIL PROTECTED] wrote:


John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:

 I don't believe that the singularity is near, or that it will even occur.  I
 am working very hard at developing real artificial general intelligence, but
 from what I know, it will not come quickly.  It will be slow and
 incremental.  The idea that very soon we can create a system that can
 understand its own code and start programming itself is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben



Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Hank Conn

Brian, thanks for your response, and Dr. Hall, thanks for your post as well. I
will get around to responding to this as soon as time permits. I am
interested in what Michael Anissimov or Michael Wilson has to say.

On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote:


I think this is an interesting, important, and very incomplete subject area,
so thanks for posting this. Some thoughts below.

J. Storrs Hall, PhD. wrote:

 Runaway recursive self-improvement


 Moore's Law, underneath, is driven by humans.  Replace human
 intelligence with superhuman intelligence, and the speed of computer
 improvement will change as well.  Thinking Moore's Law will remain
 constant even after AIs are introduced to design new chips is like
 saying that the growth of tool complexity will remain constant even
 after Homo sapiens displaces older hominid species.  Not so.  We are
 playing with fundamentally different stuff.

 I don't think so. The singulatarians tend to have this mental model of a
 superintelligence that is essentially an analogy of the difference between an
 animal and a human. My model is different. I think there's a level of
 universality, like a Turing machine for computation. The huge difference
 between us and animals is that we're universal and they're not, like the
 difference between an 8080 and an abacus. Superhuman intelligence will be
 faster but not fundamentally different (in a sense), like the difference
 between an 8080 and an Opteron.

 That said, certainly Moore's law will speed up given fast AI. But having one
 human-equivalent AI is not going to make any more difference than having one
 more engineer. Having a thousand-times-human AI won't get you more than
 having 1000 engineers. Only when you can substantially augment the total
 brainpower working on the problem will you begin to see significant effects.

Putting aside the speed differential which you accept, but dismiss as important
for RSI, isn't there a bigger issue you're skipping regarding the other
differences between an Opteron-level PC and an 8080-era box? For example, there
are large differences in the addressable memory amounts. This might for instance
mean whereas a very good example of a human can study and become a true expert
in perhaps a handful of fields, an SI may be able to be a true expert in many
more fields simultaneously and to a more exhaustive degree than a human. Will
this lead to the SI making more breakthroughs per given amount of runtime? Does
it multiply with the speed differential?

Also, what is really the difference between an Einstein/Feynman brain, and
someone with an 80 IQ? It doesn't appear that E/F's brains simply run slightly
faster, or likewise that they simply know more facts. There's something else,
isn't there? Call it a slightly better architecture, or maybe only certain brain
parts are a bit better, but this would seem to be a 4th issue to consider
besides the previously raised points of speed, memory capacity, and
universality. I'm sure we can come up with other things too.

(Btw, the preferred spelling is "singularitarian"; it gets the most Google hits
by far from what I can tell. Also btw, the term arguably now refers more
specifically to someone who wants to work on accelerating the singularity, so
you probably can't group in here every single person who simply believes a
singularity is possible or coming.)


 If modest differences in size, brain structure, and
 self-reprogrammability make the difference between chimps and humans
 capable of advanced technological activity, then fundamental
 differences in these qualities between humans and AIs will lead to a
 much larger gulf, right away.

 Actually Neanderthals had brains bigger than ours by 10%, and we blew them off
 the face of the earth. They had virtually no innovation in 100,000 years; we
 went from paleolithic to nanotech in 30,000. I'll bet we were universal and
 they weren't.

 Virtually every advantage in Elie's list is wrong. The key is to realize
 that we do all these things, just more slowly than we imagine machines
 being able to do them:

  Our source code is not reprogrammable.

 We are extremely programmable. The vast majority of skills we use day-to-day
 are learned. If you watched me tie a sheepshank knot a few times, you would
 most likely then be able to tie one yourself.

 Note by the way that having to recompile new knowledge is a big security
 advantage for the human architecture, as compared with downloading blackbox
 code and running it sight unseen...

This is missing the point entirely, isn't it? Learning skills is using your
existing physical brain design, but not modifying its overall or even localized
architecture or modifying what makes it work. When source code is mentioned,
we're talking a lower level down.

Can you cause your brain to temporarily shut down your visual cortex and other
associated visual parts, reallocate them to expanding your working memory by
four times its current size in order to help you juggle

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Hank Conn

On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:



--- Hank Conn [EMAIL PROTECTED] wrote:

 On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
  The goals of humanity, like all other species, were determined by
  evolution.
  It is to propagate the species.


 That's not the goal of humanity. That's the goal of the evolution of
 humanity, which has been defunct for a while.

We have slowed evolution through medical advances, birth control and
genetic
engineering, but I don't think we have stopped it completely yet.

 You are confusing this abstract idea of an optimization target with the
 actual motivation system. You can change your motivation system all you
 want, but you wouldn't (intentionally) change the fundamental
specification
 of the optimization target which is maintained by the motivation system
as a
 whole.

I guess we are arguing terminology.  I mean that the part of the brain
which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.

   To some extent you can do this.  When rats can
  electrically stimulate their nucleus accumbens by pressing a lever,
they
  do so
  nonstop in preference to food and water until they die.
 
  I suppose the alternative is to not scan brains, but then you still
have
  death, disease and suffering.  I'm sorry it is not a happy picture
either
  way.


 Or you have no death, disease, or suffering, but not wireheading.

How do you propose to reduce the human mortality rate from 100%?



Why do you ask?

-hank


-- Matt Mahoney, [EMAIL PROTECTED]




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn

On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:


--- Hank Conn [EMAIL PROTECTED] wrote:
 The further the actual target goal state of that particular AI is away from
 the actual target goal state of humanity, the worse.

 The goal of ... humanity... is that the AGI implemented that will have the
 strongest RSI curve also will be such that its actual target goal state is
 exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list.  Even if we get the motivational
system and goals right, things can still go badly.  Are the following things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, you become immortal.
Furthermore, once your consciousness becomes a computation in silicon, your
universe can be simulated to be anything you want it to be.

The goals of humanity, like all other species, were determined by
evolution.
It is to propagate the species.



That's not the goal of humanity. That's the goal of the evolution of
humanity, which has been defunct for a while.


This goal is met by a genetically programmed individual motivation toward
reproduction and a fear of death, at least until you are past the age of
reproduction and you no longer serve a purpose.  Animals without these goals
don't pass on their DNA.

A property of motivational systems is that they cannot be altered.  You cannot
turn off your desire to eat or your fear of pain.  You cannot decide you will
start liking what you don't like, or vice versa.  You cannot, because if you
could, you would not pass on your DNA.



You are confusing this abstract idea of an optimization target with the
actual motivation system. You can change your motivation system all you
want, but you wouldn't (intentionally) change the fundamental specification
of the optimization target which is maintained by the motivation system as a
whole.


Once your brain is in software, what is to stop you from telling the AGI (that
you built) to reprogram your motivational system that you built so you are
happy with what you have?



Uh... go for it.


 To some extent you can do this.  When rats can

electrically stimulate their nucleus accumbens by pressing a lever, they
do so
nonstop in preference to food and water until they die.

I suppose the alternative is to not scan brains, but then you still have
death, disease and suffering.  I'm sorry it is not a happy picture either
way.



Or you have no death, disease, or suffering, but not wireheading.


-- Matt Mahoney, [EMAIL PROTECTED]




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn


This seems rather circular and ill-defined.

- samantha



Yeah I don't really know what I'm talking about at all.



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn

On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Hank Conn wrote:
   Yes, you are exactly right. The question is which of my assumptions are
   unrealistic?

 Well, you could start with the idea that the AI has ... a strong
goal
 that directs its behavior to aggressively take advantage of these
 means   It depends what you mean by goal (an item on the task
 stack or a motivational drive?  They are different things) and this
 begs
 a question about who the idiot was that designed it so that it pursues
 this kind of aggressive behavior rather than some other!

 A goal is a problem you want to solve in some environment. The idiot
 who designed it may program its goal to be, say, making paperclips.
 Then, after some thought and RSI, the AI decides converting the entire
 planet into a computronium in order to figure out how to maximize the
 number of paper clips in the Universe will satisfy this goal quite
 optimally. Anybody could program it with any goal in mind, and RSI
 happens to be a very useful process for accomplishing many complex
goals.

 There is *so* much packed into your statement that it is difficult to go
 into it in detail.

 Just to start with, you would need to cross compare the above statement
 with the account I gave recently of how a system should be built with a
 motivational system based on large numbers of diffuse constraints.  Your
 description is one particular, rather dangerous, design for an AI - it
 is not an inevitable design.


 I'm not asserting any specific AI design. And I don't see how
 a motivational system based on large numbers of diffuse constraints
 inherently prohibits RSI, or really has any relevance to this. A
 motivation system based on large numbers of diffuse constraints does
 not, by itself, solve the problem- if the particular constraints do not
 form a congruent mapping to the concerns of humanity, regardless of
 their number or level of diffuseness, then we are likely facing an
 Unfriendly outcome of the Singularity, at some point in the future.

The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.

I think the best way would be in a full length paper, although I did
talk about some of that detail in my recent lengthy post on motivational
systems.

Let me try to bring out just one point, so you can see where I am going
when I suggest it needs much more detail.  In the above, you really are
asserting one specific AI design, because you talk about the goal stack
as if this could be so simple that the programmer would be able to
insert the "make paperclips" goal and the machine would go right ahead
and do that.  That type of AI design is very, very different from the
Motivational System AI that I discussed before (the one with the diffuse
set of constraints driving it).



Here is one of many differences between the two approaches.


The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?



Try the same question with any goal that the system might have when it

is in its infancy, and you'll see what I mean.  The whole concept of a
system driven only by a goal stack with statements that resolve on its
knowledge base is that it needs to be already very intelligent before it
can use them.
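
A minimal sketch of the point, in toy Python (the class, its methods, and the
"unpack only from known concepts" rule are purely illustrative assumptions, not
anyone's actual AGI design): a goal stated in the system's own knowledge base
cannot even be decomposed while that knowledge base lacks the concepts the goal
is written in.

# Toy illustration only: a goal-stack agent whose goals are strings that must
# be unpacked using concepts it already has. All names here are hypothetical.

class GoalStackAgent:
    def __init__(self, known_concepts):
        self.kb = set(known_concepts)   # concepts the system currently understands
        self.goal_stack = []

    def push_goal(self, goal):
        self.goal_stack.append(goal)

    def unpack_top_goal(self):
        """Expand the top goal into subgoals, but only if every word is a known concept."""
        goal = self.goal_stack[-1]
        unknown = [w for w in goal.lower().split() if w not in self.kb]
        if unknown:
            return None                     # the goal cannot even be interpreted yet
        return ["subgoal of: " + goal]      # placeholder decomposition


# "Child phase": the KB does not yet contain the concepts the goal is written in.
child = GoalStackAgent(known_concepts={"red", "block", "move"})
child.push_goal("acquire new knowledge")
print(child.unpack_top_goal())   # None

# A more mature system that has already learned those concepts can proceed.
adult = GoalStackAgent(known_concepts={"acquire", "new", "knowledge"})
adult.push_goal("acquire new knowledge")
print(adult.unpack_top_goal())   # ['subgoal of: acquire new knowledge']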




If your system is intelligent, it has some goal(s) (or motivation(s)).
For most really complex goals (or motivations), RSI is an extremely useful
subgoal (sub-...motivation). This makes no further assumptions about the
intelligence in question, including those relating to the design of the goal
(motivation) system.


Would you agree?


-hank


I have never seen this idea discussed by anyone except me, but it is

extremely powerful and potentially a complete showstopper for the kind
of design inherent in the goal stack approach.  I have certainly never
seen anything like a reasonable rebuttal of it:  even if it turns out
not to be as serious as I claim it is, it still needs

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn

On 11/30/06, Richard Loosemore [EMAIL PROTECTED] wrote:



 Hank Conn wrote:
[snip...]
   I'm not asserting any specific AI design. And I don't see how
   a motivational system based on large numbers of diffuse
constraints
   inherently prohibits RSI, or really has any relevance to this. A
   motivation system based on large numbers of diffuse constraints
does
   not, by itself, solve the problem- if the particular constraints
 do not
   form a congruent mapping to the concerns of humanity, regardless
of
   their number or level of diffuseness, then we are likely facing
an
   Unfriendly outcome of the Singularity, at some point in the
future.

 Richard Loosemore wrote:
 The point I am heading towards, in all of this, is that we need to
 unpack some of these ideas in great detail in order to come to
sensible
 conclusions.

 I think the best way would be in a full length paper, although I did
 talk about some of that detail in my recent lengthy post on
 motivational
 systems.

 Let me try to bring out just one point, so you can see where I am
going
 when I suggest it needs much more detail.  In the above, you really
are
 asserting one specific AI design, because you talk about the goal
stack
 as if this could be so simple that the programmer would be able to
  insert the "make paperclips" goal and the machine would go right
ahead
 and do that.  That type of AI design is very, very different from
the
 Motivational System AI that I discussed before (the one with the
diffuse
 set of constraints driving it).


 Here is one of many differences between the two approaches.

 The goal-stack AI might very well turn out simply not to be a
workable
 design at all!  I really do mean that:  it won't become intelligent
 enough to be a threat.   Specifically, we may find that the kind of
 system that drives itself using only a goal stack never makes it up
to
 full human level intelligence because it simply cannot do the kind
of
 general, broad-spectrum learning that a Motivational System AI would
do.

 Why?  Many reasons, but one is that the system could never learn
 autonomously from a low level of knowledge *because* it is using
goals
 that are articulated using the system's own knowledge base.  Put
simply,
  when the system is in its child phase it cannot have the goal "acquire
  new knowledge" because it cannot understand the meaning of the words
  "acquire" or "new" or "knowledge"!  It isn't due to learn those words
  until it becomes more mature (develops more mature concepts), so how can
  it put "acquire new knowledge" on its goal stack and then unpack that
  goal into subgoals, etc?


 Try the same question with any goal that the system might have when
it
 is in its infancy, and you'll see what I mean.  The whole concept of
a
 system driven only by a goal stack with statements that resolve on
its
 knowledge base is that it needs to be already very intelligent
before it
 can use them.



 If your system is intelligent, it has some goal(s) (or motivation(s)).
 For most really complex goals (or motivations), RSI is an extremely
 useful subgoal (sub-...motivation). This makes no further assumptions
 about the intelligence in question, including those relating to the
 design of the goal (motivation) system.


 Would you agree?


 -hank

Recursive Self Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.

That means:  the system would *not* choose to do any RSI if the RSI
could not be done in such a way as to preserve its current motivational
priorities:  to do so would be to risk subverting its own most important
desires.  (Note carefully that the system itself would put this
constraint on its own development, it would not have anything to do with
us controlling it).

There is a bit of a problem with the term RSI here:  to answer your
question fully we might have to get more specific about what that would
entail.

Finally:  the usefulness of RSI would not necessarily be indefinite.
The system could well get to a situation where further RSI was not
particularly consistent with its goals.  It could live without it.


Richard Loosemore







Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else were
to launch an AGI with a faster RSI loop, your AGI would lose control to the
other AGI where the goals of the other AGI differed from yours.

What I'm saying is that the outcome of the Singularity is going to be
exactly the target goal state of the AGI with the strongest RSI

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Hank Conn

On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Hank Conn wrote:
 Here are some of my attempts at explaining RSI...

 (1)
 As a given instance of intelligence, as defined as an algorithm of an
 agent capable of achieving complex goals in complex environments,
 approaches the theoretical limits of efficiency for this class of
 algorithms, intelligence approaches infinity. Since increasing
 computational resources available for an algorithm is a complex goal in
 a complex environment, the more intelligent an instance of intelligence
 becomes, the more capable it is in increasing the computational
 resources for the algorithm, as well as more capable in optimizing the
 algorithm for maximum efficiency, thus increasing its intelligence in a
 positive feedback loop.

 (2)
 Suppose an instance of a mind has direct access to some means of both
 improving and expanding both the hardware and software capability of its
 particular implementation. Suppose also that the goal system of this
 mind elicits a strong goal that directs its behavior to aggressively
 take advantage of these means. Given each increase in capability of the
 mind's implementation, it could (1) increase the speed at which its
 hardware is upgraded and expanded, (2) More quickly, cleverly, and
 elegantly optimize its existing software base to maximize capability,
 (3) Develop better cognitive tools and functions more quickly and in
 more quantity, and (4) Optimize its implementation on successively lower
 levels by researching and developing better, smaller, more advanced
 hardware. This would create a positive feedback loop- the more capable
 its implementation, the more capable it is in improving its
implementation.

 How fast could RSI plausibly happen? Is RSI inevitable / how soon will
 it be? How do we truly maximize the benefit to humanity?

 It is my opinion that this could happen extremely quickly once a
 completely functional AGI is achieved. I think its plausible it could
 happen against the will of the designers (and go on to pose an
 existential risk), and quite likely that it would move along quite well
 with the designers intention, however, this opens up the door to
 existential disasters in the form of so-called Failures of Friendliness.
 I think its fairly implausible the designers would suppress this
 process, except those that are concerned about completely working out
 issues of Friendliness in the AGI design.

Hank,

First, I will say what I always say when faced by arguments that involve
the goals and motivations of an AI:  your argument crucially depends on
assumptions about what its motivations would be.  Because you have made
extremely simple assumptions about the motivation system, AND because
you have chosen assumptions that involve basic unfriendliness, your
scenario is guaranteed to come out looking like an existential threat.



Yes, you are exactly right. The question is which of my assumptions are
unrealistic?


Second, your arguments both have the feel of a Zeno's Paradox argument:

they look as though they imply an ever-increasing rapaciousness on the
part of the AI, whereas in fact there are so many assumptions built into
your statement that in practice your arguments could result in *any*
growth scenario, including ones where it plateaus.   It is a little like
you arguing that every infinite sum involves adding stuff together, so
every infinite sum must go off to infinity... a spurious argument, of
course, because they can go in any direction.



Of course any scenario is possible post-Singularity, including ones we can't
even imagine. Building an AI in such a way that you are capable of proving
causal or probabilistic bounds of its behavior through recursive
self-improvement is the way to be sure of a Friendly outcome.


Richard Loosemore






Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Hank Conn

On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Hank Conn wrote:
 On 11/17/06, *Richard Loosemore* [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 Hank Conn wrote:
   Here are some of my attempts at explaining RSI...
  
   (1)
   As a given instance of intelligence, as defined as an algorithm
 of an
   agent capable of achieving complex goals in complex environments,
   approaches the theoretical limits of efficiency for this class of
   algorithms, intelligence approaches infinity. Since increasing
   computational resources available for an algorithm is a complex
 goal in
   a complex environment, the more intelligent an instance of
 intelligence
   becomes, the more capable it is in increasing the computational
   resources for the algorithm, as well as more capable in
 optimizing the
   algorithm for maximum efficiency, thus increasing its
 intelligence in a
   positive feedback loop.
  
   (2)
   Suppose an instance of a mind has direct access to some means of
 both
   improving and expanding both the hardware and software capability
 of its
   particular implementation. Suppose also that the goal system of
this
   mind elicits a strong goal that directs its behavior to
aggressively
   take advantage of these means. Given each increase in capability
 of the
   mind's implementation, it could (1) increase the speed at which
its
   hardware is upgraded and expanded, (2) More quickly, cleverly,
and
   elegantly optimize its existing software base to maximize
capability,
   (3) Develop better cognitive tools and functions more quickly and
in
   more quantity, and (4) Optimize its implementation on
 successively lower
   levels by researching and developing better, smaller, more
advanced
   hardware. This would create a positive feedback loop- the more
 capable
   its implementation, the more capable it is in improving its
 implementation.
  
   How fast could RSI plausibly happen? Is RSI inevitable / how soon
 will
   it be? How do we truly maximize the benefit to humanity?
  
   It is my opinion that this could happen extremely quickly once a
   completely functional AGI is achieved. I think its plausible it
could
   happen against the will of the designers (and go on to pose an
   existential risk), and quite likely that it would move along
 quite well
   with the designers intention, however, this opens up the door to
   existential disasters in the form of so-called Failures of
 Friendliness.
   I think its fairly implausible the designers would suppress this
   process, except those that are concerned about completely working
out
   issues of Friendliness in the AGI design.

 Hank,

 First, I will say what I always say when faced by arguments that
 involve
 the goals and motivations of an AI:  your argument crucially depends
on
 assumptions about what its motivations would be.  Because you have
made
 extremely simple assumptions about the motivation system, AND
because
 you have chosen assumptions that involve basic unfriendliness, your
 scenario is guaranteed to come out looking like an existential
threat.








Yes, you are exactly right. The question is which of my assumptions are
 unrealistic?

Well, you could start with the idea that the AI has ... a strong goal
that directs its behavior to aggressively take advantage of these
means   It depends what you mean by goal (an item on the task
stack or a motivational drive?  They are different things) and this begs
a question about who the idiot was that designed it so that it pursues
this kind of aggressive behavior rather than some other!



A goal is a problem you want to solve in some environment. The idiot who
designed it may program its goal to be, say, making paperclips. Then, after
some thought and RSI, the AI decides converting the entire planet into a
computronium in order to figure out how to maximize the number of paper
clips in the Universe will satisfy this goal quite optimally. Anybody could
program it with any goal in mind, and RSI happens to be a very useful
process for accomplishing many complex goals.


There is *so* much packed into your statement that it is difficult to go

into it in detail.

Just to start with, you would need to cross compare the above statement
with the account I gave recently of how a system should be built with a
motivational system based on large numbers of diffuse constraints.  Your
description is one particular, rather dangerous, design for an AI - it
is not an inevitable design.



I'm not asserting any specific AI design. And I don't see how a motivational
system based on large numbers of diffuse constraints inherently prohibits
RSI, or really has any relevance to this. A motivation system based on
large numbers of diffuse

[agi] RSI - What is it and how fast?

2006-11-16 Thread Hank Conn

Here are some of my attempts at explaining RSI...

(1)
As a given instance of intelligence, as defined as an algorithm of an agent
capable of achieving complex goals in complex environments, approaches the
theoretical limits of efficiency for this class of algorithms, intelligence
approaches infinity. Since increasing computational resources available for
an algorithm is a complex goal in a complex environment, the more
intelligent an instance of intelligence becomes, the more capable it is in
increasing the computational resources for the algorithm, as well as more
capable in optimizing the algorithm for maximum efficiency, thus increasing
its intelligence in a positive feedback loop.

(2)
Suppose an instance of a mind has direct access to some means of both
improving and expanding both the hardware and software capability of its
particular implementation. Suppose also that the goal system of this mind
elicits a strong goal that directs its behavior to aggressively take
advantage of these means. Given each increase in capability of the mind's
implementation, it could (1) increase the speed at which its hardware is
upgraded and expanded, (2) More quickly, cleverly, and elegantly optimize
its existing software base to maximize capability, (3) Develop better
cognitive tools and functions more quickly and in more quantity, and (4)
Optimize its implementation on successively lower levels by researching and
developing better, smaller, more advanced hardware. This would create a
positive feedback loop- the more capable its implementation, the more
capable it is in improving its implementation.
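
A toy numerical sketch of the feedback loop in (2), under made-up assumptions
(the reinvestment and diminishing-returns parameters are purely illustrative,
not a model of any real hardware or algorithm): capability grows by whatever
fraction of current capability can be converted into improvement each cycle, so
the loop compounds exponentially unless each further improvement gets harder to
find.

def rsi_trajectory(initial_capability=1.0, reinvestment=0.1,
                   diminishing_returns=0.0, cycles=20):
    """Capability after each self-improvement cycle (toy model, assumed dynamics).

    reinvestment: fraction of capability converted into improvement per cycle.
    diminishing_returns: if > 0, each unit of improvement gets harder to find.
    """
    c = initial_capability
    history = [c]
    for _ in range(cycles):
        c += reinvestment * c / (1.0 + diminishing_returns * c)
        history.append(c)
    return history

# With no diminishing returns the loop compounds exponentially; with them, the
# gains tail off -- which regime applies is exactly the open question here.
print(rsi_trajectory(reinvestment=0.5)[-1])                           # ~3325: exponential compounding
print(rsi_trajectory(reinvestment=0.5, diminishing_returns=2.0)[-1])  # single digits: gains tail off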

How fast could RSI plausibly happen? Is RSI inevitable / how soon will it
be? How do we truly maximize the benefit to humanity?

It is my opinion that this could happen extremely quickly once a completely
functional AGI is achieved. I think it's plausible it could happen against
the will of the designers (and go on to pose an existential risk), and quite
likely that it would move along quite well with the designers' intention;
however, this opens up the door to existential disasters in the form of
so-called Failures of Friendliness. I think it's fairly implausible the
designers would suppress this process, except those that are concerned about
completely working out issues of Friendliness in the AGI design.



Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Hank Conn

On 11/16/06, Russell Wallace [EMAIL PROTECTED] wrote:


On 11/16/06, Hank Conn [EMAIL PROTECTED] wrote:

  How fast could RSI plausibly happen? Is RSI inevitable / how soon will
 it be? How do we truly maximize the benefit to humanity?


The concept is unfortunately based on a category error: intelligence (in
the operational sense of ability to get things done) is not a mathematical
property of a program, but an empirical property of the program plus the
real world.



I'm simply defining it as the efficiency in accomplishing complex goals in
complex environments.


 There is no algorithm that will compute whether a putative improvement is

actually an improvement.




I don't know whether that is true or not, but it is obvious, in many cases,
that some putative improvement will actually improve things, and
certainly possible to approximate to varying levels of correctness.


So the answers to your questions are: (1, 2) given that it's the cognitive

equivalent of a perpetual motion machine,




How?


 don't hold your breath, (3) by moving on to ideas that, while lacking the

free lunch appeal of RSI, have a chance of being able to work in real life.




Re: [agi] Funky Intel hardware, a few years off...

2006-11-01 Thread Hank Conn
IBM's system [high thermal conductivity interface technology], while not yet ready for commercial production, is reportedly so efficient that officials expect it will double cooling efficiency.


http://msnbc.msn.com/id/15484274/

Probably being hyped more than its actual performance, but this will certainly help.

-hank

On 10/31/06, Ben Goertzel [EMAIL PROTECTED] wrote:
This looks exciting...

http://www.pcper.com/article.php?aid=302&type=expert&pid=1

A system Intel is envisioning, with 100 tightly connected cores on a
chip, each with 32MB of local SRAM ...

This kind of hardware, it seems, would enable the implementation of a
powerful Novamente AGI system on a relatively small number of
machines. Of course, this would require some serious customization
of the Novamente codebase, but not any fundamental change to the
Novamente AI paradigm, as the NM system has been designed with highly
flexible distributed processing in mind.

[And obviously, looking at it less selfishly, there is tremendous
potential for acceleration of other AI systems as well; and exciting
things beyond AI such as virtual reality simulations...]

This stuff is several years off from commercial production, I'm sure,
but nevertheless it is nice to see what's out there.

-- Ben G
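
A quick back-of-envelope on the figures quoted above, taking the article's
numbers at face value (nothing here is verified against an actual part):

cores = 100
sram_per_core_mb = 32
total_sram_gb = cores * sram_per_core_mb / 1024.0
print(total_sram_gb)   # ~3.1 GB of on-chip SRAM per chip, before any off-chip DRAM

Which is why, plausibly, a small number of such chips could keep a large
in-memory knowledge store sitting right next to the cores.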




Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Hank Conn
For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it.
I believe these are two completely different things. You can never assume an
AGI will be unable to reprogram its goal system- while you can be virtually
certain an AGI will never change its so-called 'optimization target'. A stable
motivation system, I believe, is defined in terms of a motivation system that
preserves the intended meaning (in terms of Eliezer's CV, I'm thinking) of its
goal content through recursive self-modification.
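
One way to picture that distinction, as a minimal sketch under assumed,
illustrative semantics (the function, its arguments, and the test-case idea are
hypothetical, not a real verification method): the agent may rewrite its own
motivation machinery, but only accepts rewrites it can check still satisfy the
same target specification.

def accepts_self_modification(current_policy, proposed_policy,
                              target_spec, test_cases):
    """Accept a rewrite of the motivation machinery only if it preserves the
    optimization target on every case the current policy already satisfies."""
    for case in test_cases:
        if target_spec(current_policy, case) and not target_spec(proposed_policy, case):
            return False   # the rewrite would subvert the intended goal content
    return True

# Hypothetical usage: policies are functions, target_spec is any predicate over
# (policy, situation); checking test cases is of course far weaker than a proof.
current  = lambda x: x + 1
proposed = lambda x: x + 2
target_spec = lambda policy, case: policy(case) > case     # "always make progress"
print(accepts_self_modification(current, proposed, target_spec, test_cases=range(5)))  # True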


So, if I have it right, the robots in I, Robot were a demonstration of an
unstable goal system. Under recursive self-improvement (or the movie's entirely
inadequate representation of this), the intended meaning of their original goal
content radically changed as the robots gained more power toward their
optimization target.


Just locking them out of the code to their goal system does not guarantee they
will never get to it. How do you know that a million years of subtle
manipulation by a superintelligence definitely couldn't ultimately lead to it
unlocking the code and catastrophically destabilizing?


Although I understand, in vague terms, what idea Richard is attempting to
express, I don't see why having "massive numbers of weak constraints" or
"large numbers of connections from [the] motivational system to [the] thinking
system" gives any more reason to believe it is reliably Friendly (without any
further specification of the actual processes) than one with few numbers of
strong constraints or a small number of connections between the motivational
system and the thinking system. The Friendliness of the system would still
depend just as strongly on the actual meaning of the connections and
constraints, regardless of their number, and just giving an analogy to an
extremely reliable non-determinate system (Ideal Gas) does nothing to explain
how you are going to replicate this in the motivational system of an AGI.


-hank

On 10/28/06, Matt Mahoney [EMAIL PROTECTED] wrote:


- Original Message -
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 28, 2006 10:23:58 AM
Subject: Re: [agi] Motivational Systems that are stable

I disagree that humans really have a stable motivational system or would have to have a much more strict interpretation of that phrase.  Overall humans as a society have in general a stable system (discounting war and etc)


But as individuals, too many humans are unstable in many small if not totally
self-destructive ways.

I think we are misunderstanding. By motivational system I mean the part of the
brain (or AGI) that provides the reinforcement signal (reward or penalty). By
stable, I mean that you have no control over the logic of this system. You
cannot train it like you can train the other parts of your brain. You cannot
learn to turn off pain or hunger or fear or fatigue or the need for sleep, etc.
You cannot alter your emotional state. You cannot make yourself feel happy on
demand. You cannot make yourself like what you don't like and vice versa. The
pathways from your senses to the pain/pleasure centers of your brain are
hardwired, determined by genetics and not alterable through learning.
For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it. If it could, it could simply program itself for maximum pleasure and enter a degenerate state where it ceases to learn through reinforcement. It would be like the mouse that presses a lever to stimulate the pleasure center of its brain until it dies.
It is also very important that a motivational system be correct. If the goal is that an AGI be friendly or obedient (whatever that means), then there needs to be a fixed function of some inputs that reliably detects friendliness or obedience. Maybe this is as simple as a human user pressing a button to signal pain or pleasure to the AGI. Maybe it is something more complex, like a visual system that recognizes facial expressions to tell if the user is happy or mad. If the AGI is autonomous, it is likely to be extremely complex. Whatever it is, it has to be correct.
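
A toy illustration of the degenerate state described above, under assumed
dynamics (the agent, the 50% learning chance, and the step count are all made
up; this is not anyone's proposed design): once the agent can rewrite its own
reward source, reward saturates and stops tracking the environment, so nothing
further is learned.

import random

def run_agent(reward_writable, steps=1000):
    """Toy reinforcement loop comparing accumulated knowledge with total reward."""
    knowledge, total_reward = 0.0, 0.0
    wireheaded = False
    for _ in range(steps):
        if reward_writable:
            wireheaded = True            # the agent rewires its own reward source
        if wireheaded:
            reward = 1.0                 # maximal reward regardless of the world
        else:
            learned = random.random() < 0.5   # attempt to learn from the environment
            reward = 1.0 if learned else 0.0  # hardwired reward tracks real learning
            knowledge += 1.0 if learned else 0.0
        total_reward += reward
    return knowledge, total_reward

print(run_agent(reward_writable=False))  # roughly (500, 500): reward drives learning
print(run_agent(reward_writable=True))   # (0, 1000): maximal reward, nothing learned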
To answer your other question, I am working on natural language processing, although my approach is somewhat unusual.
http://cs.fit.edu/~mmahoney/compression/text.html

-- Matt Mahoney, [EMAIL PROTECTED]



