Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be
a matter of definition. But so far the several people who have gotten back to
me, including yourself, seem to take the position that that is not the type of
recursive self-improvement they consider to be RSI. Some
I wrote:
If we do not give arbitrary access to the mind model itself or its
implementation, it seems safer than if we do -- this limits the
extent to which RSI is possible: the efficiency of the model implementation
and the capabilities of the model do not change.
An obvious objection to this
On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
RSI is not necessary for human-level AGI.
I think it's too early to be able to make a categorical statement of
this kind. Does not a newborn baby recursively improve its thought
processes until it reaches human level?
Good distinction!
Edward W. Porter
-Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:22 PM
To: agi@v2.listbox.com
Subject: RE: [agi] RSI
Edward W. Porter writes:
As I say, what is, and is not, RSI would appear to be a matter
On Wed, Oct 03, 2007 at 02:09:05PM -0400, Richard Loosemore wrote:
RSI is only what happens after you get an AGI up to the human level: it
could then be used [sic] to build a more intelligent version of itself,
and so on up to some unknown plateau. That plateau is often referred to
as
On Wednesday 03 October 2007 03:47:31 pm, Bob Mottram wrote:
On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
RSI is not necessary for human-level AGI.
I think it's too early to be able to make a categorical statement of
this kind. Does not a newborn baby recursively improve its
On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
RSI is not necessary for human-level AGI.
How about: RSI will not be possible until human-level AGI.
Specifically, the AGI will need the same skills as its builders with regard to
language understanding, system engineering, and software
On 12/21/06, Philip Goetz [EMAIL PROTECTED] wrote:
That in itself is quite bad. But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O.
On 12/14/06, Charles D Hixson [EMAIL PROTECTED] wrote:
To speak of evolution as being forward or backward is to impose upon
it our own preconceptions of the direction in which it *should* be
changing. This seems...misguided.
IMHO Evolution tends to increase extropy and self-organisation. Thus
On 12/13/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
Nope. I think, for example, that the process of evolution is universal -- it
shows the key feature of exponential learning growth, but with a very slow
clock. So there're other models besides a mammalian brain.
My mental model is to
On 12/5/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eric Baum [EMAIL PROTECTED] wrote:
Matt We have slowed evolution through medical advances, birth control
Matt and genetic engineering, but I don't think we have stopped it
Matt completely yet.
I don't know what reason there is to think
Philip Goetz wrote:
...
The disagreement here is a side-effect of postmodern thought.
Matt is using evolution as the opposite of devolution, whereas
Eric seems to be using it as meaning change, of any kind, via natural
selection.
We have difficulty because people with political agendas -
On 12/8/06, J. Storrs Hall [EMAIL PROTECTED] wrote:
If I had to guess, I would say the boundary is at about IQ 140, so the top 1%
of humanity is universal -- but that's pure speculation; it may well be that
no human is universal, because of inductive bias, and it takes a community to
search the
Nope. I think, for example, that the process of evolution is universal -- it
shows the key feature of exponential learning growth, but with a very slow
clock. So there're other models besides a mammalian brain.
My mental model is to ask of a given person, suppose you had a community of
10,000
Ah, perhaps you agree with Richard Westfall:
The more I have studied him, the more Newton has receded from me. He has
become for me wholly other, one of the tiny handful of geniuses who have
shaped the categories of the human intellect, a man not reducible to the
criteria by which we
On Thursday 07 December 2006 05:29, Brian Atkins wrote:
The point being that although this task takes only part of the human's max
abilities,
by their nature they can't split it off, automate it, or otherwise escape
letting some brain cycles go to waste. The human mind is too monolithic in
Brian Atkins wrote:
J. Storrs Hall wrote:
Actually the ability to copy skills is the key item, imho, that
separates humans from the previous smart animals. It made us a
memetic substrate. In terms of the animal kingdom, we do it very,
very well. I'm sure that AIs will be able to as well,
sam kayley wrote:
'integrable on the other end' is a rather large issue to shove under the
carpet in five words ;)
Indeed :-)
For two AIs recently forked from a common parent, probably, but for AIs
with different 'life experiences' and resulting different conceptual
structures, why
I'm on the road, so I'll have to give short shrift to this, but I'll try to
hit a few high points:
On Monday 04 December 2006 07:55, Brian Atkins wrote:
Putting aside the speed differential, which you accept but dismiss as
unimportant
for RSI, isn't there a bigger issue you're skipping
Small correction:
Brian Atkins wrote:
So there is some group of humans you would say don't pass your learning
universal test. Now, of the group that does pass, how big is that group
roughly? The majority of humans? (IQ 100 and above) Whatever the size of
that group, do you claim that any of
Huh that doesn't look right when I received it back. Here's a rewritten
sentence:
Whatever the size of that group, do you claim that _all_ of these learning
universalists would be capable of coming up with Einstein-class (or take your
pick) ideas if they had been in his shoes during his
--- Eric Baum [EMAIL PROTECTED] wrote:
Matt --- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal
of humanity, like that of all other species, was determined by
evolution. It is to propagate the species.
That's not the goal of humanity.
On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote:
Can you cause your brain to temporarily shut down your visual cortex and
other
associated visual parts, reallocate them to expanding your working memory
by
four times its current size in order to help you juggle consciously the
bits you
need to
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
--- Mark Waser [EMAIL PROTECTED] wrote:
You cannot turn off
Brian, thanks for your response, and Dr. Hall, thanks for your post as well. I
will get around to responding to this as soon as time permits. I am
interested in what Michael Anissimov or Michael Wilson has to say.
On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote:
I think this is an interesting,
There is a distinction that must be made here about hunger as a goal
stack motivator.
We CANNOT change the hunger sensation (short of physical manipulations or
mind-control stuff), as it is a given sensation that comes directly from the
physical body.
What we can change is the
Ok,
A lot has been thrown around here about Top-Level goals, but no real
definition has been given, and I am confused, as it seems to be covering a lot of
ground for some people.
What 'level' are they, and what are these top-level goals for humans/AGIs?
It seems that Staying Alive is a big one, but that
Regarding the definition of goals and supergoals, I have made attempts at:
http://www.agiri.org/wiki/index.php/Goal
http://www.agiri.org/wiki/index.php/Supergoal
The scope of human supergoals has been moderately well articulated by
Maslow IMO:
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Why must you argue with everything I say? Is this not a sensible
statement?
I don't argue with everything you say. I only argue with things that I
believe are wrong. And no, the statements You cannot turn off hunger or
pain. You cannot
The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so. Philosophically, it's more certain than
"I think, therefore I am."
If you maintain your assertion, I'll put you in my killfile, because
we cannot
James Ratcliff wrote:
There is a distinction that must be made here about hunger
as a goal stack motivator.
We CANNOT change the hunger sensation (short of physical
manipulations or mind-control stuff), as it is a given sensation that
comes directly from the physical body.
What
On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:
The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so. Philosophically, it's more certain than
"I think, therefore I am."
If you maintain your
On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
On Friday 01 December 2006 20:06, Philip Goetz wrote:
Thus, I don't think my ability to follow rules written on paper to
implement a Turing machine proves that the operations powering my
consciousness are Turing-complete.
Actually, I
Consider as a possible working definition:
A goal is the target state of a homeostatic system. (Don't take
homeostatic too literally, though.)
Thus, if one sets a thermostat to 70 degrees Fahrenheit, then its goal
is to change the room temperature to be not less than 67 degrees
Fahrenheit.
?
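A minimal sketch of that working definition in Python (hypothetical names; the 70-degree setpoint and 67-degree floor above form the hysteresis band):

# Sketch: a goal as the target state of a homeostatic system.
class Thermostat:
    def __init__(self, setpoint_f=70.0, band_f=3.0):
        self.setpoint = setpoint_f   # the goal state
        self.band = band_f           # act only below setpoint - band (67 F)
        self.heating = False

    def step(self, room_temp_f):
        # Drive the room back toward the goal region when it drifts out.
        if room_temp_f < self.setpoint - self.band:
            self.heating = True
        elif room_temp_f >= self.setpoint:
            self.heating = False
        return self.heating

t = Thermostat()
print(t.step(66.0))  # True: below 67 F, heater turns on
print(t.step(70.0))  # False: goal state reached, heater off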
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 1:38 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Why must you argue
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 2:01 PM
Subject: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is
it and how fast?]
On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:
The statement, You cannot turn
Matt --- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal
of humanity, like that of all other species, was determined by
evolution. It is to propagate the species.
That's not the goal of humanity. That's the goal of the evolution
of humanity,
On 12/4/06, Philip Goetz [EMAIL PROTECTED] wrote:
If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.
Richard Loosemore told me that I'm overreacting. I can tell that I'm
overly emotional over this, so it might be true. Sorry for flaming.
I am
Ok,
That is a start, but you don't have a distinction there between externally
required goals and internally created goals.
And what smallest set of external goals do you expect to give?
Would you or would you not force as Top Level the Physiological goals (per the
wiki page you cited) from signals,
For a baby AGI, I would force the physiological goals, yeah.
In practice, baby Novamente's only explicit goal is getting rewards
from its teacher. Its other goals, such as learning new
information, are left implicit in the action of the system's internal
cognitive processes. It's
On 12/2/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a part of the brain which generates the reward/punishment signal
for operant conditioning.
Well, there is a part of the brain which generates a
Mark Waser wrote:
...
For me, yes, all of those things are good since they are on my list of
goals *unless* the method of accomplishing them steps on a higher goal
OR a collection of goals with greater total weight OR violates one of
my limitations (restrictions).
...
If you put every good
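A hedged Python sketch of that rule (names are hypothetical; 'limitations' are reduced to a boolean for brevity):

from dataclasses import dataclass

# Sketch: is a method of accomplishing a goal good, given the goals it
# steps on, their weights, and any violated limitation (restriction)?
@dataclass
class Goal:
    name: str
    weight: float

def is_good(pursued, stepped_on, violates_limitation):
    if violates_limitation:
        return False            # limitations are absolute
    if any(g.weight > pursued.weight for g in stepped_on):
        return False            # steps on a higher goal
    return sum(g.weight for g in stepped_on) <= pursued.weight

# e.g. lying to help a friend (weight 5) steps on honesty (weight 8):
print(is_good(Goal("help a friend", 5.0), [Goal("honesty", 8.0)], False))  # False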
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a part of the brain which generates the reward/punishment signal
--- Mark Waser [EMAIL PROTECTED] wrote:
You cannot turn off hunger or pain. You cannot
control your emotions.
Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
at the mercy of your emotions?
Why must you argue with everything I say? Is this not a sensible
IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at
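Read architecturally, the point might be sketched like this (purely illustrative Python; the maturity measure is invented for the example):

# Sketch: supergoal revision is gated on maturity, so an immature mind
# cannot rewrite its own top-level goals.
class Mind:
    def __init__(self, maturity_threshold=0.8):
        self.supergoals = ["survive", "learn"]
        self.maturity = 0.0      # assumed to grow with reflective experience
        self.maturity_threshold = maturity_threshold

    def revise_supergoal(self, index, new_goal):
        if self.maturity < self.maturity_threshold:
            raise PermissionError("supergoal revision locked until mature")
        self.supergoals[index] = new_goal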
I think this is an interesting, important, and very incomplete subject area, so
thanks for posting this. Some thoughts below.
J. Storrs Hall, PhD. wrote:
Runaway recursive self-improvement
Moore's Law, underneath, is driven by humans. Replace human
intelligence with superhuman
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The goal of humanity, like that of all other species, was determined by
evolution.
It is to propagate the species.
That's not the goal of humanity.
To: agi@v2.listbox.com
Sent: Friday, December 01, 2006 7:52 PM
Subject: Re: [agi] RSI - What is it and how fast?
I've just finished a book on this subject (coming out in May from
Prometheus). I also had an extended conversation/argument about it with
some smart people on another mailing list
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Philip Goetz snidely responded:
Some people would call it repeating the same mistakes I already dealt
On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote:
Philip Goetz snidely responded:
Some people would call it repeating the same mistakes I already dealt
with.
Some people would call it continuing to disagree. :)
Richard's point was that the poster was simply repeating previous points
From: J. Storrs Hall, PhD. [EMAIL PROTECTED]
Subject: Re: [agi] RSI - What is it and how fast?
I've just finished a book on this subject (coming out in May from
Prometheus). ...
Thanks!
The book, under the title Beyond AI: Creating the Conscience of the Machine,
is an outgrowth
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I suppose the alternative is to not scan brains, but then you still
have
death, disease and
[WAS Re: [agi] RSI - What is it
and how fast?]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
I guess we are arguing terminology. I mean that the part of the brain
which
generates the reward/punishment signal for operant conditioning is not
trainable. It is programmed
Philip Goetz wrote:
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Some people would call it repeating the same mistakes I already dealt
with.
Some
Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else
were to launch an AGI with a faster RSI loop, your AGI would lose
control to the other AGI where the goals of the other AGI differed
James Ratcliff wrote:
You could start a smaller AI with a simple hardcoded desire or reward
mechanism to learn new things, or to increase the size of its knowledge.
That would be a simple way to programmatically insert it. That along
with a seed AI, must be put in there in the beginning.
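A minimal sketch of such a hardcoded reward mechanism (hypothetical Python; knowledge size is just a count of stored facts):

# Sketch: a built-in reward that pays the agent for growing its knowledge.
class CuriousAgent:
    def __init__(self):
        self.knowledge = set()
        self.reward = 0.0

    def observe(self, fact):
        before = len(self.knowledge)
        self.knowledge.add(fact)
        self.reward += len(self.knowledge) - before  # reward net new facts

agent = CuriousAgent()
for fact in ["water is wet", "fire is hot", "water is wet"]:
    agent.observe(fact)
print(agent.reward)  # 2.0 -- the repeated fact earns nothing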
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such
a way as to preserve its existing motivational priorities.
How could the system
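One toy reading of that qualification in Python (the preservation check is a stub; actually verifying preservation is the hard part, waved away here):

from dataclasses import dataclass, replace

# Sketch: adopt a self-modification only if it preserves the system's
# existing motivational priorities.
@dataclass(frozen=True)
class SystemVersion:
    motivations: tuple   # motivational priorities
    speed: float         # stand-in for capability

def self_improve(current, candidate):
    if candidate.motivations == current.motivations:
        return candidate                 # accepted: motivations preserved
    return current                       # rejected: motivations would change

v1 = SystemVersion(("benign", "learn"), speed=1.0)
v2 = replace(v1, speed=2.0)              # faster, same motivations
print(self_improve(v1, v2).speed)        # 2.0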
On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone
else were to launch an AGI with a faster RSI loop, your AGI would
lose control to the other AGI where the
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
The further the actual target goal state of that particular AI is away
from
the actual target goal state of humanity, the worse.
The goal of ... humanity... is that the AGI implemented that will have
This seems rather circular and ill-defined.
- samantha
Yeah I don't really know what I'm talking about at all.
On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
I don't think so. The singularitarians tend to have this mental model of a
superintelligence that is essentially an analogy of the difference between an
animal and a human. My model is different. I think there's a level of
universality,
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The goal of humanity, like that of all other species, was determined by
evolution.
It is to propagate the species.
That's not the goal of humanity. That's the goal of the evolution of
humanity, which
On Friday 01 December 2006 20:06, Philip Goetz wrote:
Thus, I don't think my ability to follow rules written on paper to
implement a Turing machine proves that the operations powering my
consciousness are Turing-complete.
Actually, I think it does prove it, since your simulation of a Turing
Samantha Atkins wrote:
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.
Matt Mahoney wrote:
I guess we are arguing terminology. I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable. It is programmed only through evolution.
There is no such thing. This is the kind of psychology that died out at
Matt Mahoney wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
The further the actual target goal state of that particular AI is away from
the actual target goal state of humanity, the worse.
The goal of ... humanity... is that the AGI implemented that will have the
strongest RSI curve also will
You could start a smaller AI with a simple hardcoded desire or reward
mechanism to learn new things, or to increase the size of its knowledge.
That would be a simple way to programmatically insert it. That along with a
seed AI, must be put in there in the beginning.
Remember we are not just
Also, could both or any of you describe a little bit more the idea of your
goal-stacks and how they should/would function?
James
David Hart [EMAIL PROTECTED] wrote: On 11/30/06, Ben Goertzel
[EMAIL PROTECTED] wrote: Richard,
This is certainly true, and is why in Novamente we use a goal stack
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Hank Conn wrote:
Yes, you are exactly right. The question is which of my
assumptions are
unrealistic?
Well, you could start with the idea that the AI has ... a strong
goal
that directs its behavior to
Hank Conn wrote:
[snip...]
I'm not asserting any specific AI design. And I don't see how
a motivational system based on large numbers of diffuse constraints
inherently prohibits RSI, or really has any relevance to this. A
motivation system based on large numbers of
On 11/30/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Hank Conn wrote:
[snip...]
I'm not asserting any specific AI design. And I don't see how
a motivational system based on large numbers of diffuse
constraints
inherently prohibits RSI, or really has any relevance to
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be a workable
design at all! I really do mean that: it won't become intelligent
enough to be a threat. Specifically, we may find that the kind of
system that drives itself using
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
ben
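For those asking how a goal stack functions, a bare-bones sketch (hypothetical Python; not Novamente's actual control code):

# Sketch: a goal stack drives behavior by always working on the top goal,
# pushing subgoals and popping them when satisfied.
class GoalStack:
    def __init__(self, top_level_goals):
        self.stack = list(top_level_goals)   # bottom = most permanent

    def push_subgoal(self, goal):
        self.stack.append(goal)

    def current(self):
        return self.stack[-1] if self.stack else None

    def satisfy_current(self):
        return self.stack.pop() if self.stack else None

gs = GoalStack(["stay alive", "acquire knowledge"])
gs.push_subgoal("read a book")   # subgoal in service of learning
print(gs.current())              # 'read a book'
gs.satisfy_current()
print(gs.current())              # 'acquire knowledge'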
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
Ben,
Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit]
Hank Conn wrote:
Yes, you are exactly right. The question is which of my
assumptions are
unrealistic?
Well, you could start with the idea that the AI has ... a strong goal
that directs its behavior to aggressively take advantage of these
means. It depends what
Hank Conn wrote:
Here are some of my attempts at explaining RSI...
(1)
As a given instance of intelligence, defined as an algorithm of an
agent capable of achieving complex goals in complex environments,
approaches the theoretical limits of efficiency for this class of
algorithms,
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Hank Conn wrote:
Here are some of my attempts at explaining RSI...
(1)
As a given instance of intelligence, defined as an algorithm of an
agent capable of achieving complex goals in complex environments,
approaches the theoretical
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Hank Conn wrote:
On 11/17/06, *Richard Loosemore* [EMAIL PROTECTED] wrote:
Hank Conn wrote:
Here are some of my attempts at explaining RSI...
(1)
As a given instance of
I think this is a topic for the singularity list, but I agree it could happen
very quickly. Right now there is more than enough computing power on the
Internet to support superhuman AGI. One possibility is that it could take the
form of a worm.
On 11/16/06, Hank Conn [EMAIL PROTECTED] wrote:
How fast could RSI plausibly happen? Is RSI inevitable / how soon will it
be? How do we truly maximize the benefit to humanity?
The concept is unfortunately based on a category error: intelligence (in the
operational sense of ability to get
On 11/16/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 11/16/06, Hank Conn [EMAIL PROTECTED] wrote:
How fast could RSI plausibly happen? Is RSI inevitable / how soon will
it be? How do we truly maximize the benefit to humanity?
The concept is unfortunately based on a category error: