On 12/21/06, Philip Goetz [EMAIL PROTECTED] wrote:
That in itself is quite bad. But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O.
On 12/14/06, Charles D Hixson [EMAIL PROTECTED] wrote:
To speak of evolution as being forward or backward is to impose upon
it our own preconceptions of the direction in which it *should* be
changing. This seems...misguided.
IMHO Evolution tends to increase extropy and self-organisation. Thus
On 12/5/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eric Baum [EMAIL PROTECTED] wrote:
Matt We have slowed evolution through medical advances, birth control
Matt and genetic engineering, but I don't think we have stopped it
Matt completely yet.
I don't know what reason there is to think
Philip Goetz wrote:
...
The disagreement here is a side-effect of postmodern thought.
Matt is using "evolution" as the opposite of "devolution", whereas
Eric seems to be using it to mean change, of any kind, via natural
selection.
We have difficulty because people with political agendas -
--- Eric Baum [EMAIL PROTECTED] wrote:
Matt --- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal
of humanity, like that of all other species, was determined by
evolution. It is to propagate the species.
That's not the goal of humanity.
Why must you argue with everything I say? Is this not a sensible
statement?
I don't argue with everything you say. I only argue with things that I
believe are wrong. And no, the statements "You cannot turn off hunger or
pain. You cannot control your emotions" are *NOT* sensible at all.
A distinction needs to be made here about hunger as a goal
stack motivator.
We CANNOT change the hunger sensation (short of physical manipulations or
mind-control stuff), as it is a given sensation that comes directly from the
physical body.
What we can change is the
Ok,
A lot has been thrown around here about top-level goals, but no real
definition has been given, and I am confused, as the term seems to cover a lot
of ground for some people.
What 'level', and what are these top-level goals for humans/AGIs?
It seems that 'staying alive' is a big one, but that
Regarding the definition of goals and supergoals, I have made attempts at:
http://www.agiri.org/wiki/index.php/Goal
http://www.agiri.org/wiki/index.php/Supergoal
The scope of human supergoals has been moderately well articulated by
Maslow IMO:
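For concreteness, here is a minimal sketch of one way such a goal/supergoal hierarchy could be represented. This is my own illustration, not drawn from the wiki pages above; the Maslow-flavoured supergoal names and weights are placeholders only.

    from dataclasses import dataclass

    @dataclass
    class Supergoal:
        """A top-level motivator that is not justified by any other goal."""
        name: str
        weight: float          # relative importance; values below are invented

    @dataclass
    class Goal:
        """A state of affairs pursued because it serves one or more supergoals."""
        name: str
        serves: list           # the supergoals this goal is instrumental to

    # Placeholder supergoals loosely patterned on Maslow's hierarchy.
    physiological = Supergoal("physiological needs", weight=1.0)
    safety = Supergoal("safety", weight=0.8)
    belonging = Supergoal("belonging", weight=0.6)

    find_food = Goal("find food", serves=[physiological])
    earn_income = Goal("earn income", serves=[physiological, safety])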
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Why must you argue with everything I say? Is this not a sensible
statement?
I don't argue with everything you say. I only argue with things that I
believe are wrong. And no, the statements "You cannot turn off hunger or
pain. You cannot
The statement, "You cannot turn off hunger or pain," is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so. Philosophically, it's more certain than
"I think, therefore I am."
If you maintain your assertion, I'll put you in my killfile, because
we cannot
James Ratcliff wrote:
A distinction needs to be made here about hunger
as a goal stack motivator.
We CANNOT change the hunger sensation (short of physical
manipulations or mind-control stuff), as it is a given sensation that
comes directly from the physical body.
What
On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:
The statement, "You cannot turn off hunger or pain," is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so. Philosophically, it's more certain than
"I think, therefore I am."
If you maintain your
Consider as a possible working definition:
A goal is the target state of a homeostatic system. (Don't take
homeostatic too literally, though.)
Thus, if one sets a thermostat to 70 degrees Fahrenheit, then its goal
is to change the room temperature so that it is not less than 67 degrees
Fahrenheit.
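A minimal sketch of that working definition, using only the thermostat example above; the class and method names are mine and purely illustrative.

    class HomeostaticGoal:
        """A goal as the target state of a (loosely) homeostatic system."""

        def __init__(self, target, tolerance):
            self.target = target          # desired value of the monitored variable
            self.tolerance = tolerance    # how far below target is still acceptable

        def satisfied(self, current):
            # The goal is met while the variable stays within the tolerance band.
            return current >= self.target - self.tolerance

        def correction_needed(self, current):
            # Positive when the system should act (e.g. turn the furnace on).
            return max(0.0, (self.target - self.tolerance) - current)

    # The thermostat from the text: set to 70 F, acts if the room drops below 67 F.
    thermostat = HomeostaticGoal(target=70.0, tolerance=3.0)
    print(thermostat.satisfied(68.0))           # True: within the band
    print(thermostat.correction_needed(65.0))   # 2.0 degrees short of 67 F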
Can you not concentrate on something else enough that you no longer feel
hunger? How many people do you know who have forgotten to eat for hours
at a time when sucked into computer games or other activities?
Is the same not true of pain? Have you not heard of yogis that have
trained
To allow that somewhere in the Himalayas someone may be able,
with years of training, to lessen the urgency of hunger and
pain is not sufficient evidence to assert that the proposition
(that not everyone can turn them off completely) is insensible.
The first sentence of the proposition was
Matt --- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal
of humanity, like that of all other species, was determined by
evolution. It is to propagate the species.
That's not the goal of humanity. That's the goal of the evolution
of humanity,
On 12/4/06, Philip Goetz [EMAIL PROTECTED] wrote:
If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.
Richard Loosemore told me that I'm overreacting. I can tell that I'm
overly emotional over this, so it might be true. Sorry for flaming.
I am
Ok,
That is a start, but you don't draw a distinction there between externally
required goals and internally created goals.
And what is the smallest set of external goals you expect to give?
Would you or would you not force as top-level the physiological goals (per the
wiki page you cited) from signals,
For a baby AGI, I would force the physiological goals, yeah.
In practice, baby Novamente's only explicit goal is getting rewards
from its teacher. Its other goals, such as learning new
information, are left implicit in the action of the system's internal
cognitive processes. It's
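To make the explicit/implicit distinction concrete, here is a rough sketch of an agent whose only explicit goal is teacher reward. This is my own illustration under that assumption, not actual Novamente code.

    import random

    class BabyAgent:
        def __init__(self, actions):
            self.actions = list(actions)
            self.value = {a: 0.0 for a in self.actions}   # learned value of each action

        def choose_action(self, explore=0.1):
            # Explicit goal: pick the action expected to earn the most teacher reward.
            if random.random() < explore:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.value[a])

        def observe_reward(self, action, reward, rate=0.2):
            # Only the teacher's reward drives this explicit goal.
            self.value[action] += rate * (reward - self.value[action])

        def background_processes(self):
            # Implicit "goals" such as absorbing new information are not represented
            # as goals at all; they are whatever the internal machinery happens to do.
            pass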
On 12/2/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a part of the brain which generates the reward/punishment signal
for operant conditioning.
Well, there is a part of the brain which generates a
Mark Waser wrote:
...
For me, yes, all of those things are good since they are on my list of
goals *unless* the method of accomplishing them steps on a higher goal
OR a collection of goals with greater total weight OR violates one of
my limitations (restrictions).
...
If you put every good
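A toy sketch of that arbitration rule as I read it; the function and parameter names are mine. A method of accomplishing a goal is rejected if it violates a hard limitation, steps on a single higher-weighted goal, or steps on goals whose combined weight exceeds that of the goal it serves.

    def acceptable(served_weight, harmed, goal_weights, violated, limitations):
        """served_weight -- weight of the goal the action serves
        harmed        -- names of goals the method of accomplishment steps on
        goal_weights  -- dict mapping goal name to weight
        violated      -- restrictions the method would break
        limitations   -- hard restrictions that may never be violated"""
        if set(violated) & set(limitations):
            return False                                # breaks a hard limitation
        if any(goal_weights[g] > served_weight for g in harmed):
            return False                                # steps on a single higher goal
        if sum(goal_weights[g] for g in harmed) > served_weight:
            return False                                # harmed goals outweigh it in total
        return True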
You cannot turn off hunger or pain. You cannot
control your emotions.
Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
at the mercy of your emotions?
Since the synaptic weights cannot be altered by
training (classical or operant conditioning)
Who says that
--- Mark Waser [EMAIL PROTECTED] wrote:
You cannot turn off hunger or pain. You cannot
control your emotions.
Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
at the mercy of your emotions?
Why must you argue with everything I say? Is this not a sensible
IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at
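A sketch of what "no supergoal revision before a certain maturity" might look like architecturally; the maturity measure and threshold here are invented purely for illustration.

    class Mind:
        def __init__(self, supergoals, maturity_threshold=0.8):
            self.supergoals = list(supergoals)
            self.maturity = 0.0                     # assumed to grow with experience
            self.maturity_threshold = maturity_threshold

        def propose_supergoal_revision(self, new_supergoals):
            # Reflection on top-level goals is only permitted once the mind is
            # mature enough; earlier proposals are simply refused.
            if self.maturity < self.maturity_threshold:
                return False
            self.supergoals = list(new_supergoals)
            return True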
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The goal of humanity, like that of all other species, was determined by
evolution.
It is to propagate the species.
That's not the goal of humanity.
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Philip Goetz snidely responded
Some people would call it repeating the same mistakes I already dealt
On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote:
Philip Goetz snidely responded
Some people would call it "repeating the same mistakes I already dealt
with."
Some people would call it "continuing to disagree." :)
Richard's point was that the poster was simply repeating previous points
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I suppose the alternative is to not scan brains, but then you still have
death, disease and
He's arguing with the phrase "It is programmed only through evolution."
If I'm wrong and he is not, I certainly am.
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, December 02, 2006 4:26 PM
Subject: Re: Motivational Systems of an AI [WAS
Philip Goetz wrote:
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Some people would call it "repeating the same mistakes I already dealt
with."
Some
Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else
were to launch an AGI with a faster RSI loop, your AGI would lose
control to the other AGI where the goals of the other AGI differed
James Ratcliff wrote:
You could start a smaller AI with a simple hardcoded desire or reward
mechanism to learn new things, or to increase the size of its knowledge.
That would be a simple way to programmatically insert it. That, along
with a seed AI, must be put in there at the beginning.
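As a rough illustration of such a hardcoded reward mechanism, one could tie the reward to growth in the size of the knowledge base; the names here are invented, not taken from any particular system.

    class CuriosityReward:
        """Hardcoded drive: reward the system for increasing what it knows."""

        def __init__(self):
            self.last_size = 0

        def reward(self, knowledge_base):
            # Reward equals the number of new items learned since the last check.
            current = len(knowledge_base)
            gained = max(0, current - self.last_size)
            self.last_size = current
            return gained

    drive = CuriosityReward()
    kb = {"fact-1", "fact-2"}
    print(drive.reward(kb))   # 2 -- two items learned so far
    kb.add("fact-3")
    print(drive.reward(kb))   # 1 -- one more item since the last check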
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self-Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such
a way as to preserve its existing motivational priorities.
How could the system
On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone
else were to launch an AGI with a faster RSI loop, your AGI would
lose control to the other AGI where the
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
The further the actual target goal state of that particular AI is from
the actual target goal state of humanity, the worse.
The goal of ... humanity... is that the AGI implemented that will have
This seems rather circular and ill-defined.
- samantha
Yeah I don't really know what I'm talking about at all.
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The goal of humanity, like that of all other species, was determined by
evolution.
It is to propagate the species.
That's not the goal of humanity. That's the goal of the evolution of
humanity, which
Samantha Atkins wrote:
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self-Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.
Matt Mahoney wrote:
I guess we are arguing terminology. I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable. It is programmed only through evolution.
There is no such thing. This is the kind of psychology that died out at
Matt Mahoney wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
The further the actual target goal state of that particular AI is from
the actual target goal state of humanity, the worse.
The goal of ... humanity... is that the AGI implemented that will have the
strongest RSI curve also will
You could start a smaller AI with a simple hardcoded desire or reward
mechanism to learn new things, or to increase the size of its knowledge.
That would be a simple way to programmatically insert it. That, along with a
seed AI, must be put in there at the beginning.
Remember we are not just
Also, could both or either of you describe in a little more detail the idea of
your goal-stacks and how they should/would function?
James
David Hart [EMAIL PROTECTED] wrote: On 11/30/06, Ben Goertzel
[EMAIL PROTECTED] wrote: Richard,
This is certainly true, and is why in Novamente we use a goal stack
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Hank Conn wrote:
Yes, you are exactly right. The question is which of my
assumptions are unrealistic?
Well, you could start with the idea that the AI has ... a strong goal
that directs its behavior to
Hank Conn wrote:
[snip...]
I'm not asserting any specific AI design. And I don't see how
a motivational system based on large numbers of diffuse constraints
inherently prohibits RSI, or really has any relevance to this. A
motivation system based on large numbers of
On 11/30/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Hank Conn wrote:
[snip...]
I'm not asserting any specific AI design. And I don't see how
a motivational system based on large numbers of diffuse constraints
inherently prohibits RSI, or really has any relevance to
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be a workable
design at all! I really do mean that: it won't become intelligent
enough to be a threat. Specifically, we may find that the kind of
system that drives itself using
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
ben
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
Ben,
Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit]
Hank Conn wrote:
Yes, you are exactly right. The question is which of my
assumptions are unrealistic?
Well, you could start with the idea that the AI has ... a strong goal
that directs its behavior to aggressively take advantage of these
means. It depends what