--- On Fri, 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Re: Goedel machines ..PS
To: agi@v2.listbox.com
Date: Friday, August 29, 2008, 3:53 PM
Ben,
...
If RSI were possible, then you should see some signs of it within human
Charles,
It's a good example. What it also brings out is the naive totalitarian premise of RSI: the implicit assumption that you can comprehensively standardise your ways of representing and solving problems about the world (as well as the domains of the world itself). [This BTW has been the
I suspect that there's minimal value in thinking about mundane 'self improvement' (e.g. among humans or human institutions) in an attempt to understand AGI-RSI, and that thinking about 'weak RSI' (e.g. in a GA system or some other non-self-aware system) has value, but only insofar as it can
On Sat, Aug 30, 2008 at 8:54 AM, David Hart [EMAIL PROTECTED] wrote:
I suspect that there's minimal value in thinking about mundane 'self improvement' (e.g. among humans or human institutions) in an attempt to understand AGI-RSI,
Yes. To make a weak analogy, it's somewhat like
On Thu, Aug 28, 2008 at 8:29 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Vladimir Nesov [EMAIL PROTECTED] wrote:
AGI doesn't do anything with the question, you do. You answer the question by implementing Friendly AI. FAI is the answer to the question.
The question is: how could one
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here
One point is that if you have a system with N interconnected modules, you can approach RSI by having the system
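(A rough illustrative sketch of that module-by-module idea, in Python; the module names, the fitness measure, and the acceptance rule below are hypothetical, not taken from OpenCog or any actual system:)

import random

# Illustrative sketch only: a system of N interconnected modules, where
# "self-improvement" proceeds by varying one module at a time and keeping
# the change only if overall performance improves.

def benchmark(modules):
    # stand-in for a measure of whole-system performance
    return sum(m["skill"] for m in modules.values())

def propose_variant(module):
    # stand-in for generating a candidate replacement for one module
    return {"skill": module["skill"] + random.uniform(-0.5, 1.0)}

modules = {name: {"skill": 1.0} for name in ("perception", "memory", "planning")}

for step in range(100):
    name = random.choice(list(modules))
    before = benchmark(modules)
    old = modules[name]
    modules[name] = propose_variant(old)
    if benchmark(modules) <= before:
        modules[name] = old  # revert: the variant did not help the system as a whole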
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED]wrote:
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here
One point is that if you have a system
On Thu, Aug 28, 2008 at 9:08 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
One of the main motivations for the fast development of Friendly AI is that it can be allowed to develop superintelligence to police the human space from
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it
Isn't it an evolutionarily stable strategy for the modification-system module to change to a state where it does not change itself?
Not if the top-level goals are weighted toward long-term growth.
Let me give you a just-so story and you can tell me whether you think it likely. I'd be
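(A toy illustration of why a growth-weighted goal disfavors a "freeze yourself" modification; the numbers, names, and scoring rule are purely hypothetical:)

# Toy illustration with made-up numbers: score candidate self-modifications
# by projected long-term growth. A modification that freezes further change
# contributes no future improvement, so a growth-weighted goal never picks it.

HORIZON = 20  # hypothetical planning horizon, in improvement steps

def projected_growth(rate_per_step, steps=HORIZON):
    # crude projection: cumulative improvement over the horizon
    return rate_per_step * steps

candidates = {
    "freeze_self_modification": 0.0,  # the ESS-style "stop changing yourself" move
    "small_safe_tweak": 0.1,
    "riskier_rewrite": 0.3,
}

best = max(candidates, key=lambda c: projected_growth(candidates[c]))
print(best)  # -> 'riskier_rewrite'; freezing scores lowest under this goal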
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
Isn't it an evolutionarily stable strategy for the modification-system module to change to a state where it does not change itself?
Not if the top-level goals are weighted toward long-term growth.
Let me give you a just-so story and you can tell
Have you implemented a long term growth goal atom yet?
Nope, right now we're just playing with virtual puppies, who aren't really explicitly concerned with long-term growth
(plus of course various narrow-AI-ish applications of OpenCog components...)
Don't they have to specify a specific
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
Don't they have to specify a specific state? Or am I reading http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?
They don't have to specify a specific state. A goal could be some PredicateNode P expressing an abstract evaluation of state,
***
So it could be a specific set of states? To specify long term growth
as a goal, wouldn't you need to be able to do an abstract evaluation
of how the state *changes* rather than just the current state?
***
yes, and of course a GroundedPredicateNode could do that too ... the system can recall
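(An illustrative sketch of a predicate over recalled state history; this is NOT the actual OpenCog GroundedPredicateNode API, just the idea of evaluating how the state *changes* rather than the current state alone:)

# Illustrative only: a "grounded" predicate that evaluates long-term growth
# by comparing the current state against recalled earlier states.

history = []  # recalled snapshots of some scalar "capability" measure

def record_state(capability):
    history.append(capability)

def long_term_growth(window=10, threshold=0.05):
    # truth value of the "long-term growth" goal: has capability grown by at
    # least `threshold` over the last `window` recorded snapshots?
    if len(history) < window:
        return False
    return (history[-1] - history[-window]) >= threshold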
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
You start with "what is right?" and end with Friendly AI; you don't start with Friendly AI and close the circular argument. This doesn't answer the question, but it defines Friendly AI (in terms of right).
In
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Won't work, Moore's law is ticking, and one day a morally arbitrary self-improving optimization will go FOOM. We have to try.
I wish I had a response to that. I wish I could believe it was even possible.
To me, this is like saying
About Friendly AI...
Let me put it this way: I would think anyone in a position to offer funding
for this kind of work would require good answers to the above.
Terren
My view is a little different. I think these answers are going to come out
of a combination of theoretical advances with
I agree with that to the extent that theoretical advances could address the
philosophical objections I am making. But until those are dealt with,
experimentation is a waste of time and money.
If I were talking about how to build faster-than-lightspeed travel, you would want to know how I plan
Hi,
Your philosophical objections aren't really objections to my perspective, so far as I have understood them...
What you said is
I've been saying that Friendliness is impossible to implement because 1) it's a moving target (as in, changes through time), since 2) its definition is dependent
On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
You start with "what is right?" and end with Friendly AI; you don't start with Friendly AI and close the circular argument. This doesn't answer the question,
comments below...
[BG]
Hi,
Your philosophical objections aren't really objections to my perspective, so far as I have understood them...
[TS]
Agreed. They're objections to the Eliezer perspective that Vlad is arguing for.
[BG]
I don't plan to hardwire beneficialness (by which I may not mean precisely
On Sat, Aug 30, 2008 at 9:18 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Given the psychological unity of humankind, giving the focus of right to George W. Bush personally will be enormously better for everyone than going in any
[BG]
I do, however, plan to hardwire **a powerful, super-human capability for empathy** ... and a goal-maintenance system hardwired toward **stability of top-level goals under self-modification**. But I agree this is different from hardwiring specific goal content ... though it strongly
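(A hedged sketch of what a "top-level goal stability" check might look like; the goal names and the acceptance function are purely illustrative, not taken from OpenCog or any other actual system:)

# Hedged sketch of "stability of top-level goals under self-modification":
# accept a proposed self-modification only if the top-level goal content it
# would leave in place is unchanged.

TOP_LEVEL_GOALS = frozenset({"long_term_growth", "empathy_toward_humans"})

def accept_modification(goals_after_modification):
    # reject any change that would drift the top-level goal content
    return frozenset(goals_after_modification) == TOP_LEVEL_GOALS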
On Sat, Aug 30, 2008 at 9:20 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
My view is a little different. I think these answers are going to come out
of a combination of theoretical advances with lessons learned via
experimenting with early-stage AGI systems, rather than being arrived at
Hi
All who are interested in such topics and are willing to endure some raw speculative trains of thought may be interested in an essay I recently posted on goal-preservation in strongly self-modifying systems, which is linked to from this blog post
This is a good paper. Would read it again.
On 8/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi
All who are interested in such topics and are willing to endure some raw speculative trains of thought may be interested in an essay I recently posted on goal-preservation in strongly self-modifying
Terren Suydam [EMAIL PROTECTED] was quoted as saying:
I've been saying that Friendliness is impossible to implement because 1) it's a moving target (as in, changes through time), since 2) its definition is dependent on context (situational context, cultural context, etc.).
I think that Friendliness
You make the statement below as if it were a fact, and I don't believe it to be a fact at all.
If a disembodied AGI has models suggested by an embodied person, then that concept can have meaning in a real-world setting without the AGI actually having a body at all. If a disembodied AGI has a
On 8/29/08, David Hart [EMAIL PROTECTED] wrote:
The best we can hope for is that we participate in the construction and guidance of future AGIs such that they are able to, eventually, invent, perform and carefully guide RSI (and, of course, do so safely every single step of the way without
David: I know that some systems (specifically systems without models or a lot of human interaction) have had grounding problems, but your statement below seems to state something that is far from proven fact.
Your conclusions about concept of self and unembodied agent means ungrounded