--- On Fri, 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Re: Goedel machines ..PS
To: agi@v2.listbox.com
Date: Friday, August 29, 2008, 3:53 PM
Ben,
...
If RSI were possible, then you should see some signs of it within human
Charles,
It's a good example. What it also brings out is the naive totalitarian premise of RSI - the implicit premise that you can comprehensively standardise your ways to represent and solve problems about the world (as well as the domains of the world itself). [This BTW has been the
I suspect that there's minimal value in thinking about mundane 'self-improvement' (e.g. among humans or human institutions) in an attempt to understand AGI-RSI, and that thinking about 'weak RSI' (e.g. in a GA system or some other non-self-aware system) has value, but only insofar as it can
On Sat, Aug 30, 2008 at 8:54 AM, David Hart [EMAIL PROTECTED] wrote:
I suspect that there's minimal value in thinking about mundane 'self-improvement' (e.g. among humans or human institutions) in an attempt to understand AGI-RSI,
Yes. To make a somewhat weak analogy, it's somewhat like
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here
One point is that if you have a system with N interconnected modules, you can approach RSI by having the system
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED]wrote:
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here
One point is that if you have a system
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it
Isn't it an evolutionarily stable strategy for the modification system module to change to a state where it does not change itself?
Not if the top-level goals are weighted toward long-term growth
Let me give you a just-so story and you can tell me whether you think it likely. I'd be
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
Isn't it an evolutionarily stable strategy for the modification system module to change to a state where it does not change itself?
Not if the top-level goals are weighted toward long-term growth
Let me give you a just-so story and you can tell
Have you implemented a long-term growth goal atom yet?
Nope, right now we're just playing with virtual puppies, who aren't really explicitly concerned with long-term growth
(plus of course various narrow-AI-ish applications of OpenCog components...)
Don't they have to specify a specific
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
Don't they have to specify a specific state? Or am I reading http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?
They don't have to specify a specific state. A goal could be some PredicateNode P expressing an abstract evaluation of state,
***
So it could be a specific set of states? To specify long-term growth as a goal, wouldn't you need to be able to do an abstract evaluation of how the state *changes* rather than just the current state?
***
Yes, and of course a GroundedPredicateNode could do that too ... the system can recall
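To make the idea concrete: a goal like this needn't name any target state; it can score how the system's state has changed relative to what it recalls. The sketch below is hypothetical Python, not the actual OpenCog GroundedPredicateNode API - the class and method names are illustrative only, assuming some scalar "competence" measure the system can report.

```python
# Hypothetical sketch (NOT the real OpenCog API): a grounded goal
# predicate that scores "long-term growth" by comparing the current
# competence measure against a recalled rolling history, rather than
# matching any specific target state.

from collections import deque


class GrowthGoalPredicate:
    """Returns a truth value in [0, 1] reflecting state *change*."""

    def __init__(self, window=100):
        # Rolling memory of past scores the system "recalls".
        self.history = deque(maxlen=window)

    def evaluate(self, competence_score):
        """Higher than 0.5 when the current score exceeds the recalled past."""
        if not self.history:
            self.history.append(competence_score)
            return 0.5  # no history yet: neutral truth value
        baseline = sum(self.history) / len(self.history)
        self.history.append(competence_score)
        delta = competence_score - baseline
        # Squash the improvement into [0, 1]; 0.5 means "no growth".
        return max(0.0, min(1.0, 0.5 + delta))
```

The point of the sketch is only that the predicate is a function of history plus current state, so "growth" is evaluable without the goal ever specifying a particular end state.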
On 8/29/08, David Hart [EMAIL PROTECTED] wrote:
The best we can hope for is that we participate in the construction and guidance of future AGIs such that they are able to, eventually, invent, perform and carefully guide RSI (and, of course, do so safely every single step of the way without
Dave Hart:
MT: Sorry, I forgot to ask for what I most wanted to know - what form of RSI in any specific areas has been considered?
To quote Charles Babbage, I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
The best we can hope for is
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here
One point is that if you have a system with N interconnected modules, you can approach RSI by having the system separately think about how to improve each module.
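The module-wise approach described above can be sketched in a few lines: hold the N modules, propose a variant of one at a time, and keep it only if the whole system scores better. This is a minimal illustrative Python sketch, not anyone's actual implementation - `score` and `propose_variant` are assumptions standing in for whatever evaluation and modification machinery a real system would have.

```python
# Minimal sketch of per-module improvement: greedily vary one module
# at a time, keeping a variant only when the system-wide score rises.
# All names are illustrative, not from any real codebase.

def improve_modules(modules, score, propose_variant, rounds=5):
    """modules: dict of name -> module; score: dict -> float;
    propose_variant: module -> candidate module."""
    for _ in range(rounds):
        for name in list(modules):
            before = score(modules)
            old = modules[name]
            modules[name] = propose_variant(old)
            if score(modules) <= before:
                modules[name] = old  # revert: no system-level gain
    return modules
```

Note the design choice implicit in the thread: each candidate change is judged by the *whole* system's score, so improving one module cannot silently degrade the others' combined performance.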
Ben,
It looks like what you've thought about is aspects of the information processing side of RSI but not the knowledge side. IOW you have thought about the technical side but not about how you progress from one domain of knowledge about the world to another, or from one subdomain to another.
On Fri, Aug 29, 2008 at 6:53 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Ben,
It looks like what you've thought about is aspects of the information processing side of RSI but not the knowledge side. IOW you have thought about the technical side but not about how you progress from one domain of
Mike Tintner wrote:
You may have noticed that AGI-ers are staggeringly resistant to learning new domains.
Remember you are dealing with human brains. You can only write into long-term memory at a rate of 2 bits per second. :-)
AGI spans just about every field of science, from ethics to
Matt: AGI spans just about every field of science, from ethics to quantum mechanics, child development to algorithmic information theory, genetics to economics.
Just so. And every field of the arts. And history. And philosophy. And technology. Including social technology. And organizational
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in any specific areas has been considered?