Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Charles Griffiths
 --- On Fri, 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Ben,
...

 If RSI were possible, then you should see some signs of it within human society, of humans recursively self-improving - at however small a scale. You don't because of this problem of crossing and integrating domains. It can all be done, but laboriously and stumblingly, not in some simple, formulaic way. That is culturally a very naive idea.

I hope nobody minds if I interject with a brief narrative concerning a recent 
experience. Obviously I don't speak for Ben Goertzel, or anyone else who thinks 
RSI or recognizing superior intelligence is possible.

As it happened, I was looking for a new job a while back, and landed an interview with a major corporate entity. When I spoke to the HR representative, she bemoaned the lack of hiring standards, especially for her own department. "It's impossible," she said. "As a consultant explained it to us a few years ago, the corporation changes with each person we hire or fire, changes into a related but different entity. If we measure the intelligence of a corporation in terms of how well suited it is to profit from its environment, my job is to make sure that the people we hire (on average) result in the corporation becoming more intelligent." She looked at me for sympathy. "As if all our resources were enough to recognize (much less plan) an entity more intelligent than ourselves!" She had a point. "What's worse, we're expected to hire new HR staff and provide training that will make our department more effective at hiring new people." I nodded. That would lead to recursive self-improvement (RSI), which is clearly impossible. Finally she said I seemed like the sympathetic sort, and even though that had nothing to do with her worthless hiring criteria, I could have the job and start right away.

I thought about the problem later, and eventually concluded that one good HR 
strategy would be to form hundreds or thousands (millions?) of corporations 
with stochastic methods for hiring, firing, training, merging and creating 
spinoffs, perhaps using GP or MOSES or some such. Eventually, corporations 
would emerge with superior intelligence.
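
A toy sketch of that strategy, purely for illustration (Python; the "corporation" representation, the profit function and the mutation step are invented stand-ins, not GP or MOSES):

    import random

    def profit(policy):
        # Stand-in for "how well suited the corporation is to profit from its
        # environment"; real fitness would come from simulation or the market.
        return -abs(policy["hiring_bar"] - 0.7) - abs(policy["training_rate"] - 0.3)

    def mutate(policy):
        # Stochastic tweak to hiring/firing/training practices.
        return {k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
                for k, v in policy.items()}

    population = [{"hiring_bar": random.random(), "training_rate": random.random()}
                  for _ in range(100)]
    for generation in range(200):
        population.sort(key=profit, reverse=True)
        survivors = population[:20]                       # the most profitable firms
        spinoffs = [mutate(random.choice(survivors)) for _ in range(80)]
        population = survivors + spinoffs                 # spinoffs replace failed firms
    print(max(population, key=profit))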

The alternative would be a massive cross-disciplinary effort, only imaginable 
by a super-neo-da Vinci character who's a master of psychology, mathematics, 
economics, manufacturing, politics -- essentially every field of human 
knowledge, including medical sciences, history and the arts.

I guess it doesn't look too hopeful, so we're probably going to be stuck with 
hiring, firing and training practices that mean absolutely nothing, forever.

Charles Griffiths



Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Mike Tintner
Charles,

It's a good example. What it also brings out is the naively totalitarian premise of RSI - the implicit assumption that you can comprehensively standardise your ways of representing and solving problems about the world (as well as the domains of the world itself). [This, BTW, has been the implicit premise of literate, rational culture since Plato.]

The reason we encourage and foster competition in society - and competing, 
diverse companies and approaches - is that we realise that 
competition/diversity is a fundamental part of evolution, at every level, and 
necessary to keep developing better solutions to the problems of life.

What cog sci and AI haven't realised is that humans are also individually designed competitively, with conflicting emotions, ideas and ways of thinking inside themselves - a necessary structure for an AGI. And such conflict inevitably stands in the way of any RSI.

It'd be interesting to have Minsky's input here, because one thing he stands for is the principle that human/general minds have to be built kludge-ily, with many different ways to think - different knowledge systems. We clearly aren't meant to - and simply can't - think, for example, just logically and mathematically. Evolution and human evolution/history have relentlessly built these GIs up with ever more complex repertoires of knowledge representations and sensors, because that diversity is a good and necessary principle - the more complex you want your interactions with the world to be, the more of it you need.






Charles/MT: If RSI were possible, then you should see some signs of it within human society, of humans recursively self-improving - at however small a scale. You don't because of this problem of crossing and integrating domains. It can all be done, but laboriously and stumblingly, not in some simple, formulaic way. That is culturally a very naive idea.






Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread David Hart
I suspect that there's minimal value in thinking about mundane 'self improvement' (e.g. among humans or human institutions) in an attempt to understand AGI-RSI, and that thinking about 'weak RSI' (e.g. in a GA system or some other non-self-aware system) has value, but only insofar as it can contribute to an AGI-RSI system (e.g. the mechanics of Combo in OpenCog). Drawing the conclusion that strong RSI is impossible because it has not yet been observed is absurd, because there's no known system in existence today that is capable of strong RSI. A system capable of strong RSI must have broad abilities to deeply understand, reprogram and recompile its constituent parts before it can strongly recursively self-improve - that is, before it can create improved versions of itself (potentially heavily modified versions that must demonstrate their superior fitness in a competitive environment), with those new creations repeating the process to yield yet greater improvements ad infinitum.

-dave





Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
On Sat, Aug 30, 2008 at 8:54 AM, David Hart [EMAIL PROTECTED] wrote:


 I suspect that there's minimal value in thinking about mundane 'self
 improvement' (e.g. among humans or human institutions) in an attempt to
 understand AGI-RSI,



Yes.  To make a weak analogy, it's somewhat like thinking about people jumping up and down in the air in order to understand interstellar travel ;-p







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:

 About recursive self-improvement ... yes, I have thought a lot about it, but
 don't have time to write a huge discourse on it here

 One point is that if you have a system with N interconnected modules, you
 can approach RSI by having the system separately think about how to improve
 each module.  I.e. if there are modules A1, A2,..., AN ... then you can for
 instance hold A1,...,A(N-1) constant while you think about how to improve
 AN.  One can then iterate through all the modules and improve them in
 sequence.   (Note that the modules are then doing the improving of each
 other.)

I'm not sure what you are getting at here...

Is the modification system implemented in a module (Ai)? If so, how would it evaluate whether a modified Ai, call it Ai', did a better job?

What I am trying to figure out is whether the system you are describing could change into one in which modules A1 to A10 were modified twice as often as the other modules. Can it change itself so that it could remove a module altogether, or duplicate a module and specialise each of the copies to a different purpose?

  Will Pearson






Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED]wrote:

 2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
 
  About recursive self-improvement ... yes, I have thought a lot about it,
 but
  don't have time to write a huge discourse on it here
 
  One point is that if you have a system with N interconnected modules, you
  can approach RSI by having the system separately think about how to
 improve
  each module.  I.e. if there are modules A1, A2,..., AN ... then you can
 for
  instance hold A1,...,A(N-1) constant while you think about how to improve
  AN.  One can then iterate through all the modules and improve them in
  sequence.   (Note that the modules are then doing the improving of each
  other.)

 I'm not sure what you are getting at here...

 Is modification system implemented in a module (Ai)? If so how would
 evaluate whether a modification Ai, call it AI' did a better job?



The modification system is implemented in a module (subject to
modification), but this is a small module,
which does most of its work by calling on other AI modules (also subject to
modification)...

ben





Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:


 On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED]
 wrote:

 2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
 
  About recursive self-improvement ... yes, I have thought a lot about it,
  but
  don't have time to write a huge discourse on it here
 
  One point is that if you have a system with N interconnected modules,
  you
  can approach RSI by having the system separately think about how to
  improve
  each module.  I.e. if there are modules A1, A2,..., AN ... then you can
  for
  instance hold A1,...,A(N-1) constant while you think about how to
  improve
  AN.  One can then iterate through all the modules and improve them in
  sequence.   (Note that the modules are then doing the improving of each
  other.)

 I'm not sure what you are getting at here...

 Is modification system implemented in a module (Ai)? If so how would
 evaluate whether a modification Ai, call it AI' did a better job?

 The modification system is implemented in a module (subject to
 modification), but this is a small module,
 which does most of its work by calling on other AI modules (also subject to
 modification)...


Isn't it an evolutionarily stable strategy for the modification system module to change to a state where it does not change itself? [1] Let me give you a just-so story, and you can tell me whether you think it likely. If you don't, I'd be curious as to why.

Let us say the AI is trying to learn a different language (say French, with its genders), so the system finds it is better to concentrate its changes on the language modules, as these need the most updating. A modification to the modification module that completely concentrates modifications on the language module should then be the best at that time. But it would be frozen forever: once the need to vary the language module was past, it wouldn't be able to go back to modifying other modules. Short-sighted, I know, but I have yet to come across an RSI system that isn't either short-sighted or limited to what it can prove.

  Will

[1] Assuming there is no pressure on it for variation.




Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel


 Isn't it an evolutionary stable strategy for the modification system
 module to change to a state where it does not change itself?1



Not if the top-level goals are weighted toward long-term growth


 Let me
 give you a just so story and you can tell me whether you think it
 likely. I'd be curious as to why you don't.

 Let us say the AI is trying to learn a different language (say french
 with its genders), so the system finds it is better to concentrate its
 change on the language modules as these need the most updating. So a
 modification to the modification module that completely concentrates
 the modifications on the language module should be the best at that
 time. But then it would be frozen forever and once the need to vary
 the language module was past it wouldn't be able to go back to
 modifying other modules. Short sighted I know, but I have yet to come
 across an RSI system that isn't either short sighted or limited to
 what it can prove.


You seem to be assuming that subgoal alienation will occur, and the
long-term goal of dramatically increasing intelligence will be forgotten
in favor of the subgoal of improving NLP.  But I don't see why you
make this assumption; this seems an easy problem to avoid in a
rationally-designed AGI system, although not so easy in the context
of human psychology.

-- BenG





Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:



 Isn't it an evolutionary stable strategy for the modification system
 module to change to a state where it does not change itself?1

 Not if the top-level goals are weighted toward long-term growth


 Let me
 give you a just so story and you can tell me whether you think it
 likely. I'd be curious as to why you don't.

 Let us say the AI is trying to learn a different language (say french
 with its genders), so the system finds it is better to concentrate its
 change on the language modules as these need the most updating. So a
 modification to the modification module that completely concentrates
 the modifications on the language module should be the best at that
 time. But then it would be frozen forever and once the need to vary
 the language module was past it wouldn't be able to go back to
 modifying other modules. Short sighted I know, but I have yet to come
 across an RSI system that isn't either short sighted or limited to
 what it can prove.

 You seem to be assuming that subgoal alienation will occur, and the
 long-term goal of dramatically increasing intelligence will be forgotten
 in favor of the subgoal of improving NLP.  But I don't see why you
 make this assumption; this seems an easy problem to avoid in a
 rationally-designed AGI system, although not so easy in the context
 of human psychology.

Have you implemented a long-term-growth goal Atom yet? Don't they have to specify a specific state? Or am I reading
http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?

Also, do you have any information on how the top-level goal will play a part in assigning fitness in MOSES? How can you evaluate how good a change to a module will be for long-term growth, without allowing the system to run for a long time and measuring its growth?

  Will




Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel


 Have you implemented a long term growth goal atom yet?


Nope, right now we're just playing with virtual puppies, who aren't
really explicitly concerned with long-term growth

(plus of course various narrow-AI-ish applications of OpenCog components...)


 Don't they have
 to specify a specific state? Or am I reading
 http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?



They don't have to specify a specific state.  A goal could
be some PredicateNode P expressing an abstract evaluation of
state, programmed in Combo (a general purpose programming
language)...
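
Purely as illustration (a Python stand-in, not Combo, and the state fields are made up): a goal expressed as a predicate that scores arbitrary states, rather than naming one specific target state.

    def goal_predicate(state):
        # Hypothetical abstract evaluation of state; returns a degree of
        # satisfaction in [0, 1] instead of matching a single target state.
        knowledge = state.get("concepts_learned", 0)
        errors = state.get("prediction_error", 1.0)
        return min(1.0, knowledge / 1000.0) * (1.0 - min(1.0, errors))

    print(goal_predicate({"concepts_learned": 250, "prediction_error": 0.2}))  # 0.2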





 Also do you have any information on how the top level goal will play a
 part in assigning a fitness in Moses?


That comes down to the basic triad

Context & Procedure ==> Goal

The aim of the AI mind is to understand the context it's in, then learn or select a procedure that it estimates (infers) will have a high probability of helping it achieve its goal in the relevant context.

MOSES is a procedure learning algorithm...

This is described in the chapter on goal-oriented cognition in the OCP
wikibook...
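
To make the triad concrete, a toy sketch (entirely illustrative; the contexts, procedures and numbers are invented, and this is not MOSES or OpenCog code): select the procedure with the highest inferred probability of achieving the goal in the current context.

    # Toy illustration of Context & Procedure ==> Goal: choose the procedure
    # whose estimated P(goal achieved | context, procedure) is highest.
    estimates = {
        ("ball_nearby", "kick"): 0.7,
        ("ball_nearby", "fetch"): 0.9,
        ("ball_far", "walk_to_ball"): 0.8,
        ("ball_far", "fetch"): 0.4,
    }

    def select_procedure(context, procedures):
        return max(procedures, key=lambda p: estimates.get((context, p), 0.0))

    print(select_procedure("ball_nearby", ["kick", "fetch", "walk_to_ball"]))  # fetch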




 How can you evaluate how good a
 change to a module will be for long term growth, without allowing the
 system to run for a long time and measure its growth?


By inference...

... at least, that's the theory ;-)

ben





Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:


 Don't they have
 to specify a specific state? Or am I reading
 http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?

 They don't have to specify a specific state.  A goal could
 be some PredicateNode P expressing an abstract evaluation of
 state, programmed in Combo (a general purpose programming
 language)...

So it could be a specific set of states? To specify long term growth
as a goal, wouldn't you need to be able to do an abstract evaluation
of how the state *changes* rather than just the current state?



 Also do you have any information on how the top level goal will play a
 part in assigning a fitness in Moses?

 That comes down to the basic triad

 Context & Procedure ==> Goal

 The aim of the Ai mind is to understand the context it's in, then learn
 or select a procedure that it estimates (infers) will have a high
 probability
 of helping it achieve its goal in the relevant context.

 MOSES is a procedure learning algorithm...

 This is described in the chapter on goal-oriented cognition in the OCP
 wikibook...


Searching for 'goal' in the wikibook got me a whole lot of pages, none of them with 'goal' in the title. Is there any way to de-wiki the titles so that a search for 'goal' would pick up
http://opencog.org/wiki/OpenCogPrime:SchemaContextGoalTriad in its title? 'Goal' on its own picks up way too many text matches.

I'll have a read of it.



 How can you evaluate how good a
 change to a module will be for long term growth, without allowing the
 system to run for a long time and measure its growth?

 By inference...

 ... at least, that's the theory ;-)


What false-positive rate would you expect when classifying whether a change to the modification module leads to long-term growth?

  Will Pearson




Fwd: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
***

So it could be a specific set of states? To specify long term growth
as a goal, wouldn't you need to be able to do an abstract evaluation
of how the state *changes* rather than just the current state?
***

yes, and of course a GroundedPredicateNode could do that too ... the system
can recall its prior states and time-stamp the memories...
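
A minimal sketch of that idea (illustrative Python only, not the actual GroundedPredicateNode API; the "capability score" is an invented stand-in): evaluate growth by comparing time-stamped snapshots of state, rather than matching any single target state.

    import time

    history = []  # (timestamp, capability_score) pairs recorded by the system

    def record_snapshot(capability_score):
        history.append((time.time(), capability_score))

    def growing(window=10):
        # "Long-term growth" here: recent scores beat older scores on average.
        if len(history) < 2 * window:
            return False
        older = [score for _, score in history[-2 * window:-window]]
        recent = [score for _, score in history[-window:]]
        return sum(recent) / window > sum(older) / window

    for step in range(30):
        record_snapshot(step * 0.1)   # a toy, steadily improving capability
    print(growing())                  # True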


 This is described in the chapter on goal-oriented cognition in the OCP
 wikibook...


Searching for goal in the wikibook got me a whole lot of pages, none
of them with goal in the title. Is there any way to de-wiki the titles
so that a search for goal would pick up
http://opencog.org/wiki/OpenCogPrime:SchemaContextGoalTriad in its
title? Goal picks up way too many text searches.


Look at the series of pages in the chapter

http://opencog.org/wiki/OpenCogPrime:WikiBook#Goal-Oriented_Cognition

thx
ben





Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread David Hart
On 8/29/08, David Hart [EMAIL PROTECTED] wrote:


 The best we can hope for is that we participate in the construction and
 guidance of future AGIs such that they are able to, eventually, invent,
 perform and carefully guide RSI (and, of course, do so safely every single
 step of the way without exception).


I'm surprised that no one jumped on this statement, because it raises the question 'what is the granularity of a step?' (i.e. of an action).

The lower limit for the granularity of an action could conceivably be a
single instruction in a quantum molecular assembly language, while the upper
limit could be 'throwing the switch' on an AGI that is known to contain
modifications outside of safety parameters.

If I grok Ben's PreservationOfGoals paper, one implication is that it's desirable to figure out how to determine the maximum safe limit for the size (granularity) of all actions, such that no action is likely to break maintenance of the system's goals (where, presumably, friendliness/helpfulness is one of potentially many goals under maintenance). An AGI working within such a safety framework would experience self-imposed constraints on its actions, to the degree that many of the god-like AGI powers imagined in popular fiction may be provably unconscionable.
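
A toy sketch of that kind of self-imposed constraint (illustrative only, under the assumption that disruption to goal maintenance can be estimated numerically; nothing here is taken from the PreservationOfGoals paper itself):

    MAX_SAFE_DISRUPTION = 0.01   # assumed, system-specific bound on action granularity

    def estimate_goal_disruption(action):
        # Hypothetical estimator; a real system would infer this from its
        # goal system and world model before acting.
        return action.get("estimated_disruption", 1.0)

    def execute_if_safe(action, execute):
        if estimate_goal_disruption(action) <= MAX_SAFE_DISRUPTION:
            execute(action)
            return True
        return False   # too coarse-grained: refuse, or decompose into smaller actions

    ok = execute_if_safe({"name": "tweak_parameter", "estimated_disruption": 0.001},
                         execute=print)
    print(ok)  # True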

-dave





Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner

  Dave Hart: MT: Sorry, I forgot to ask for what I most wanted to know - what form of RSI in any specific areas has been considered?

  To quote Charles Babbage, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

  The best we can hope for is that we participate in the construction and guidance of future AGIs such that they are able to, eventually, invent, perform and carefully guide RSI (and, of course, do so safely every single step of the way without exception).

  Dave,

  On the contrary, it's an important question. If an agent is to self-improve and keep self-improving, it has to start somewhere - in some domain of knowledge, or some technique/technology of problem-solving... or something. Maths perhaps, or maths theorems? Have you or anyone else ever thought about where, and how? (It sounds like the answer is no.) RSI is for AGI a v. important concept - I'm just asking whether the concept has ever been examined with the slightest grounding in reality, or merely pursued as a logical conceit.

  The question is extremely important because, as soon as you actually examine it, something v. important emerges - the systemic interconnectedness of the whole of culture, and the whole of technology, and the whole of an individual's various bodies of knowledge - and you start to see why evolution of any kind, in any area of biology or society, technology or culture, is such a difficult and complicated business. RSI strikes me as a last-century, local-minded concept, not one for this century, where we are becoming aware of the global interconnectedness and interdependence of all systems.




Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel
About recursive self-improvement ... yes, I have thought a lot about it, but
don't have time to write a huge discourse on it here

One point is that if you have a system with N interconnected modules, you
can approach RSI by having the system separately think about how to improve
each module.  I.e. if there are modules A1, A2,..., AN ... then you can for
instance hold A1,...,A(N-1) constant while you think about how to improve
AN.  One can then iterate through all the modules and improve them in
sequence.   (Note that the modules are then doing the improving of each
other.)

What algorithms are used for the improving itself?

There is the evolutionary approach: to improve module AN, just make an
ensemble of M systems ... all of which have the same code for A1,...,A(N-1)
but different code for AN.   Then evolve this ensemble of varying artificial
minds using GP or MOSES or some such.
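
A minimal sketch of that module-by-module evolutionary scheme (purely illustrative Python; the "modules" are just numeric parameters, and the scoring and mutation are invented stand-ins, not OpenCog, GP or MOSES code):

    import random

    def system_score(modules):
        # Stand-in for how well the whole system fulfils its goals with this
        # particular combination of module versions.
        return -sum((value - 3.0) ** 2 for value in modules.values())

    def evolve_module(modules, name, ensemble=20, generations=50):
        # Hold every module except `name` constant; evolve an ensemble of
        # variants of that one module and keep the best.
        best_value, best_score = modules[name], system_score(modules)
        for _ in range(generations):
            for _ in range(ensemble):
                candidate = dict(modules)
                candidate[name] = best_value + random.gauss(0, 0.5)
                score = system_score(candidate)
                if score > best_score:
                    best_value, best_score = candidate[name], score
        modules[name] = best_value
        return modules

    modules = {"A1": 0.0, "A2": 0.0, "A3": 0.0}
    for name in list(modules):        # iterate through the modules in sequence
        modules = evolve_module(modules, name)
    print(modules)                    # each module drifts toward its optimum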

And then there is the probabilistic logic approach: seek rigorous
probability bounds of the odds that system goals will be better fulfilled if
AN is replaced by some candidate replacement AN'.

All this requires that the system's modules be represented in some language
that is easily comprehensible to (hence tractably modifiable by) the system
itself.  OpenCog doesn't take this approach explicitly right now, but we
know how to make it do so.  Simply make MindAgents in LISP or Combo rather
than C++.  There's no strong reason not to do this ... except that Combo is
slow right now (recently benchmarked at 1/3 the speed of Lua), and we
haven't dealt with the foreign-function interface stuff needed to plug in
LISP MindAgents (but that's probably not extremely hard).   We have done
some experiments before expressing, for instance, a simplistic PLN deduction
MindAgent in Combo.

In short the OpenCogPrime architecture explicitly supports a tractable path
to recursive self-modification.

But, notably, one would have to specifically switch this feature on --
it's not going to start doing RSI unbeknownst to us programmers.

And the problem of predicting where the trajectory of RSI will end up is a
different one ... I've been working on some theory in that regard (and will
post something on the topic w/ in the next couple weeks) but it's still
fairly speculative...

-- Ben G





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner
Ben,

It looks like what you've thought about is the information-processing side of RSI but not the knowledge side. IOW, you have thought about the technical side, but not about how you progress from one domain of knowledge about the world to another, or from one subdomain to another. That's the problem of general intelligence which, remember, is all about crossing domains.

The world (& knowledge about the world) are not homoarchic but heterarchic. The fact that you know about physics doesn't mean you can automatically learn about chemistry and then about biology. Each substantive and knowledge domain has its own rules and character. This is what emergence and evolution refer to. Even each branch/subdomain of maths and logic (and of most domains) has its own rules and character.

And all these different domains have not only to be learned, to some extent separately and distinctively, but integrated with each other. Hence it is that science is shot through with paradigms, as we try to integrate new, unfamiliar domains with old, familiar ones. And those paradigms, like the solar system for atomic physics, involve analogy and metaphor. This, to repeat, is the central problem of GI, which can be defined as creative generalization - and no one in AGI has yet offered (or, let's be honest, has) an idea of how to solve it.

Clearly, integrating new domains is a complicated and creative business, not simply a mathematical or recursive one. Hence, in part, people are so resistant to learning new domains. You may have noticed that AGI-ers are staggeringly resistant to learning new domains. They only want to learn certain kinds of representation and not others - principally maths/logic/language & programming - despite the fact that human culture offers scores of other kinds. They only deal with certain kinds of problems (related to the previous domains), despite the fact that culture and human life include a vast diversity of other problems. In this, they are fairly typical of the human race - everyone has resistance to learning new domains, just as organizations have strong resistance to joining up with other kinds of organizations. (But AGI-ers, who are supposed to believe in *General* Intelligence, should be at least aware and ashamed of their narrowness.)

Before you can talk about RSI, you really have to understand these problems of 
crossing and integrating domains (and why people are so resistant - they're not 
just being stupid or prejudiced). And you have to have a global picture of both 
the world of knowledge and the world-to-be-known.  Nobody in AGI does.

If RSI were possible, then you should see some signs of it within human society, of humans recursively self-improving - at however small a scale. You don't because of this problem of crossing and integrating domains. It can all be done, but laboriously and stumblingly, not in some simple, formulaic way. That is culturally a very naive idea.

Even within your own sphere of information technology, I am confident that RSI, even if it were for argument's sake possible, would present massive problems of having to develop new kinds of software, machine & organization to cope with the informational and hierarchical explosion - and still interface with other existing and continuously changing technologies.




Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel

On Fri, Aug 29, 2008 at 6:53 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Ben,

It looks like what you've thought about is aspects of the information
processing side of RSI but not the knowledge side. IOW you have thought
about the technical side but not abouthow you progress from one domain of
knowledge about the world to another, or from one subdomain to
another. That's  the problem of general intelligence which, remember, is all
about crossing domains.


Hmmm... it is odd that you make judgments regarding what I have or have not
*thought* about, based on what I choose to write in a brief email.  My goal
in writing emails on this list is not to completely unburden myself of all
my relevant thoughts ... if I did that, I would not have time to do anything
all day but write emails to this list ;-) ... and of course I still would
fail ... these are complex matters and there's a lot to say...

Yes, in that email I described the formal process of RSI and not the general
world-knowledge that an AGI will need in order to effectively perform RSI.

Before rewriting its own code substantially, an AGI will need to get a lot
of practice writing simpler code carrying out a variety of tasks in a
variety of contexts related to the system's own behavior...

But this should naturally happen.  For instance if an AGI needs to learn new
inference control heuristics and inference formulas, that is a sort of
preliminary step to learning new inference algorithms ... which is a
preliminary step to learning new kinds of cognition ... etc.   One can
articulate a series of steps toward progressively greater and greater
self-modification ...

But yes, each of these steps will require diverse knowledge ... but the
gaining of this knowledge is mostly not about RSI particularly, but rather
just part of one's overall AGI architecture ... intelligence as you say
being all about knowledge gathering, maintenance, creation and enaction



 Before you can talk about RSI, you really have to understand these problems
 of crossing and integrating domains (and why people are so resistant -
 they're not just being stupid or prejudiced). And you have to have a global
 picture of both the world of knowledge and the world-to-be-known.  Nobody in
 AGI does.


I think I do.  You think I don't.  Oh well.



 If RSI were possible, then you should see some signs of it within human
 society, of humans recursively self-improving - at however small a scale.
 You don't because of this problem of crossing and integrating domains. It
 can all be done, but laboriously and stumblingly not in some simple,
 formulaic way. That is culturally a very naive idea.


Similarly, if space travel were possible, humans would be flying around
unaided by technology from planet to planet, and star to star ;-p

ben





Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Matt Mahoney
Mike Tintner wrote:
You may have noticed that AGI-ers are staggeringly resistant to learning new 
domains. 

Remember you are dealing with human brains. You can only write into long term 
memory at a rate of 2 bits per second. :-)
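
For scale, a back-of-envelope calculation assuming that 2 bits/second figure (illustrative arithmetic only):

    seconds_per_year = 3.15e7
    bits = 2 * 30 * seconds_per_year      # ~30 years of learning at 2 bits/second
    print(round(bits / 8 / 1e6), "MB")    # roughly 236 MB of long-term memory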

AGI spans just about every field of science, from ethics to quantum mechanics, 
child development to algorithmic information theory, genetics to economics.

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner


Matt: AGI spans just about every field of science, from ethics to quantum 
mechanics, child development to algorithmic information theory, genetics to 
economics.


Just so. And every field of the arts. And history. And philosophy. And 
technology. Including social technology. And organizational technology. And 
personal technology. And the physical technologies of sport, dance, sex 
etc. The whole of culture and the world.


No, nobody can be a super-Da Vinci, knowing everything and solving every problem. But actually every AGI-er will have personal experience of solving problems in many different domains as well as their professional ones. And they should, I suggest, be able to use and integrate that experience into AGI. They should be able to metacognitively relate, say, the problem of tidying and organizing a room to the problem of organizing an argument in an essay, to the problem of creating an AGI organization, to the problem of organizing an investment portfolio, to the problem of organizing a soccer team - because that is the business and problem of AGI: crossing and integrating domains. Any and all domains. There should be a truly general culture. What I see is actually a narrow culture (even if AGI-ers are much more broadly educated than most) that only discusses a very limited set of problems - which are, in the final analysis, hard to distinguish from those of narrow AI - and a culture which refuses to consider any problems outside its intellectual/professional comfort zone.







[agi] Re: Goedel machines ..PS

2008-08-28 Thread Mike Tintner
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in 
any specific areas has been considered? 







Re: [agi] Re: Goedel machines ..PS

2008-08-28 Thread David Hart
On 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Sorry, I forgot to ask for what I most wanted to know - what form of RSI in
 any specific areas has been considered?


To quote Charles Babbage, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

The best we can hope for is that we participate in the construction and guidance of future AGIs such that they are able to, eventually, invent, perform and carefully guide RSI (and, of course, do so safely every single step of the way without exception).

-dave


