Ben,

It looks like what you've thought about is the information-processing side of 
RSI but not the knowledge side. IOW you have thought about the technical side 
but not about how you progress from one domain of knowledge about the world to 
another, or from one subdomain to another. That's the problem of general 
intelligence which, remember, is all about crossing domains.

The world (& knowledge about the world) are not homoarchic but heterarchic. The 
fact that you know about physics doesn't mean you can automatically learn about 
chemistry and then about biology. Each substantive and knowledge domain has its 
own rules and character. This is what emergence and evolution refer to. Even 
each branch/subdomain of maths and logic (and most domains) has its own rules 
and character.

And all these different domains have not only to be learned to some extent 
separately and distinctively, but integrated with each other. Hence it is that 
science is shot through with paradigms, as we try to integrate new unfamiliar 
domains with old familiar ones. And those paradigms, like the solar system for 
atomic physics, involve analogy and metaphor. This, to repeat, is the central 
problem of GI, which can be defined as creative generalization - a problem that 
no one in AGI has yet offered (or, let's be honest, has) an idea of how to solve.

Clearly, integrating new domains is a complicated and creative business, not 
simply a mathematical or recursive one. Hence, in part, people are so resistant 
to learning new domains. You may have noticed that AGI-ers are staggeringly 
resistant to learning new domains. They only want to learn certain kinds of 
representation and not others - principally maths/logic/language & programming 
- despite the fact that human culture offers scores of other kinds. They only 
deal with certain kinds of problems (related to the previous domains), despite 
the fact that culture and human life include a vast diversity of other 
problems. In this, they are fairly typical of the human race - everyone has 
resistance to learning new domains, just as organizations have strong 
resistance to joining up with other kinds of organizations. (But AGI-ers, who 
are supposed to believe in *General* Intelligence, should at least be aware and 
ashamed of their narrowness.)

Before you can talk about RSI, you really have to understand these problems of 
crossing and integrating domains (and why people are so resistant - they're not 
just being stupid or prejudiced). And you have to have a global picture of both 
the world of knowledge and the world-to-be-known. Nobody in AGI does.

If RSI were possible, then you should see some signs of it within human 
society, of humans recursively self-improving - at however small a scale. You 
don't, because of this problem of crossing and integrating domains. It can all 
be done, but laboriously and stumblingly, not in some simple, formulaic way. 
That is culturally a very naive idea.

Even within your own sphere of information technology, I am confident that RSI, 
even if it were for argument's sake possible, would present massive problems of 
having to develop new kinds of software, machine & organization to cope with 
the information and hierarchical explosion - and still interface with other 
existing and continuously changing technologies.



  Ben: About recursive self-improvement ... yes, I have thought a lot about it, 
but don't have time to write a huge discourse on it here.

  One point is that if you have a system with N interconnected modules, you can 
approach RSI by having the system separately think about how to improve each 
module. I.e. if there are modules A1, A2, ..., AN ... then you can for instance 
hold A1, ..., A(N-1) constant while you think about how to improve AN. One can 
then iterate through all the modules and improve them in sequence. (Note that 
the modules are then doing the improving of each other.)
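  The round-robin scheme described above can be sketched roughly as follows - a 
toy illustration in Python, not OpenCog code; `improve_module` and `evaluate` 
are hypothetical stand-ins supplied by the caller:

```python
# Toy sketch of round-robin module improvement: hold all modules fixed
# except one, propose an improvement, keep it only if the whole system
# scores better, then move on to the next module.
# (Module representation, improve_module and evaluate are illustrative,
# not actual OpenCog APIs.)

def improve_system(modules, improve_module, evaluate, n_passes=3):
    """Iterate over modules A1..AN, improving each while the rest are held fixed."""
    for _ in range(n_passes):
        for i in range(len(modules)):
            frozen = modules[:i] + modules[i + 1:]   # A1..A(i-1), A(i+1)..AN held constant
            candidate = improve_module(modules[i], frozen)
            trial = modules[:i] + [candidate] + modules[i + 1:]
            if evaluate(trial) > evaluate(modules):  # accept only genuine improvements
                modules = trial
    return modules
```

Note that `improve_module` is itself implemented by the other (frozen) modules, 
which is where the "modules improving each other" character comes from.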

  What algorithms are used for the improving itself?

  There is the evolutionary approach: to improve module AN, just make an 
ensemble of M systems ... all of which have the same code for A1,...,A(N-1) but 
different code for AN.   Then evolve this ensemble of varying artificial minds 
using GP or MOSES or some such.
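  As a minimal sketch of that evolutionary loop (random mutation plus 
truncation selection standing in for GP or MOSES, which are of course far 
richer; all names here are illustrative):

```python
# Toy evolutionary improvement of a single module AN: clone it M times with
# variation, score each variant by overall system fitness (with A1..A(N-1)
# implicitly held fixed inside system_fitness), and select over generations.
import random

def evolve_module(base_module, system_fitness, mutate, M=20, generations=10):
    """Evolve M variants of one module while the rest of the system is fixed."""
    population = [mutate(base_module) for _ in range(M)]
    for _ in range(generations):
        scored = sorted(population, key=system_fitness, reverse=True)
        survivors = scored[: M // 2]                 # keep the fitter half
        offspring = [mutate(random.choice(survivors))
                     for _ in range(M - len(survivors))]
        population = survivors + offspring
    return max(population, key=system_fitness)
```

Because the survivors carry over unchanged, the best fitness in the population 
never decreases from one generation to the next.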

  And then there is the probabilistic logic approach: seek rigorous bounds on 
the probability that system goals will be better fulfilled if AN is replaced 
by some candidate replacement AN'.
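  One crude way to make that concrete (a sketch only - Hoeffding's inequality 
used as a simple stand-in for whatever rigorous bounds one actually derives; 
the function and threshold are made up for illustration):

```python
# Decide whether replacing module AN with AN' is statistically justified,
# given goal-fulfilment scores in [0, 1] observed for each version.
# Hoeffding's inequality gives a bound on the chance the observed gain
# is illusory: for per-trial score differences in [-1, 1],
#   P(true mean gain <= 0) <= exp(-n * diff^2 / 2).
import math

def swap_is_justified(old_scores, new_scores, threshold=0.95):
    """Return True if AN' beats AN with confidence >= threshold."""
    n = min(len(old_scores), len(new_scores))
    diff = sum(new_scores[:n]) / n - sum(old_scores[:n]) / n
    if diff <= 0:
        return False                         # no observed improvement at all
    p_no_improvement = math.exp(-n * diff ** 2 / 2)
    return 1 - p_no_improvement >= threshold
```

The point of the bound is that a small observed gain over few trials does not 
justify a swap, while the same gain over many trials does.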

  All this requires that the system's modules be represented in some language 
that is easily comprehensible to (hence tractably modifiable by) the system 
itself. OpenCog doesn't take this approach explicitly right now, but we know 
how to make it do so: simply make MindAgents in LISP or Combo rather than C++. 
There's no strong reason not to do this ... except that Combo is slow right 
now (recently benchmarked at 1/3 the speed of Lua), and we haven't dealt with 
the foreign-function interface stuff needed to plug in LISP MindAgents (but 
that's probably not extremely hard). We have done some experiments before 
expressing, for instance, a simplistic PLN deduction MindAgent in Combo.

  In short the OpenCogPrime architecture explicitly supports a tractable path 
to recursive self-modification.

  But, notably, one would have to specifically "switch this feature on" -- it's 
not going to start doing RSI unbeknownst to us programmers.

  And the problem of predicting where the trajectory of RSI will end up is a 
different one ... I've been working on some theory in that regard (and will 
post something on the topic w/ in the next couple weeks) but it's still fairly 
speculative...

  -- Ben G


  On Fri, Aug 29, 2008 at 6:59 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:


      Dave Hart: MT: Sorry, I forgot to ask for what I most wanted to know - 
what form of RSI in any specific areas has been considered?

      To quote Charles Babbage, I am not able rightly to apprehend the kind of 
confusion of ideas that could provoke such a question.

      The best we can hope for is that we participate in the construction and 
guidance of future AGIs such that they are able to, eventually, invent, perform 
and carefully guide RSI (and, of course, do so safely every single step of the 
way without exception).

      Dave,

      On the contrary, it's an important question. If an agent is to 
self-improve and keep self-improving, it has to start somewhere - in some 
domain of knowledge, or some technique/technology of problem-solving... or 
something. Maths perhaps, or maths theorems? Have you or anyone else ever 
thought about where, and how? (It sounds like the answer is no.) RSI is for 
AGI a v. important concept - I'm just asking whether the concept has ever been 
examined with the slightest grounding in reality, or merely pursued as a 
logical conceit.

      The question is extremely important because as soon as you actually 
examine it, something v. important emerges - the systemic interconnectedness of 
the whole of culture, and the whole of technology, and the whole of an 
individual's various bodies of knowledge - and you start to see why evolution 
of any kind in any area of biology or society, technology or culture is such a 
difficult and complicated business. RSI strikes me as a last-century, 
local-minded concept, not one of this century, where we are becoming aware of 
the global interconnectedness and interdependence of all systems.

----------------------------------------------------------------------------
          agi | Archives  | Modify Your Subscription  




  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome" - Dr Samuel Johnson







-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
