Ahah!  Now I get it . . . . 

Interesting . . . . In a lot of ways, this is actually a (relatively minor) 
variant on what I'm always arguing about with Ben when I insist that more 
levels of encapsulation and modularity need to be added to the design of 
Novamente.  He seems to believe that if you start with the bottom-most 
level and tweak it enough, the system will miraculously self-assemble 
all the way up to a working intelligence -- whereas I don't see that 
happening by itself unless you start with a small, very limited system and 
run it through three billion years of evolution.

One of the ways in which I would get around the complexity problem is to 
decompose the intelligence into separate modules and develop each separately 
(i.e. don't insist that the same low-level neurons with the same parameters be 
the basis of everything from olfaction to vision to language).

Another way would be to realize that there isn't just one local-to-global 
disconnect but that there are, at a minimum, several layers of local-to-global 
disconnect.  BUT, if you recognize this, each level is far more limited in 
terms of the damage that it can do to the stability of the system as a whole, 
since the reach of the disconnect is far smaller and you're only dealing with 
one level of complexity instead of complexity-squared or -cubed.  (This is 
relevant because I believe that the complexity is *REASONABLY* limited at each 
level and only becomes totally intractable if you're dealing with multiple 
levels at the same time -- AND not even realizing it.)
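The arithmetic behind that point is easy to sketch.  Assuming (purely for illustration) that each level exposes some number c of interacting parameters, treating k levels as one undifferentiated system gives on the order of c^k combinations to reason about, while handling the levels one at a time gives only k*c:

```python
# Illustrative only: the figures for 'c' (interacting parameters per level)
# and 'k' (number of levels) are made up, not taken from any real system.

def tangled_complexity(c: int, k: int) -> int:
    """All k levels considered at once: combinations multiply (c ** k)."""
    return c ** k

def layered_complexity(c: int, k: int) -> int:
    """Each level handled on its own: costs merely add (k * c)."""
    return k * c

c, k = 20, 3  # hypothetical: 20 interacting parameters per level, 3 levels
print(tangled_complexity(c, k))  # 8000 -- "complexity-cubed"
print(layered_complexity(c, k))  # 60   -- one level at a time
```

The numbers are toy, but the shape of the gap (multiplicative vs. additive) is the whole argument.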

= = = = = = = 

Also, THANK YOU for the braided-rings example.  I may end up using it in a 
paper I'm working on.

Another interesting thing that I realize is that my whole approach to 
Friendliness uses exactly the approach that you recommend -- starting 
with *known* functioning systems developed by evolution and assuming that they 
are stable, reinforceable attractors since . . . . well, evolution has already 
PROVEN that they are.

Another way to rephrase a part of your argument might also be to say that many 
AGI system developers are *still* making the same "blank slate" error that 
psychologists and linguists are finally starting to get past.

Another way to rephrase another part is to say that many AGI system developers 
are dumping all the components of a ruptured cell into water and expecting 
them to re-assemble themselves into a functioning cell.  It just isn't 
going to happen.  Somehow, scaffolding needs to be built without horribly 
stepping all over the grounding problem -- and I think that the only way 
this can be done is to start with a rather small seed that then self-assembles. 
 I DO NOT believe that systems that assume infinities, or that look across 
vast corpora to create statistics, have *any* chance of working in this way.

Also, I believe that, by using good design, it is possible to decouple 
individual systems far more than evolution can.  As is evident from genetics, 
evolution spaghetti-codes, and even when cells (or other levels) do 
differentiate, there is still such a high percentage of what is held in 
common that there is very little attack surface to generate further differences 
(which is emphatically not true in object-oriented design).  I believe that an 
engineered mind is going to be *A LOT* less complex than an evolved mind, in 
exactly the same way that engineered cells are going to be a lot less complex 
than natural ones.
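A minimal sketch of the kind of decoupling I mean (the module names and parameters are purely hypothetical, chosen only to illustrate the point): each subsystem hides its own parameters behind a narrow shared interface, so its internals present no "attack surface" to the others -- the opposite of everything sharing one low-level substrate.

```python
# Illustrative sketch: engineered decoupling via a narrow interface.
# Each module keeps its own parameters private; the only thing the
# modules share is the small surface defined by PerceptualModule.

from abc import ABC, abstractmethod

class PerceptualModule(ABC):
    """The one thing all modules have in common: this interface."""
    @abstractmethod
    def process(self, signal: list) -> float: ...

class Olfaction(PerceptualModule):
    def __init__(self):
        self._threshold = 0.7  # private parameter; invisible to Vision

    def process(self, signal: list) -> float:
        peak = max(signal)
        return peak if peak > self._threshold else 0.0

class Vision(PerceptualModule):
    def __init__(self):
        self._gain = 2.5  # tuned independently of Olfaction's internals

    def process(self, signal: list) -> float:
        return self._gain * sum(signal) / len(signal)

# Tweaking Olfaction's threshold cannot perturb Vision: the only shared
# surface is process(), not the parameters behind it.
modules = [Olfaction(), Vision()]
outputs = [m.process([0.2, 0.9, 0.4]) for m in modules]
print(outputs)
```

Contrast with the evolved case, where (as above) so much substrate is held in common that a change anywhere can ripple everywhere.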

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/