Well, I could spend a lot of time replying to this, since it is a tough
subject. The CB system is a good example. My thinking doesn't involve CBs
yet, so the "organized mayhem" would take a different form; I was thinking
of the complexity being integrated differently.

What you are saying makes sense in terms of evolution finding the right
combination. The reliance on complexity: yes, sure, possible. The system
you describe reminds me of designing a complicated electronic circuit with
much theory but little hands-on experience: you run into complexity issues
from component value deviations and environmental factors that must be
tamed and filtered out before your theoretical electronic emergence comes
to life. In that case the result depends heavily on the clean design of
the interoperating components. BUT there are some circuits, I believe (I
can't think of any offhand), where the opposite is true: the circuit just
works, based on the interoperation of its complex subsystems, and it was
discovered rather than designed intentionally.
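
To make the first case concrete, here is a minimal Monte Carlo sketch (my
own illustration, with made-up design values) of how ordinary 5% component
tolerances smear out a "clean" theoretical design:

    # Toy tolerance analysis: an RC low-pass filter designed for a
    # ~1 kHz cutoff, built from 5%-tolerance parts. The spread in the
    # realized cutoff shows how fast component deviations erode the
    # theoretical design. All values are hypothetical.
    import math
    import random

    R_NOMINAL = 1.59e3   # ohms
    C_NOMINAL = 1e-7     # farads -> f_c = 1/(2*pi*R*C) ~ 1 kHz
    TOLERANCE = 0.05     # 5% parts

    cutoffs = []
    for _ in range(10_000):
        r = R_NOMINAL * random.uniform(1 - TOLERANCE, 1 + TOLERANCE)
        c = C_NOMINAL * random.uniform(1 - TOLERANCE, 1 + TOLERANCE)
        cutoffs.append(1.0 / (2.0 * math.pi * r * c))

    mean = sum(cutoffs) / len(cutoffs)
    print(f"mean cutoff {mean:.0f} Hz, "
          f"spread {min(cutoffs):.0f}-{max(cutoffs):.0f} Hz")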

If the complex systems problem is as you describe, then there is a serious
obstacle. I personally think that getting close to the human brain isn't
going to do it. A monkey brain is close. Can we get closer with a
simulation? I also think there are other designs that evolution on Earth
simply didn't find. Those other designs may have the same reliance on
complexity.

Building a complexity-based intelligence quite different from the human
brain's design, but still fundamentally dependent on complexity, is not
impossible, just formidable. Working with software systems that have
designed-in complexity, and getting predicted emergence (in this case
cognition) out of them, takes special talent. We have tools now that
nature and evolution didn't have. We understand things through collective
knowledge accumulated over time. It can be more than trial and error, and
the trial and error that remains can be narrowed down.

The part I wonder about is why this complex ingredient is there (if it
is). Is it because of the complexity spectrum inherent in nature? Is it
fundamentally non-understandable, or can it be derived from nature's
complexity structure? Is there such a computational resource barrier that
it is simply prohibitively expensive to calculate? Or are we perhaps using
the wrong mathematics to try to understand it? Can it be estimated, and
does it converge to anything we know of, or is it irreducibly random and
exact?

I feel, though, that the human brain had to evolve through the messy data
space of nature, and what we have is a momentary semi-reflection of that
historical environmental complexity. So our form of intelligence is
somewhat optimized for it. If you took the intersection with other
theoretical forms of intelligence, would the complexity properties somehow
correlate, or are they highly dependent on the environment of the
evolution? Or does our atom-based universe define what that evolutionary
cognitive complexity dependency is? I suppose that is the basis of the
arguments for and against.

John


> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> 
> There has always been a lot of confusion about what exactly I mean by
> the "complex systems problem" (CSP), so let me try, once again, to give
> a quick example of how it could have an impact on AGI, rather than
> restate the argument itself.
> 
> (One thing to bear in mind is that the complex systems problem is about
> how researchers and engineers should go about building an AGI.  The
> whole point of the CSP is to say that IF intelligent systems are of a
> certain sort, THEN it will be impossible to build intelligent systems
> using today's methodology).
> 
> What I am going to do is give an example of how the CSP might make an
> impact on intelligent systems.  This is only a made-up example, so try
> to see it as just an illustration.
> 
> Suppose that when evolution was trying to make improvements to the
> design of simple nervous systems, it hit upon the idea of using
> mechanisms that I will call "concept-builder" units, or CB units.  The
> simplest way to understand the CB units is to say that each one is
> forever locked into a peculiar kind of battle with the other units.  The
> CBs spend a lot of energy engaging in the battle with other CB units,
> but they also sometimes do other things, like fall asleep (in fact, most
> of them are asleep at any given moment), or have babies (they spawn new
> CB units) and sometimes they decide to lock onto a small cluster of
> other CB units and become obsessed with what those other CBs are doing.
> 
> So you should get the idea that these CB units take part in what can
> only be described as "organized mayhem".
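
[Interjecting an illustration here: if I translate these CB rules into a
toy program, I picture something like the sketch below. The rules and
numbers are my own guesses, purely illustrative, not anything from
Richard's post.]

    # Toy agent sketch of the CB dynamics described above.
    import random

    class CBUnit:
        def __init__(self, energy=1.0):
            self.energy = energy
            self.asleep = True      # most units are asleep at any moment
            self.watching = []      # a small cluster this unit obsesses over

        def step(self, population):
            if self.asleep:
                if random.random() < 0.1:       # occasionally wake up
                    self.asleep = False
                return None
            rival = random.choice(population)   # battle another unit
            if rival is not self:
                loser, winner = sorted((self, rival), key=lambda u: u.energy)
                transfer = 0.1 * loser.energy   # energy flows to the winner
                loser.energy -= transfer
                winner.energy += transfer
            if random.random() < 0.02:          # lock onto a small cluster
                self.watching = random.sample(population, k=3)
            if self.energy > 2.0 and random.random() < 0.05:
                self.energy /= 2                # "have a baby"
                return CBUnit(self.energy)
            if random.random() < 0.3:           # fall back asleep
                self.asleep = True
            return None

    population = [CBUnit() for _ in range(100)]
    for _ in range(1_000):
        babies = [unit.step(population) for unit in population]
        population.extend(b for b in babies if b is not None)
    print(f"{len(population)} units after the organized mayhem")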
> 
> Now, if we were able to look inside a CB system and see what the CBs are
> doing [Note:  we can do this, to a limited extent:  it is called
> "introspection"], we would notice many aspects of CB behavior that were
> quite regular and sensible.  We would say, for example, that the CB
> units appear to be representing concepts like [chair] and [upside-down]
> and [desperation], and we would also say that when some CB units have
> babies, it looks rather like a couple of existing concepts being
> combined to form a new concept.
> 
> In fact, we might notice so many regular, ordered, understandable things
> happening in the CB-system that we would start to believe that the CB
> units were not engaging in what I just called "organized mayhem" at all!
>   We might say that the whole thing was pretty comprehensible and
> ordered.
> 
> In fact, we might be tempted to try to build a version of the system in
> which the behaviors were tidied up and cleaned  -  a system in which the
> 'meaning' of each CB unit was precisely defined, and in which the
> building of new CBs always proceeded in a very precise, understandable
> way.  And then, after we started our project to build a "cleaned-up"
> version of a CB system, we would say that all we were doing was
> eliminating a lot of wasteful noise and inefficiency in the original CB
> system that was built by evolution.
> 
> But now, here is a little problem that we have to deal with.  It turns
> out that the CB system built by evolution was functioning *because* of
> all that chaotic, organized mayhem, *not* in spite of it.  It was not
> really a nice, organized, understandable mechanism plus a bit of noise
> and wastefulness ... it was a mechanism whose proper functioning
> absolutely depended on a proper balance of those fighting CB units.  In
> fact, the overall intelligence of the system would drop like a stone if
> some of those mechanisms were taken away.  It was like an ecology:  all
> the competing species are in perfect balance, not because they are
> cooperating so that everyone gets the resources they need, but because
> nobody is cooperating with anyone else at all.
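
[A small, familiar analogue of "working because of the noise, not in
spite of it" (my example, not Richard's) is stochastic search: a greedy,
"cleaned-up" hill climber gets trapped on a local peak, while an annealed
version that sometimes accepts bad moves finds the global one.]

    # Noise as a load-bearing ingredient: simulated annealing on a
    # landscape with a deceptive local peak. All numbers are made up.
    import math
    import random

    def landscape(x):
        # Local peak at x=2 (height 1), global peak at x=9 (height 2).
        return max(1 - 0.2 * (x - 2) ** 2, 2 - 0.2 * (x - 9) ** 2)

    def climb(noisy, steps=20_000):
        x, temp = 0.0, 2.0
        for _ in range(steps):
            cand = x + random.uniform(-1, 1)
            delta = landscape(cand) - landscape(x)
            if delta > 0 or (noisy and random.random() < math.exp(delta / temp)):
                x = cand
            temp = max(temp * 0.9995, 0.01)     # cool down slowly
        return x

    random.seed(1)
    for noisy in (False, True):
        print(f"noisy={noisy}: ended near x={climb(noisy):.1f} "
              f"(global peak at x=9)")

[Deleting the "wasteful" random acceptances leaves the climber permanently
stuck on the local peak.]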
> 
> Now, here comes a crucial idea that many people seem to miss.  The CB
> system would not have to be as bad as an ecology, with intelligence
> "emerging" suddenly out of complete randomness.  I would not expect the
> situation to be nearly as drastic as that.  Instead, it could be the
> case that the CB system is 99% understandable, but with just a 1% part
> that appears to serve no meaningful purpose.  Really:  just a small
> touch of random, incomprehensible stuff embedded in what otherwise seems
> to be a fairly logical system.
> 
> But so what, you might ask?  We can just go ahead and build a "cleaned
> up" version of a CB system, and when we get it mostly working, we just
> look at what else we need to add to it, to get that extra ingredient of
> random, incomprehensible stuff that nature seems to have included in its
> design.  One way or another, you might think, we find our own substitute
> for that last 1%:  we could figure out how the last 1% actually has an
> effect on the natural system, and having understood it, we build an
> equivalent for our system.  Or, we just keep tweaking some parameters
> until we get there.
> 
> Sorry:  not going to work.  This is why the CSP is so serious.  If you
> start by building a cleaned-up version of the system, and then try to
> discover those last few mechanisms, you will find that there is no
> relationship between what you want those mechanisms to do for the
> system, and what the mechanisms actually look like.  Repeat:  no
> relationship at all.  You might as well start by opening up God's
> Compleat Catalogue of All Possible Mechanisms, then work your way
> through the book from "Aaaaaaaaaaaaaabbbcceddghjduy" on page 1 to
> "Zzzzzzzzzzzzzzzzyxyshzsusido" on page Googolplex, because any one of
> those mechanisms might be the one that does the trick.
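
[A back-of-envelope on why that catalogue is hopeless (my arithmetic,
with made-up figures): even if each candidate mechanism were describable
in only 30 letters, an unguided search faces 26^30 entries.]

    # Brute force over "God's Compleat Catalogue", roughly costed.
    catalogue = 26 ** 30             # 30-letter entries, 26-letter alphabet
    per_second = 10 ** 12            # a generous trillion trials per second
    seconds_per_year = 3.15e7
    years = catalogue / per_second / seconds_per_year
    print(f"{catalogue:.2e} mechanisms -> ~{years:.1e} years of brute force")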
> 
> Even worse, if you start by building your own version of the CB system,
> there might not be any extra mechanisms you can add to it to get it up
> to the same level of performance as nature's CB system.
> 
> Those last two paragraphs can be summarized as follows.  Evolution
> explored the space of possible intelligent mechanisms.  In the course of
> doing so, it discovered a class of systems that work, but it may well be
> that the ONLY systems in the whole universe that can function as well as
> a human intelligence involve a small percentage of weirdness that just
> balances out to make the system work.  There may be no cleaned-up
> versions that work.
> 
> That leaves you, the would-be system builder, with these choices:
> 
> (1) Build your own cleaned-up, rationalized version anyway, and hope
> that it will work without any complex-weirdness.  If the above
> circumstances are the real situation, this will never work.
> 
> (2) Build your own cleaned-up, rationalized version anyway, and then try
> to find out what extra complex-weirdness you have to add to it to make
> it work.  If the above circumstances are the real situation, this
> could conceivably work, but you will have to go through God's Compleat
> Catalogue of All Possible Mechanisms in order to do it:  you could
> therefore need a computer the size of a planet and spend the same amount
> of time that evolution spent on the problem.  It would probably not be
> quite as long as that, but even if it were one billionth the effort, it
> would still take forever.
> 
> (3) Build a version of the CB system that is as close as possible to the
> human cognitive system, because we know that evolution has already done
> the heavy lifting.  If the above circumstances are the real situation,
> this could work, provided we can get close enough to the human design to
> close the gap with some jiggling of parameters.
> 
> Option 1 is what many AI researchers have been doing, and continue to
> do.  When you tell these people that they should take some notice of the
> complex systems issue, they don't simply disagree with you, they start
> foaming at the mouth and screaming "fraud!" or "crackpot!".  Eliezer
> Yudkowsky is the paradigm case of this reaction.
> 
> Option 2 is what some more enlightened AGI researchers are doing.  Ben
> is in this group.
> 
> Now the final conclusions.
> 
> A)  Please be clear about one thing.  This argument is about risks, not
> about certainties.  I am not saying "All intelligent systems are
> DEFINITELY complex systems", I am asking "What is the chance that they
> are complex?".  I am not saying "The impact of complexity would
> DEFINITELY be catastrophic", I am asking "What are the risks of failure,
> if we ignore the problem and it turns out to be real?".  My conclusion
> is that the risks are so high that we should assume the worst.
> 
> B)  One difficulty with all of this is that when you look into the
> detailed argument you find that it will never be possible to ascertain
> the exact risk of this scenario being real.  Whatever decision you make,
> you will have to make it blind.  No waiting around to find out for sure
> that there is a problem.
> 
> C)  Lastly, if you look at what you would expect to happen if the
> CSP is a real problem, you will notice something interesting:  the
> pattern of failure we have seen over the last fifty years in AI is
> exactly what we would have expected if the CSP was indeed as real as I
> think it is.
> 
> 
> 
> 
> Richard Loosemore
> 


