There is no doubt that complexity, in the sense typically used in
dynamical systems theory, presents a major issue for AGI systems.  Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a major problem.  The question is whether
one has an adequate theory of one's system to allow one to do this
without an intractable amount of trial and error.  Loosemore -- if I
interpret him correctly -- seems to be suggesting that for powerful
AGI systems no such theory can exist, in principle.  I doubt very much
that this is correct.

-- Ben G

On Dec 6, 2007 9:40 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Jean-Paul,
>
> Although complexity is one of the areas associated with AI where I have less
> knowledge than many on the list, I was aware of the general distinction you
> are making.
>
> What I was pointing out in my email to Richard Loosemore was that the
> definitions in his paper "Complex Systems, Artificial Intelligence and
> Theoretical Psychology" for "irreducible computability" and "global-local
> interconnect" are not themselves totally clear about this distinction.  As
> a result, when Richard says that those two issues are an unavoidable part
> of AGI design that must be much more deeply understood before AGI can
> advance, then under the looser definitions, which would cover the types of
> complexity involved in large matrix calculations and the design of a
> massive supercomputer, those issues would of course arise in AGI design,
> but they would be no big deal, because we have a long history of dealing
> with them.
>
> But in my email to Richard I said I was assuming he was not using these
> looser definitions of the words, because if he were, they would not present
> the unexpected difficulties of the type he has been predicting.  I said I
> thought he was dealing more with the potentially unruly type of complexity
> I assume you were talking about.
>
> I am aware of that type of complexity being a potential problem, but I have
> designed my system to hopefully control it.  A modern-day well functioning
> economy is complex (people at the Santa Fe Institute often cite economies as
> examples of complex systems), but it is often amazingly unchaotic
> considering how loosely it is organized and how many individual entities it
> has in it, and how many transitions it is constantly undergoing.  Usually,
> unless something bangs on it hard (such as the price of a major commodity
> suddenly tripling), it has a fair amount of stability, while
> constantly creating new winners and losers (which is a productive form of
> mini-chaos).  Of course in the absence of regulation it is naturally prone
> to boom and bust cycles.
>
> So the system would need regulation.
>
> Most of my system operates on a message-passing architecture with little
> concern for synchronization; it does not require low latencies, and most of
> its units run fairly similar code.  But hopefully, when you get it all
> working together, it will be fairly dynamic, with that dynamism under
> multiple controls.
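[The loose-synchronization scheme Ed describes can be caricatured in a few lines -- a generic message-passing sketch of my own, not his actual design: independent units with mailboxes, no shared clock, each running the same code.]

```python
# Purely illustrative: asynchronous message passing among loosely
# synchronized units, each running similar code.  Not Ed's actual design.
import queue
import threading

N_UNITS = 4
inbox = {i: queue.Queue() for i in range(N_UNITS)}  # one mailbox per unit
results = queue.Queue()

def unit(i):
    # Each unit consumes messages at its own pace; None is a shutdown signal.
    while True:
        msg = inbox[i].get()
        if msg is None:
            return
        results.put((i, msg * 2))  # trivial "processing": double and report

threads = [threading.Thread(target=unit, args=(i,)) for i in range(N_UNITS)]
for t in threads:
    t.start()
for i in range(N_UNITS):
    inbox[i].put(i + 1)   # send each unit one message
    inbox[i].put(None)    # then tell it to shut down
for t in threads:
    t.join()

out = sorted(results.get() for _ in range(N_UNITS))
print(out)  # [(0, 2), (1, 4), (2, 6), (3, 8)]
```

Note there is no global synchronization step: units run whenever messages arrive, which is the property Ed is relying on.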
>
> I think we are going to have to get such systems up and running to find out
> just how hard or easy they will be to control, which I acknowledged in my
> email to Richard.  I think that once we do, we will be in a much better
> position to think about what is needed to control them.  I believe such
> control will be one of the major intellectual challenges in getting AGI to
> function at a human level.  The issue is not only preventing runaway
> conditions; it is also optimizing the intelligence of the inferencing,
> which I think will be even more important and difficult.  (There are all
> sorts of damping mechanisms and selective biasing mechanisms that should be
> able to prevent many types of chaotic behavior.)  But I am quite confident
> that with multiple teams working on it, these control problems could be
> largely overcome in several years, with the systems themselves doing most
> of the learning.
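[A toy illustration of the kind of damping mechanism Ed alludes to -- my own sketch, not any actual AGI design: a proportional damping term pins an otherwise chaotic map to a target state.]

```python
# Illustrative sketch only: proportional damping applied to the logistic
# map.  Undamped (k = 0) the map is chaotic at r = 3.9; with strong
# damping toward a target, iterates converge to that target instead.
def step(x, r=3.9, k=0.0, target=0.0):
    y = r * x * (1.0 - x)        # nonlinear update (chaotic when k = 0)
    return y - k * (y - target)  # pull the update back toward the target

x_star = 1.0 - 1.0 / 3.9         # the map's (unstable) fixed point
x = 0.3
for _ in range(200):
    x = step(x, k=0.9, target=x_star)
print(abs(x - x_star))           # effectively zero: damping tames the chaos
```

With k = 0.9 the effective map is x ↦ 0.1·f(x) + 0.9·x*, whose derivative is bounded by 0.39, so it contracts to x* -- a minimal version of "damping prevents chaotic behavior."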
>
> Even a little OpenCog AGI on a PC could be an interesting first indication
> of the extent to which complexity will present control problems.  As I
> said, if you had 3 GB of RAM for representation, that should allow about 50
> million atoms.  Over time you would probably end up with at least hundreds
> of thousands of complex patterns, and it would be interesting to see how
> easy it would be to properly control them, and to get them to work together
> as a properly functioning thought economy in whatever small interactive
> world they develop their self-organizing pattern base.  Of course, on such
> a PC-based system you would only, on average, be able to do about 10
> million pattern-to-pattern activations a second, so you would be talking
> about a fairly trivial system; but with, say, 100K patterns, it would be a
> good first indication of how easy or hard AGI systems will be to control.
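[A quick back-of-envelope check of Ed's numbers; the 3 GB, 50 million, 10 million, and 100K figures are his, while the per-atom and per-pattern rates are my inference from them.]

```python
# Sanity-checking the figures quoted in the email above.
ram_bytes = 3 * 10**9             # 3 GB of RAM set aside for representation
atoms = 50 * 10**6                # Ed's estimate of ~50 million atoms
bytes_per_atom = ram_bytes / atoms
print(bytes_per_atom)             # 60.0 -> the estimate implies ~60 bytes/atom

activations_per_sec = 10 * 10**6  # ~10 million pattern-to-pattern activations/s
patterns = 100 * 10**3            # ~100K complex patterns
print(activations_per_sec / patterns)  # 100.0 activations per pattern per second
```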
>
> Ed Porter
>
> -----Original Message-----
> From: Jean-Paul Van Belle [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 06, 2007 1:34 AM
> To: agi@v2.listbox.com
>
> Subject: RE: [agi] None of you seem to be able ...
>
> Hi Ed
>
> You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by
> 'complexity' (as opposed to the common usage of complex meaning difficult).
> It is not the *number* of calculations or interconnects that gives rise to
> complexity or chaos, but their nature.  E.g. calculating the eigenvalues of
> an n=10^10000 matrix is *very* difficult but not complex.  So the large
> matrix calculations, map-reduces or the BlueGene configuration are, in this
> sense, very simple.  A map-reduce or matrix calculation is typically one
> line of code (at least in Python - which is where Google probably gets the
> idea from :)
>
> To make them complex, you need to go beyond that.
> E.g. a 500K-node 3-layer neural network is simplistic (not simple :),
> whereas chaining only 10K NNs together (each with 10K inputs/outputs) in a
> random network (with only a few of these NNs serving as input or output
> modules) would produce complex behaviour, especially if for each iteration
> the input vector changes dynamically.  Note that the latter has FAR FEWER
> interconnects, i.e. would need far fewer calculations, but its behaviour
> would be impossible to predict (you can only simulate it), whereas the
> behaviour of the 500K-node network is much more easily understood.
> BlueGene has a simple architecture.  A network of computers that do mainly
> the same thing (e.g. the GooglePlex) has predictable behaviour; however, if
> each computer acts/behaves very differently (I guess on the internet we
> could classify users into a number of distinct agent-like behaviours),
> you'll get complex behaviour.  It's the difference in complexity between an
> 8 Gbit RAM chip and, say, an old P3 CPU chip.  The latter has less than
> one-hundredth of the transistors but is far more complex and displays
> interesting behaviour; the former doesn't.
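[Jean-Paul's distinction can be seen in miniature -- a toy sketch of my own, not from the thread: two maps of identical size, one chaotic and one contracting, respond completely differently to a one-billionth perturbation. Predictability is a property of the dynamics, not the amount of computation.]

```python
# Illustrative sketch only: complexity depends on the nature of the
# dynamics, not on problem size.
def iterate(f, x, n=60):
    for _ in range(n):
        x = f(x)
    return x

chaotic = lambda x: 3.9 * x * (1.0 - x)  # logistic map in its chaotic regime
simple  = lambda x: 0.5 * x + 0.2        # linear contraction (fixed point 0.4)

eps = 1e-9                               # a one-billionth perturbation
d_chaotic = abs(iterate(chaotic, 0.3) - iterate(chaotic, 0.3 + eps))
d_simple  = abs(iterate(simple, 0.3) - iterate(simple, 0.3 + eps))
print(d_chaotic, d_simple)  # the chaotic map amplifies eps; the simple one erases it
```

The chaotic trajectory can only be simulated, never shortcut; the contracting one is fully predictable, exactly the difference Jean-Paul draws between the random NN chain and the 500K-node feed-forward net.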
>
> Jean-Paul
> >>> On 2007/12/05 at 23:12, in message
> <[EMAIL PROTECTED]>,
> "Ed Porter" <[EMAIL PROTECTED]> wrote:
> >       Yes, my vision of a human AGI would be a very complex machine.  Yes,
> > a lot of its outputs could only be made with human-level reasonableness
> > after a very large amount of computation.  I know of no shortcuts around
> > the need to do such complex computation.  So it arguably falls into what
> > you say Wolfram calls "computational irreducibility."
> >       But the same could be said for many types of computations, such as
> > large matrix equations or Google's map-reduces, which are routinely
> > performed on supercomputers.
> >       So if that is how you define irreducibility, it's not that big a
> > deal.  It just means you have to do a lot of computing to get an answer,
> > which I have assumed all along for AGI.  (Remember, I am the one pushing
> > for breaking the small-hardware mindset.)  But it doesn't mean we don't
> > know how to do such computing, or that we have to do a lot more complexity
> > research, of the type suggested in your paper, before we can successfully
> > design AGIs.
> [...]
> >       Although it is easy to design systems whose behavior would be
> > sufficiently chaotic that such design would be impossible, it seems
> > likely that it is also possible to design complex systems in which the
> > behavior is not so chaotic or unpredictable.  Take the internet.
> > Something like 10^8 computers talk to each other, and in general it works
> > as designed.  Take IBM's supercomputer BlueGene/L: 64K dual-core
> > processors, each with at least 256 MB of memory, all capable of receiving
> > and passing messages at 4 GHz in each of over 3 dimensions, and capable
> > of performing hundreds of trillions of FLOPs.  Such a system probably
> > contains at least 10^14 nonlinear, separately functioning elements, and
> > yet it works as designed.  If there is a global-local disconnect in
> > BlueGene/L, which there could be depending on your definition, it is not
> > a problem for most of the computation it does.
>
> --
>
> Research Associate: CITANDA
> Post-Graduate Section Head
> Department of Information Systems
> Phone: (+27)-(0)21-6504256
> Fax: (+27)-(0)21-6502280
> Office: Leslie Commerce 4.21
>
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>

