Shane,

It's not something weird with the AGI list; it seems to be something weird
with my own mail server, which yesterday sent out a bunch of old emails of
mine for some reason I don't understand...  I think the problem is solved
now.

I'll reply to your ideas on AGI shortly ;-)

ben


> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Shane Legg
> Sent: Wednesday, March 03, 2004 6:25 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] Complexity of Evolving an AGI
>
>
> Ciao,
>
> Is something weird going on with the AGI list here?  I just got two
> emails claiming to be from a month or so ago that were actually
> sent today...
>
> Anyway, in reply to Ben's email,
>
> Ben Goertzel wrote:
> >
> > But the different trials need not be independent --- we can save the
> > trajectory of each AI's development continuously, and then restart a new
> > branch of "AI x at time y" for any recorded AI x at any recorded time
> > point y.
> >
> > Also, we can intentionally form composite AIs by taking portions of AI
> > x's mind and portions of AI y's mind and fusing them together into a
> > new AI z...
> >
> > So we don't need to follow a strict process of evolutionary trial and
> > error, which may accelerate things considerably --- particularly if, as
> > experimentation progresses, we are able to learn abstract theories about
> > what makes some AIs smarter or stabler or friendlier than others.
>
> I agree totally.  Indeed I advocate going further and actually evolving
> the fundamental structures and dynamics that drive the system ---
> designing them by hand or trying to prove any useful results about what
> happens in a complex recurrent network seems to be really difficult.
> Thus perhaps a combination of artificial evolution, experimentation, and
> the development of theories to explain what we see is the most likely
> approach to succeed.  At least that's my best guess at the moment based
> on what I've seen working on various AI/AGI projects in the past.
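>
> As an aside, the save-and-branch-and-fuse picture above could be sketched
> very roughly like this (a toy illustration only --- every name and data
> structure below is my own made-up placeholder, not code from any actual
> system):
>
> import copy
>
> class Checkpoint:
>     """A snapshot of 'AI x at time y' in some developmental trajectory."""
>     def __init__(self, state, time):
>         self.state = state          # whatever represents the AI's mind
>         self.time = time
>         self.children = []          # branches restarted from this point
>
> def record(parent, new_state):
>     """Save the next recorded state along a branch and return it."""
>     cp = Checkpoint(new_state, parent.time + 1)
>     parent.children.append(cp)
>     return cp
>
> def branch_from(checkpoint):
>     """Restart a fresh developmental branch from any recorded (x, y)."""
>     return Checkpoint(copy.deepcopy(checkpoint.state), 0)
>
> def fuse(cp_x, cp_y, keys_from_x):
>     """Form a composite AI z from portions of x's and y's recorded minds."""
>     fused = copy.deepcopy(cp_y.state)
>     for k in keys_from_x:
>         if k in cp_x.state:
>             fused[k] = copy.deepcopy(cp_x.state[k])
>     return Checkpoint(fused, 0)
>
> # e.g. z = fuse(x_at_time_y, y_at_time_w, keys_from_x=["planning"])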
>
> I sent Ben an email along similar lines a few days back describing my
> own little (extremely slowly moving and incomplete) set of AGI ideas that
> I refer to as the vetta project.  I've pasted part of what I wrote below
> for anybody who is interested.
>
> Cheers
> Shane
>
> -----------------------------------------------------------------------
>
>
> Well it's a mix of things really --- and it changes over time a bit too!
>
> Basically my approach goes something like this:
>
> 1) Build a set of precise "IQ" tests for machines.  These tests cover
>     everything from bacteria-level intelligence to superhuman intelligence.
>     It's a reasonably complex web of relations: passive predictors,
>     classifiers, simple reactive systems, Markov chains, MDPs, POMDPs and
>     many others.....  You can prove a whole bunch of relations between all
>     these mathematically; indeed, that's what I did for the first 4 months
>     of my PhD.  That's the first step; however, it doesn't really capture
>     how difficult a problem is.  So for that you need something like
>     complexity theory (both time and space).  Anyway, the point is that you
>     can then measure exactly where in this complex tree of abilities an AGI
>     system is.  The most general form of this is what I call "cybernance",
>     and it is closely related to the "intelligence order relation" that
>     appears in the AIXI proofs.
>
> 2) Define a space of systems that should contain an AGI.  This is a bit
>     harder to explain.  Again, complexity theory comes into it.  For
>     example, I think that the "meta logic" of a system has to be very
>     small, and thus the building blocks of the system must be quite
>     simple.  Also, the processing of the system must have certain
>     self-organizing properties, such as compression of information in
>     space and time, consistency over levels of abstraction, and stuff
>     like that.  This is the more philosophical part, I suppose.  The
>     point is that I need to make this space of possible systems as small
>     as I can without making a mistake and excluding a working design for
>     an AGI from the set.  Oh, and I should mention that I'm thinking of
>     some kind of information-processing network here: some kind of neural
>     network, Hebbian network, HMM, Bayesian network.  Basically the space
>     is a superset of all these things and more.
>
> 3) Genetic programming.  (1) gives us a fine-grained multi-objective
>     fitness function and (2) defines a search space.  Now, I can't just
>     run my GA and expect things to work here!  Clearly the space in (2)
>     is going to be pretty large.  So at this point it becomes a bit of an
>     experimental science and I have to mix things around a bit.  I'll be
>     restricting the tests to just certain very simple objectives and
>     restricting the space to smaller subspaces to see what works and what
>     doesn't, then trying to cross over solutions to find systems that
>     work for both, etc.  Hopefully at this stage I can zero in on
>     promising parts of the space of possible designs.  Perhaps I'll even
>     design my own attempts at functioning systems, throw them into the
>     evolutionary mix, and see if they can breed with other, different
>     partial solutions to form new and interesting things.
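>
> To make the shape of this a bit more concrete, here is a very rough toy
> sketch of how (1), (2) and (3) might fit together --- everything below
> (the single prediction test, the tiny network "genome", the plain GA loop)
> is just an illustrative placeholder, nothing like the real tests or the
> real search space:
>
> import random
>
> def prediction_test(system, length=50):
>     """Toy stand-in for one 'IQ' test: predict a simple repeating sequence."""
>     seq = [i % 3 for i in range(length)]
>     correct = 0
>     for t in range(1, length):
>         if system.predict(seq[:t]) == seq[t]:
>             correct += 1
>     return correct / (length - 1)
>
> TEST_BATTERY = [prediction_test]   # in reality: classifiers, MDPs, POMDPs, ...
>
> class NetworkGenome:
>     """Toy stand-in for the restricted space of information-processing
>     networks: a fixed number of nodes with small integer weights."""
>     def __init__(self, weights=None, size=8):
>         self.weights = weights if weights else [random.randint(-2, 2) for _ in range(size)]
>
>     def predict(self, history):
>         # Trivial "dynamics": a weighted vote over the recent history.
>         score = sum(w * h for w, h in zip(self.weights, reversed(history)))
>         return score % 3
>
>     def mutate(self):
>         w = list(self.weights)
>         w[random.randrange(len(w))] = random.randint(-2, 2)
>         return NetworkGenome(w)
>
> def crossover(a, b):
>     cut = random.randrange(1, len(a.weights))
>     return NetworkGenome(a.weights[:cut] + b.weights[cut:])
>
> def fitness(system):
>     # Multi-objective in spirit; collapsed to a mean here for brevity.
>     return sum(test(system) for test in TEST_BATTERY) / len(TEST_BATTERY)
>
> def evolve(generations=20, pop_size=30):
>     pop = [NetworkGenome() for _ in range(pop_size)]
>     for _ in range(generations):
>         pop.sort(key=fitness, reverse=True)
>         parents = pop[: pop_size // 3]
>         children = [crossover(random.choice(parents), random.choice(parents)).mutate()
>                     for _ in range(pop_size - len(parents))]
>         pop = parents + children
>     return max(pop, key=fitness)
>
> print("best battery score:", round(fitness(evolve()), 3))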
>
> I guess in a sense it's the natural evolution of intelligence but on
> steroids: rather than having fitness related to intelligence very
> indirectly via survival, here we measure a kind of computational
> intelligence very directly and equate it with survival.  Also, we restrict
> the space of possible designs as much as we can get away with to speed
> things up --- this is the theory side of the design, I suppose.
>
> So the big question then is:  Can I make the theory strong enough to make
> the search space small enough so that I can make the series of very tiny
> little steps needed to go from a near-zero level of intelligence up to
> high-level intelligence?
>
> Well, at least that's a one-page summary of the basic nature of the
> approach.  Hopefully it gives you some idea of what I'm thinking.
>
> As for the name "vetta", in case you ever wondered: in Sanskrit it means
> "one who has knowledge".  However, in Italian it also means "summit" or
> "peak", which is a reference, of course, to the climbing of the GA
> solutions toward the peak of the fitness function, i.e. cybernance.
>
