Hi Shane,

I understand your perspective and I think it's a reasonable one.

I think that what you'll get from this approach, if you're lucky, is a kind
of "primitive brain", suitable to control something with general
intelligence around that of a reptile or a very stupid mammal.

Then you can use the structures/dynamics of this primitive brain as raw
materials for constructing a more powerful general intelligence.

I think that a realistic general intelligence has got to consist of a set of
components, each carrying out specialized functions but based on the same
essential knowledge-representation and learning dynamics.  Each of these
specialized components embodies a certain in-built "inductive bias" which
guides the learning dynamics within it.  In this context, I think your
experiments may be useful in exploring the space of plausible "essential
knowledge-representation and learning dynamics."

I think that Novamente already has a decent knowledge-rep/learning-dyn core
(based on probabilistic combinatory term logic, probabilistic inference and
evolutionary learning), but I also think there are LOTS of other choices to
make.  I convinced myself a while ago that a variant of Hebbian neural nets
could do the trick as well, although with much less efficiency.  Maybe via
your evolutionary method you could discover something even more wonderful
;-)
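In case it helps to make that concrete, here is a minimal sketch of the sort
of Hebbian variant I mean: a single linear layer updated with Oja's rule
(plain Hebbian growth plus a decay term that keeps the weights bounded).
The layer sizes, learning rate, and white-noise input are all illustrative
choices, not a claim about Novamente or any particular design:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(W, x, eta=0.01):
    """One Oja-rule update: dW = eta * (y x^T - diag(y^2) W).

    Plain Hebb (eta * y x^T) alone makes weights grow without bound;
    Oja's decay term keeps each row's norm bounded."""
    y = W @ x                                        # post-synaptic activity
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W

# Illustrative setup: 3 units watching a 5-dimensional input stream.
W = rng.normal(scale=0.1, size=(3, 5))
for _ in range(2000):
    W = hebbian_step(W, rng.normal(size=5))
```

With correlated inputs each row of W tends toward a principal direction of
the input covariance; the point is only that even a learning dynamic this
simple is already a nontrivial candidate building block.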

-- Ben g


> I agree totally.  Indeed I advocate going further and actually evolving
> the fundamental structures and dynamics that drive the system ---
> designing them by hand or trying to prove any useful results about what
> happens in a complex recurrent network seems to be really difficult.
> Thus perhaps a combination of artificial evolution, experimentation, and
> the development of theories to explain what we see is the most likely
> approach to succeed.  At least that's my best guess at the moment based
> on what I've seen working on various AI/AGI projects in the past.
>
> I sent Ben an email along similar lines a few days back describing my
> own little (extremely slowly moving and incomplete) set of AGI ideas that
> I refer to as the vetta project.  I've pasted part of what I wrote below
> for anybody who is interested.
>
> Cheers
> Shane
>
> -----------------------------------------------------------------------
>
>
> Well it's a mix of things really --- and it changes over time a bit too!
>
> Basically my approach goes something like this:
>
> 1) Build a set of precise "IQ" tests for machines.  These tests cover
>     everything from bacteria level intelligence to super human
>     intelligence.  It's a reasonably complex web of relations: passive
>     predictors, classifiers, simple reactive systems, Markov chains,
>     MDPs, POMDPs and many others.....  You can prove a whole bunch of
>     relations between all these mathematically; indeed that's what I did
>     for the first 4 months of my PhD.  That's the first step; however it
>     doesn't really capture how difficult a problem is.  So for that you
>     need something like complexity theory (both time and space).
>     Anyway, the point is that you can then measure exactly where in this
>     complex tree of abilities an AGI system is.  The most general form
>     of this is what I call "cybernance" and is closely related to the
>     "intelligence order relation" that appears in the AIXI proofs.
>
> 2) Define a space of systems that should contain an AGI.  This is a bit
>     harder to explain.  Again complexity theory comes into it.  So
>     things like the fact that I think that the "meta logic" of a system
>     has to be very small and thus the building blocks of the system must
>     be quite simple.  Also that the processing of the system must have
>     certain self-organizing properties such as compression of
>     information in space and time, consistency over levels of
>     abstraction and stuff like that.  This is the more philosophical
>     part I suppose.  The point is that I need to make this space of
>     possible systems as small as I can without making a mistake and
>     excluding a working design for an AGI from the set.  Oh, and I
>     should mention that I'm thinking of some kind of information
>     processing network here: some kind of neural network, Hebbian
>     network, HMM, Bayesian network.  Basically the space is a super set
>     of all these things and more.
>
> 3) Genetic programming.  (1) gives us a fine grained multi-objective
>     fitness function and (2) defines a search space.  Now I can't just
>     run my GA and expect things to work here!  Clearly the space in (2)
>     is going to be pretty large.  So at this point it becomes a bit of
>     an experimental science and I have to mix things around a bit.  So
>     I'll be restricting the tests to just certain very simple objectives
>     and restricting the space to smaller subspaces to see what works and
>     what doesn't.  Then try to cross over solutions to find systems that
>     work for both etc.  Hopefully at this stage I can zero in on
>     promising parts of the space of possible designs.  Perhaps even
>     design my own attempts at functioning systems and throw them into
>     the evolutionary mix and see if they can breed with other different
>     partial solutions to form new and interesting things.
>
> I guess in a sense it's the natural evolution of intelligence but on
> steroids: rather than having fitness related to intelligence very
> indirectly via survival, here we measure a kind of computational
> intelligence very directly and equate it with survival.  Also we
> restrict the space of possible designs as much as we can get away with
> to speed things up --- this is the theory side of the design I suppose.
>
> So the big question then is:  Can I make the theory strong enough to
> make the search space small enough so that I can make the series of
> very tiny little steps needed to go from a near zero level of
> intelligence up to high level intelligence?
>
> Well, at least that's a one page summary of the basic nature of the
> approach.  Hopefully it gives you some idea of what I'm thinking.
>
> As for the name "vetta", in case you ever wondered: in Sanskrit it
> means "one who has knowledge".  However in Italian it also means
> "summit" or "peak", which is of course a reference to the climbing of
> the GA solutions toward the peak of the fitness function,
> i.e. cybernance.
>
> -------
> To unsubscribe, change your address, or temporarily deactivate
> your subscription,
> please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
>
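(A caricature of Shane's steps (1)-(3), for anyone who wants something
executable: below, the "IQ tests" are two toy sequence-prediction tasks,
the "space of systems" is collapsed to three weights over recent symbols,
and the search is a plain generational GA.  Every name and number here is
an illustrative stand-in; the real proposal involves a far richer test
battery and search space.)

```python
import random

random.seed(0)

# Stand-in for step (1): toy "IQ tests" -- score a predictor on two
# simple binary sequences (fraction of correct next-symbol predictions).
TESTS = [
    [0, 1, 0, 1, 0, 1, 0, 1],        # alternating
    [0, 0, 1, 0, 0, 1, 0, 0, 1],     # period three
]

def fitness(genome, memory=3):
    correct = total = 0
    for seq in TESTS:
        for t in range(memory, len(seq)):
            s = sum(w * seq[t - memory + i] for i, w in enumerate(genome))
            correct += (1 if s > 0.5 else 0) == seq[t]
            total += 1
    return correct / total

# Stand-in for step (2): the "space of systems" is just three weights
# over the last three symbols.
def random_genome():
    return [random.uniform(-1, 1) for _ in range(3)]

def mutate(g):
    return [w + random.gauss(0, 0.2) for w in g]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Step (3): a plain generational GA with elitism over that space.
population = [random_genome() for _ in range(30)]
for _ in range(40):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
```

No linear genome of this form can ace both tests at once (the two
sequences impose conflicting constraints on the middle weight), which is
a tiny example of why the space in step (2) has to be richer than any
one fixed parametric family.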
