****
On 5/14/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
Hmmm, this is true. However, if these techniques were
powerful enough to design new, useful AI algorithms,
why is writing algorithms almost universally done by
programmers instead of supercomputers, despite the
fact that programmers only work twelve hours a day and
have to get paid?

- Tom
****

Tom,

The power of GP for automated program learning is still
pretty limited, but in certain narrow domains it can
outperform human beings if you give it enough computational
firepower.  Thus I still consider this "narrow AI" rather than
AGI, in spite of its power.  A human has to figure out a clever
way to represent a problem as a genotype/fitness-function
combination before GP can solve it, which isn't always
an easy task ... and even then sometimes GP still isn't up to
the job.
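To make the genotype/fitness-function framing concrete, here is a toy sketch of the setup GP requires (everything below, from the target function to the operator set and population parameters, is invented for illustration and has nothing to do with Novamente's internals): expression trees are the genotype, squared error against a target is the fitness.

```python
import random

random.seed(0)

def target(x):
    # The behavior the evolved program must match (a made-up toy problem).
    return x * x + x

OPS = ['+', '-', '*']

def random_tree(depth=3):
    # Genotype: nested tuples (op, left, right); leaves are the input 'x' or constants.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(tree):
    # Lower is better: squared error against the target over sample points.
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a randomly chosen subtree with a fresh random subtree.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

POP = 100
pop = [random_tree() for _ in range(POP)]
initial_best = min(fitness(t) for t in pop)

for gen in range(20):
    pop.sort(key=fitness)
    survivors = pop[:POP // 4]  # truncation selection; keeping survivors is elitist
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = min(pop, key=fitness)
print('error before/after evolution:', initial_best, fitness(best))
```

The human contribution is exactly the `random_tree`/`fitness` pair: encoding the problem so that blind variation plus selection can make progress on it.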

To give an idea of the state of the art: there are just **barely**
some evolutionary learning approaches able to invent an
O(n log n) sorting algorithm from a fitness function that rewards
more efficient sorting.  That is actually pretty impressive, since
most human programmers would never invent anything much beyond
BubbleSort without a lot of prior knowledge and prodding.
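For flavor, here is the shape such a fitness function might take (the scoring weights and the comparison-counting wrapper are invented for illustration): it rewards correct output first and, among correct sorters, gives a larger bonus to the ones using fewer comparisons, which is the pressure that would push a search from BubbleSort-like programs toward O(n log n) ones.

```python
import functools
import random

random.seed(1)

comparisons = 0

@functools.total_ordering
class Tracked:
    """Wraps a value and counts how often it is compared."""
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):
        return self.v == other.v
    def __lt__(self, other):
        global comparisons
        comparisons += 1
        return self.v < other.v

def fitness(sort_fn, trials=5, n=64):
    # Higher is better: correctness dominates; efficiency breaks ties.
    global comparisons
    score = 0.0
    for _ in range(trials):
        data = [Tracked(random.random()) for _ in range(n)]
        want = sorted(t.v for t in data)
        comparisons = 0
        got = sort_fn(list(data))
        if [t.v for t in got] == want:
            score += 1.0 + 1.0 / (1.0 + comparisons)  # fewer comparisons => bigger bonus
    return score

# Two hand-written candidates standing in for evolved programs:
def bubble_sort(a):          # O(n^2) comparisons
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(a):           # O(n log n) comparisons
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] < right[0] else right.pop(0))
    return out + left + right

fb, fm = fitness(bubble_sort), fitness(merge_sort)
print('bubble:', fb, 'merge:', fm)
```

Both candidates sort correctly, so they tie on the correctness term; the merge sort scores higher purely through the efficiency bonus.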

One of the significant weaknesses of GP tech is that existing
approaches for learning the natural "modularity structure" of a
program within GP are still pretty hacky.  Koza came up with
machinery for this that works OK for circuit design, but it isn't
so generally powerful.
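Koza's main device there is the automatically defined function (ADF): the genotype carries, alongside the main program branch, one or more subroutine branches that evolve with it and that the main branch can call repeatedly. A toy sketch of the representation (all names invented):

```python
# Genotype with one ADF: the main expression can call adf(arg), whose body
# evolves alongside it. The ADF body itself may not call 'adf' (no recursion).

def evaluate(node, x, adf_body):
    if node == 'x':
        return x
    if isinstance(node, (int, float)):
        return node
    op, *args = node
    if op == 'adf':
        # Evaluate the argument, then run the shared subroutine on it.
        return evaluate(adf_body, evaluate(args[0], x, adf_body), adf_body)
    a = evaluate(args[0], x, adf_body)
    b = evaluate(args[1], x, adf_body)
    return a + b if op == '+' else a * b

# Genome = (adf_body, main): the ADF squares its input, and the main
# branch reuses that module twice instead of rediscovering it.
genome = (('*', 'x', 'x'),
          ('+', ('adf', 'x'), ('adf', ('+', 'x', 1))))

adf_body, main = genome
print(evaluate(main, 3, adf_body))  # 3*3 + 4*4 = 25
```

Variation operators then act on both branches, so a useful module, once found, can spread through the population; this is the mechanism that works reasonably well for circuit design but, as noted above, does not by itself solve the general problem of discovering a program's natural modularity.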

Personally, I think that learning how to appropriately break a
programming problem down into modules is not gonna be solvable
via GP or other evolutionary programming methods alone, but will
require pretty subtle cross-domain probabilistic
analogical reasoning.  At least that is how we plan to approach it
in Novamente.  We do have something GP-like (an upgraded, generalized
version of MOSES, see Moshe Looks' PhD thesis at www.metacog.org)
but due to its probabilistic foundation it is built for synergetic
interoperation
with Novamente's probabilistic inference subsystem.

-- Ben G



--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> > If this is so, then where are the great, working AI
> > algorithms that we supposedly already have that run
> > very slowly or can only be run on Blue Gene-type
> > supercomputers? Can you name a single, important,
> > functioning AI algorithm that requires a supercomputer
> > to run?
>
> Genetic programming can discover radically different
> and more complex things when run on a supercomputer
> than when run on an ordinary computer.
>
> The standard example would be Koza's use of GP to
> do automated circuit design, as described in the
> third book in his GP series.
>
> -- Ben
>
> -----
> This list is sponsored by AGIRI:
> http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;




