----- Original Message ----- 
From: "Eugen Leitl" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, March 21, 2007 4:32 AM
Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007


> On Wed, Mar 21, 2007 at 06:12:45PM +0800, YKY (Yan King Yin) wrote:
>
> We know that logic is easy. People learned to deal with it only
> recently in evolutionary terms, and computers can do serial symbol
> string transformations quite rapidly. Already computer-assisted
> proofs have transformed a branch of mathematics into an empirical
> science.

Your first three points aren't beyond debate.  I have seen no good general
logic-based system yet; have you?  Computer-assisted proofs are a
specialized domain and don't make a dent in AGI, IMO.  Chess programs might
be another example, but few would say that is AGI.

> Building world/self models in realtime from noisy, incomplete and
> inconsistent data takes a lot of processing, and parallel processing.
> For some reason traditional AI considered the logic/mathematics/formal
> domain hard, and vision easy. It has turned out exactly the other way
> round. Minsky thought porting SHRDLU to the real world was a minor task.
> Navigation and realtime control, especially cooperatively, is hard.

Regardless of how easy Minsky or others thought porting from small
artificial domains to the real thing would be, that is hardly the point.  I
agree 100% that "Navigation and realtime control" is a hard problem but so
are higher level systems like language and semantic learning.  AGI needs
both and needs them to communicate intelligently.

> We've disintegrated into discussing minutiae (which programming language,
> etc.) but the implicit plan is to build a minimal seed that can bootstrap
> by extracting knowledge from its environment. The seed must be open-ended,
> as in adapting itself to the problem domain. I think vision is a
> reasonable first problem domain, because insects can do it quite well.
> You can presume that a machine which has bootstrapped to master vision
> will find logic a piece of cake. Not necessarily the other way round.
> I understand some consider self-modification a specific problem domain,
> so that a system capable of targeted self-inspection and self-modification
> can modify itself adaptively to a given task, any given task. I think
> there is absolutely no evidence this is doable, and in fact there is some
> evidence this is a Damn Hard problem.

It's funny how many times you have mentioned parallel hardware and other
esoteric low-level minutiae, but I, for one, have enjoyed that info very
much.  I, too, have worked with robotics, sonar range finders,
microcontrollers, etc.  I appreciate how hard it is to deal with real-world
data, but that is hardly all that an AGI should be capable of.  What
evidence or experiments do you have to substantiate that "a machine which
has bootstrapped to master vision will find logic a piece of cake"?
Dismissing whole quadrants of AI development with a bare conclusion doesn't
seem very tolerant, or warranted by the current data.

The ability of "self-inspection and self-modification" is a prerequisite,
IMO, to creating a system that doesn't need to be hand coded in its
entirety by human programmers.  It is possible that some relatively small
amount of code could be used, with all the smarts in the data, but history
has shown, IMO, that as the complexity of the data goes up, sub-languages
are created to make sense of it, and as the level of interpretation rises,
speed falls precipitously.  Data and programs have been shown in many ways
to be interchangeable, with some problems going one way and others the
other.  Do you believe that some tiny (relative to the size of the AGI)
algorithm can be found that will magically create all the systems required
for AGI intelligence?
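To make the data/program interchangeability and interpreter-overhead points
concrete, here is a minimal sketch (the rule format and all names are my
own invention, not anything from a real AGI system): the same rule can live
as data walked by an interpreter, or be compiled once into a Python
function, trading flexibility for dispatch cost.

```python
# A tiny "sub-language": a rule is a list of (operator, operand) pairs.
# Purely illustrative -- not any actual AGI representation.
rule = [("add", 3), ("mul", 2), ("add", 1)]

def interpret(rule, x):
    """Walk the rule as data: flexible, but dispatches on every step."""
    for op, n in rule:
        if op == "add":
            x += n
        elif op == "mul":
            x *= n
    return x

def compile_rule(rule):
    """Translate the same data into Python source and compile it once."""
    body = "".join(
        f"    x {'+' if op == 'add' else '*'}= {n}\n" for op, n in rule
    )
    src = "def f(x):\n" + body + "    return x\n"
    namespace = {}
    exec(src, namespace)  # the data has become a program
    return namespace["f"]

fast = compile_rule(rule)
assert interpret(rule, 5) == fast(5)  # same behavior, different form
```

The interpreted path pays a per-step dispatch cost, which is the "speed
falls precipitously" effect at toy scale; the compiled path runs as plain
Python bytecode.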

I am not sure what you mean exactly by "no evidence".  I have made programs
that create other programs (on the fly) for almost 20 years.  The chance
that any two of these generated programs would be identical is vanishingly
small.  I don't presume to stretch that into claiming I have created a
program that produces programs for "any given task", but I don't think your
(or my) brain can do that either.
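A toy version of "programs that create other programs on the fly" (entirely
my own illustration, not David's actual code) might look like this: a
generator emits the source of a random straight-line function, and a loader
turns that source into a callable.

```python
import random

OPS = ["+", "-", "*"]

def generate_program(length=4, seed=None):
    """Emit the source of a random straight-line arithmetic function."""
    rng = random.Random(seed)
    lines = ["def g(x):"]
    for _ in range(length):
        lines.append(f"    x = x {rng.choice(OPS)} {rng.randint(1, 9)}")
    lines.append("    return x")
    return "\n".join(lines)

def load(src):
    """Turn generated source text into a runnable function."""
    namespace = {}
    exec(src, namespace)
    return namespace["g"]

src = generate_program(seed=1)
f = load(src)  # f is a function that did not exist before generation
```

With 3 operators and 9 operands per step, even this 4-step toy has 27^4
distinct programs, which is why two independent runs almost never coincide.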

> Do you think this is arbitrary and unreasonable?

I would say your conclusions are "arbitrary and unreasonable" even if I am
happy that you are thinking hard about areas of the AGI problem that
desperately need attention.

> I think there's merit in recapitulating the capabilities as they arose
> evolutionarily. We're arguably below insect level now, both in capabilities
> and in the computational potential of the current hardware.

Why does our silicon-based hardware always have to be compared with
"carbon-based units"?  Computers don't have to have the requirement that they
contain all the information to reproduce themselves as humans do.  Our AGIs
are not limited to a single building block, like human DNA and the brain
synapse.  A single AGI could contain any combination of von Neumann-style
computers, FPGAs, parallel computers, etc., unlike humans.  Why can't we build
a computer system that reproduces the systems needed for cognition, both
high and low level, without also trying to recreate the mess that lies
underneath in humans?  Do you know how much redundancy is built into our
brains because evolution only had DNA and self replicating carbonware to
work with?  I have read the many emails you have written about how fast
computers must be to show intelligence, but even if your wild guesses were
accurate, it isn't relevant if AGI can be realized without duplicating the
brain in silicon.  Not everyone agrees with you that the only way to AGI is
to "build a minimal seed that can bootstrap by extracting knowledge from its
environment".  Humans get most of their intelligence IMO by learning from
other intelligent humans and not so much from "its environment".

> It's best to learn to walk before trying to win the sprinter Olympics, no?

No argument there.  An AGI that can create small efficient programs
automatically for *some* problems would be very useful even if it couldn't
generate them for "any given task".

-- David Clark


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
