Josh,

Great post.  Warrants being read multiple times.

You said:

JOSH>> I'm working on a formalism that unifies a very high-level
programming language (whose own code is a basic datatype, as in lisp),
spreading-activation semantic-net-like representational structures, and
subsumption-style real-time control architectures.

Sounds interesting.  I -- and I am sure many others on the list -- look
forward to hearing more about it.

Ed Porter

-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Friday, October 19, 2007 12:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll


In case anyone else is interested, here are my own responses to these
questions. Thanks to all who answered ...

> 1. What is the single biggest technical gap between current AI and AGI?
> (e.g. we need a way to do X or we just need more development of Y or we
> have the ideas, just need hardware, etc)

I think the biggest gap can be seen in the actual programs themselves.
Look at a typical narrow-AI program that *actually does something* (a
robot car driver, for example) and a typical (!) candidate AGI system.
They are entirely different kinds of code, data structures, etc.

An AGI system should have modules that look like the narrow-AI systems
for all of the skills it actually has, and when it learns new skills, it
should then have more such modules. So it would help to have an easier
way to write "skills" programs and have them interact gracefully.

An AGI needs to be able to watch someone doing something and produce a
program such that it can now do the same thing. The system needs to be
self-conscious exactly when it is learning, because that's when it's
fitting new subprograms into its own architecture. It has to be able to
experiment and practice. It has to be able to adapt old skills into new
ones by analogy, *and represent the differences and mappings*, and
represent *that* skill in symbolic form so that it can learn meta-skills.
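
As a toy illustration of what "produce a program from watching" and
"represent the mapping itself" could mean (the representation below --
traces as lists of steps, skills as tagged tuples, an adapt_by_analogy
helper -- is made up for the sketch, not a claim about how a real system
would do it):

# Toy sketch only: skills as plain data, built from an observed trace,
# with the analogy that adapts one skill into another kept as data too.

# An observed demonstration, as a list of (action, argument) steps.
trace = [("grasp", "cup"), ("move", "sink"), ("release", "cup")]

def skill_from_trace(name, steps):
    """'Compile' a watched demonstration into a named program-as-data."""
    return ("skill", name, list(steps))

def adapt_by_analogy(skill, new_name, substitutions):
    """Map an old skill onto a new one and record the mapping itself."""
    _, old_name, steps = skill
    new_steps = [(act, substitutions.get(arg, arg)) for act, arg in steps]
    mapping = ("analogy", old_name, new_name, substitutions)
    return ("skill", new_name, new_steps), mapping

put_cup = skill_from_trace("put-cup-in-sink", trace)
put_plate, how = adapt_by_analogy(put_cup, "put-plate-in-sink",
                                  {"cup": "plate"})
print(put_plate)  # the adapted skill
print(how)        # the differences/mappings, in symbolic form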

> 2. Do you have an idea as to what should be done about (1) that would
> significantly accelerate progress if it were generally adopted?

I'm working on a formalism that unifies a very high-level programming
language (whose own code is a basic datatype, as in lisp),
spreading-activation semantic-net-like representational structures, and
subsumption-style real-time control architectures.
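
A toy sketch of how those three ingredients might sit side by side --
nodes in a semantic-net-like graph carrying code as plain data,
activation spreading along weighted links, and a subsumption-style
arbiter letting a higher layer override a lower one. None of this is the
formalism itself; all names and numbers are invented for illustration:

# Toy sketch only: code-as-data on nodes of a semantic-net-like graph,
# spreading activation over weighted links, and subsumption-style
# arbitration where a higher layer can override a lower one.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    code: tuple = ()                            # code as a basic datatype
    activation: float = 0.0
    links: dict = field(default_factory=dict)   # neighbor name -> weight

def spread(net, steps=1, rate=0.5):
    """Propagate a fraction of each node's activation to its neighbors."""
    for _ in range(steps):
        delta = {name: 0.0 for name in net}
        for node in net.values():
            for neighbor, weight in node.links.items():
                delta[neighbor] += node.activation * weight * rate
        for name, d in delta.items():
            net[name].activation += d

def arbitrate(net, layers, threshold=0.1):
    """Highest layer whose node is active enough supplies the behavior."""
    for name in reversed(layers):               # layers ordered low -> high
        if net[name].activation > threshold:
            return net[name].code               # hand this to an evaluator
    return ()

net = {
    "wander": Node("wander", ("move", "random"), 1.0,
                   {"avoid-obstacle": 0.8}),
    "avoid-obstacle": Node("avoid-obstacle", ("turn", "away"), 0.0,
                           {"wander": 0.2}),
}
spread(net, steps=2)
print(arbitrate(net, ["wander", "avoid-obstacle"]))  # ('turn', 'away')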

> 3. If (2), how long would it take the field to attain (a) a baby mind,
> (b) a mature human-equivalent AI, if your idea(s) were adopted and AGI
> seriously pursued?

Let me try to be a little clearer about the concepts, as there was some
dissension with respect to their coherence: By a baby mind, I mean a
system that can be taught rather than programmed to attain new skills.
It certainly won't be like a human baby in any other respect.

Furthermore, it's not clear that we'll know when we have one except in
retrospect. It's virtually certain that in the coming years there will
be many "baby mind" systems that seem to start learning only to run into
unforeseen limits. Understanding these limits will be a very natural
scientific process. I imagine that in 10 years we'll have the beginnings
of a field of "unbounded-learning theory" and ways of predicting when a
given learning system will run out of steam, but we don't now. So we'll
probably only know we have a system that learns at an arguably human
level after teaching it long enough to know it didn't crap out where its
predecessors did.

It could happen as quickly as 5 years, but I wouldn't put a lot of money
on it. 10 is probably more like it. By that time the hardware to do what
I think is necessary will not just exist (it does now) but be
affordable. That'll let a lot more people try a lot more approaches,
hopefully building on each other's successes.

Time to adult human-level AI given the baby mind: zero. This is mostly
because the bulk of a minimal human experience/common-sense mental
inventory will have been built by hand, or learned by earlier, limited
learning algorithms, before the real unbounded learner gets here.

> 4. How long to (a) and (b) if AI research continues more or less as it
> is doing now?

Here's how I see the field developing: Current approaches are either
deep but hand-built (robot drivers) or general but very shallow
(Google). In 10 years, the general ones will be deeper (say, Google can
answer 85% of natural-language questions with apparent comprehension;
Novamente produces very serviceable NPCs in online games) and the narrow
ones will be broader but, more importantly, much more numerous. So
throughout the 20-teens, AI will seem to take off as people hitch the
general systems to collections of narrow ones. The result will be like
someone with an IQ of 90 who has a number of idiot-savant skills.
They'll pass the Turing test. But they still won't build their own
skills.

So I'll guess (b) in 10 years, (a) in 15, because the Moore's Law thing
still works and lots of people are trying lots of ideas, but it's a
harder problem. But I could very easily be optimistic by a factor of 2.

Josh
