Ben Goertzel wrote:
> However, I don't agree with your quantitative estimate that an AGI has to
> be orders of magnitude bigger than any software project ever attempted.
>
> I agree that many people underestimate the problem, but I think you
> overestimate the problem. And mis-estimate it. I think you overestimate
> the bulk of the problem and underestimate the subtlety of finding
> the right framework and the right algorithms.
>
> The brain is a hugely complex tangled mess of structures and processes,
> but that doesn't mean that an AGI has to be. AGI does not mean brain
> emulation.
> Legs are vastly more complex than wheels, yet wheels are good at moving
> around too. (And wheels can't help you invent artificial legs, whereas a
> nonhuman AGI can potentially help you figure out how to make a more human
> AGI if you want to).
That isn't as close an analogy as it seems. A leg must do many things that
wheels don't - grow, heal, resist microorganisms, raise and lower the body,
cross a wide variety of rough terrain, etc. If we tried to build a machine
with all of the same capabilities, it is not at all clear that it would be
simpler.
The brain does have a few tasks an AGI doesn't have to worry about, like
metabolism and immune response. But these complexities are mostly down at
the cellular level, and I wasn't arguing that an AGI has to duplicate such
things. The biggest simplification I see that is relevant here is the fact
that the brain must self-organize to a large extent, while an AGI could be
coded in its final configuration. But AI projects usually expect most of the
complexity of the final system to emerge through some kind of training
process, which means you're tackling exactly the same problem.
That leaves two popular options that I don't think will work out:
1) You can leave out huge chunks of functionality in the hope that they
aren't needed for intelligence. This might work, but it isn't nearly as safe
as it might seem. Our human version of general intelligence seems to rely
heavily on drafting big specialized systems (like visualization and
language) for use in new domains whose problems happen to have analogous
regularities. Without a lot more knowledge than anyone currently has about
how intelligence works, it seems likely that you'll omit something you can't
get by without.
2) You can ignore all the messy stuff devoted to dealing with the physical
world, like sensory processing and motor control, and concentrate solely on
implementing abstract thought. That sounds promising, except that it's
exactly what most AI projects have been doing for 50 years, and the
progress to date has been underwhelming. Besides which, that only cuts out
something like 40%-80% of the brain (depending on where you draw the line), which
would still leave you with a gigantic project implementing the features you
decided to keep.
Do you see another option for simplification?
> You mention the vast amount of work that's gone into computer vision and
> audition. That is true, but I think that those disciplines would be a lot
> more tractable if they were carried out together with AGI cognition,
> rather than separately. Pursuing them "standalone" may make them harder in
> many ways, rather than easier.
Maybe. Maybe not. To be honest, I think most people in this field have a bad
habit of using "general intelligence" as a magic wand to gloss over hard
problems that are going to require specialized mechanisms no matter how
smart the overall system is.
For example, in the case of computer vision, just getting from a 2D array of
pixels to a possible set of object geometries takes a heck of a lot of work,
and it has to be done by fast, dumb code for performance reasons. After that
you have to recognize objects (a narrow problem), build a useful world-model
(another narrow problem), detect and fix visual illusions and other data
corruptions (yet another narrow problem), and so on. Once you have all these
mechanisms you might be able to improve the results a bit by having the AI
think about the output ("Hmm, no, I'm sure that can't really be Santa Claus
on that rooftop. It must be a Christmas display."). But you can't avoid
building the specialized mechanisms in the first place.
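To make the point concrete, the staged pipeline I'm describing could be sketched roughly like this. To be clear, every stage name and the toy data here are my own invention for illustration, not code from any real vision system; the point is only that each stage is a separate, specialized mechanism, and the "thinking about the output" step can only run after all of them:

```python
def extract_geometry(pixels):
    # Fast, dumb low-level pass: find horizontal runs of bright pixels
    # in a 2D array and report them as bounding boxes (y0, x0, y1, x1).
    boxes = []
    for y, row in enumerate(pixels):
        x = 0
        while x < len(row):
            if row[x] > 0:
                start = x
                while x < len(row) and row[x] > 0:
                    x += 1
                boxes.append((y, start, y, x - 1))
            else:
                x += 1
    return boxes

def recognize_objects(boxes):
    # Narrow problem #1: label each geometry. Toy rule: wide runs
    # are "wall", short runs are "blob".
    labels = []
    for (y0, x0, y1, x1) in boxes:
        width = x1 - x0 + 1
        labels.append("wall" if width >= 3 else "blob")
    return labels

def build_world_model(labels):
    # Narrow problem #2: aggregate the labels into a crude scene summary.
    model = {}
    for label in labels:
        model[label] = model.get(label, 0) + 1
    return model

def sanity_check(model):
    # Narrow problem #3: flag implausible interpretations for review by
    # higher-level cognition (the "can't really be Santa Claus" step).
    # Toy rule: a scene that is nothing but blobs is suspicious.
    implausible = bool(model) and set(model) == {"blob"}
    return model, implausible

def vision_pipeline(pixels):
    # Each stage is a specialized mechanism built in advance; the smart
    # part of the system only ever sees the final output.
    return sanity_check(build_world_model(recognize_objects(extract_geometry(pixels))))

frame = [
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
model, flagged = vision_pipeline(frame)
print(model, flagged)  # -> {'wall': 1, 'blob': 1} False
```

Each function stands in for what is, in reality, an enormous specialized subsystem; no amount of general intelligence downstream removes the need to build them.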
> My guess, not surprisingly, is that the Novamente design is close to the
> minimal level of complexity needed ;)
Well, of course. Otherwise you wouldn't be building it. :)
But I do think there would be a lot more progress in AI if more people were
building systems designed merely to solve the next obvious obstacle on the
path to AGI, or to provide a platform for future work. What we have now is
like a football team where the quarterback won't throw a pass unless the
receiver is standing next to the goal post. Lots of long shots, little
progress.
OTOH, at least Novamente has enough internal complexity to reach territory
that hasn't already been explored by classical AI research. I don't expect
it to "wake up", but I expect it will be a lot more productive than those
"One True Simple Formula For Intelligence"-type projects.
Billy Brown