> Fine. Which idea of anyone's do you believe will directly produce 
> general intelligence  - i.e. will enable an AGI to solve problems in 
> new unfamiliar domains, and pass the general test I outlined?  (And 
> everyone surely agrees, regardless of the test, that an AGI must have 
> "general" intelligence).

Well, as I said before, I don't know which ideas will directly produce
general intelligence and which will fail.

I have my own theories about which approaches are more likely to succeed
than others, and about which approaches are fundamentally wrong. However,
most serious ideas seem to have a plausible story, and I'm not ready to
completely rule out any serious idea until it is proven wrong. 

I'll briefly discuss some ideas below. You may not agree with my
interpretation of the approaches, and you may not fully agree with my
argument for why they're plausible... but I think you surely have to agree
that a plausible argument can be made for most of this research, and that
the people conducting it see themselves as addressing the crucial
questions.

That is, while I'm not the right person to be arguing the details of these
approaches, I'm confident that many researchers here wouldn't be devoting
their time to their research if they didn't see a coherent picture for how
their work fits into the grand scheme of AGI.

Many apologies to other readers if I've not included your preferred approach
or have misrepresented/misinterpreted your ideas. I've just taken a quick
and informal sample here. The details aren't as important as the overall
message.

Logic
-------------------
An automated theorem prover is an extremely general-purpose form of
intelligence. Consider, for example, how logics may be adapted to many
different domains on the Semantic Web, or the increasing strength of
competitors in General Game Playing competitions (surely it won't be long
before they're better than the average human at any novel game?). Whether
logic can be applied to general-purpose embodied intelligent systems
remains to be seen - I think the symbol grounding problem suggests that
logic alone is not enough - but researchers looking into logics with
uncertainty, or logics that incorporate iconic representations, are
effectively exploring a possible "solution" to the symbol grounding
problem.

In other words, these researchers are saying "Logical deduction offers true
'general intelligence' in symbolic domains, and we're trying to adapt that
intelligence to real-life situations": a plausible crux idea and one worth
pursuing.
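
To make that sense of "general" concrete, here's a toy forward-chaining
engine (in Python; my own illustration, not anyone's actual system). The
point is only that the same deduction loop works for any domain you can
axiomatize - the rules below are invented examples:

  # Toy forward-chaining prover. Rules are (premises, conclusion) pairs;
  # the engine applies them until no new facts can be derived. It is
  # domain-agnostic: swap in game rules or ontology axioms unchanged.
  def forward_chain(facts, rules):
      facts = set(facts)
      changed = True
      while changed:
          changed = False
          for premises, conclusion in rules:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True
      return facts

  # Invented example axioms:
  rules = [({"socrates_is_human"}, "socrates_is_mortal"),
           ({"socrates_is_mortal"}, "socrates_is_remembered")]
  print(forward_chain({"socrates_is_human"}, rules))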


Hybrid Systems
-------------------
If we just keep doing what we're doing in "Narrow AI", but look at
combining many components into a coherent architecture, then it seems
plausible that we'll eventually end up with a system that is
indistinguishable from an ideal general intelligence. It may not be an
elegant answer, but it may be an answer. That alone gives good reason to
pursue integration.

Consider, for example, problems like the DARPA Grand Challenges. In current
systems, obstacles may be specifically identified against a hand-coded
database. In the next generations, these representations might become more
generic and be learnt from experience. I see a plausible progression to
increasingly powerful systems. Once the system can identify and learn the
behavior of any new object it encounters (and the rules that govern it),
it may then be able to reason about that object and construct plans that
use the object in novel ways. At first, the planning algorithms seek merely
to visit way-points. Future versions, with richer goals, richer models and
more powerful reasoning, may autonomously deduce novel behaviors beyond
their explicit programming (e.g., "that truck will run into the pedestrian!
My higher goal of not hurting pedestrians means that the best plan is one
in which I stop in front of the truck so that it crashes into me instead
of the pedestrian").
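
As a rough sketch of what "integration" means here (Python; all names are
hypothetical placeholders of my own, not code from any DARPA entry), a
hybrid system is just narrow components - a recognizer and a planner -
glued into one loop, where each piece can later be swapped for a more
general, learned version:

  # Hypothetical hybrid pipeline: a hand-coded obstacle database backed
  # by learned models, feeding a simple waypoint planner.
  KNOWN_OBSTACLES = {"cone", "barrel"}             # today: hand-coded

  def classify(observation, learned_models):
      if observation in KNOWN_OBSTACLES:
          return observation
      return learned_models.get(observation, "unknown")

  def plan(waypoints, obstacle):
      route = []
      for wp in waypoints:
          if obstacle == "unknown":
              route.append(("slow_down_near", wp))  # be cautious
          route.append(("drive_to", wp))
      return route

  learned = {"shopping_cart": "movable_obstacle"}  # next generation: learned
  print(plan([(10, 0), (10, 10)], classify("shopping_cart", learned)))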


Genetic Algorithms and other search algorithms
-------------------
If you have a "genetic language" that is sufficiently general, and infinite
computing power, then a good genetic algorithm can eventually solve any
computable problem. Evolution eventually discovered human beings - given
infinite computing power, at worst you could evolve a virtual human! It
seems reasonable, then, to explore genetic or other search algorithms that
are biased towards the kinds of problems encountered by humans and AGIs.
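
For what it's worth, here is the idea in miniature (Python; the bitstring
target and fitness function are toy placeholders of my own): selection,
crossover and mutation searching a "genetic language" for a solution:

  import random

  # Minimal genetic algorithm over bitstrings: selection, crossover,
  # mutation. TARGET and fitness() are toy stand-ins for a real problem.
  TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

  def fitness(genome):
      return sum(g == t for g, t in zip(genome, TARGET))

  def mutate(genome, rate=0.1):
      return [1 - g if random.random() < rate else g for g in genome]

  def crossover(a, b):
      point = random.randrange(1, len(a))
      return a[:point] + b[point:]

  population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
  for generation in range(200):
      population.sort(key=fitness, reverse=True)
      if fitness(population[0]) == len(TARGET):
          break
      parents = population[:10]
      children = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(10)]
      population = parents + children
  print("generation", generation, "best", max(population, key=fitness))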


Activation, Similarity, Analogizing, HTM, Confabulation and other "targeted"
approaches
-------------------
There seem to be a lot of groups working on specific modes of thought. You
may not be convinced that they're solving enough of the problem, but it
seems plausible to me that general intelligence might really be easy once
you've managed to solve some particular subproblem. That is, we might have
an 80/20 rule, or even a 99.9/0.1 rule, at play in intelligence.

Maybe the brain really does learn only a few techniques for problem
solving, and all the hard work is done in finding analogies between those
successful techniques and the problem at hand. That would be a reasonable
argument for pursuing analogizing.

Maybe the deepest challenge really is in finding which concepts are
associated with which other concepts - i.e., maybe the vast majority of
our brain performs nothing more than primitive association-based learning,
and our higher-level cognition is just the "icing on the cake" (easily
evolved) that unlocks all of the general power of association forming.

As possible evidence that something simple might account for 99.9% of
intelligence, consider the care taken in experiments on animal
intelligence. Great care must be taken to rule out conditioning, because
conditioning successfully explains so many behaviors; many seemingly
intelligent behaviors that appear to demonstrate deep cognition are later
found to be better explained by conditioning. A system that can
efficiently learn by conditioning might therefore prove to be so close to
generally intelligent that only a small effort is needed to close the
final gap to true AGI.
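
To show how little machinery conditioning needs, here is the classic
Rescorla-Wagner update in a few lines of Python (my own sketch; the
stimuli and parameters are illustrative only). The associative strength of
each stimulus simply moves toward the reward it predicts:

  # Rescorla-Wagner conditioning: prediction error drives learning.
  # Stimuli presented together share the available associative strength,
  # which is how effects like blocking fall out of the model.
  def rescorla_wagner(trials, alpha=0.3):
      """trials: list of (set_of_present_stimuli, reward)."""
      strength = {}
      for stimuli, reward in trials:
          prediction = sum(strength.get(s, 0.0) for s in stimuli)
          error = reward - prediction
          for s in stimuli:
              strength[s] = strength.get(s, 0.0) + alpha * error
      return strength

  # Pair a bell and a food smell with reward; each ends up near 0.5,
  # since together they fully predict the reward of 1.0:
  trials = [({"bell", "food_smell"}, 1.0)] * 20
  print(rescorla_wagner(trials))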
 

Computational Forms of Universal Intelligence
-------------------
The universal AIXI approach to intelligence is a plausible solution to AGI
(under the assumptions that infinite computing power is available and that
general intelligence is a computable problem). It therefore seems
reasonable to explore computational approximations of this ideal model as
a path to AGI.
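
As a caricature of such an approximation (Python; my own toy, nothing like
a real AIXI implementation), one can keep a Bayesian mixture over a class
of environment models, weighted by simplicity, and predict with the
mixture - here the "class of all computable environments" is drastically
replaced by periodic bit patterns:

  # Simplicity-weighted mixture prediction, Solomonoff-style in spirit.
  # Each hypothesis says the observation stream repeats `pattern`
  # forever; prior weight 2**-period crudely favors simpler patterns.
  def hypotheses(max_period=4):
      for period in range(1, max_period + 1):
          for n in range(2 ** period):
              yield [(n >> i) & 1 for i in range(period)], 2.0 ** -period

  def predict(history):
      """Posterior-weighted probability that the next bit is 1."""
      total = p_one = 0.0
      for pattern, weight in hypotheses():
          if all(b == pattern[i % len(pattern)]
                 for i, b in enumerate(history)):
              total += weight
              if pattern[len(history) % len(pattern)] == 1:
                  p_one += weight
      return p_one / total if total else 0.5

  # 1.0: every hypothesis consistent with "101010" predicts a 1 next.
  print(predict([1, 0, 1, 0, 1, 0]))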


Neural Networks
-------------------
Even though neural networks seem to have fallen out of favor since their
early days, the human brain serves as an existence proof that this
approach (or a related one) can achieve AGI.
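
For completeness, here is the smallest interesting case (Python; my own
sketch, nothing brain-like in its details): a two-layer network learning
XOR by backpropagation, a function that no single unit can represent:

  import math, random

  # Tiny multilayer perceptron trained on XOR with plain backprop.
  # (With an unlucky initialization it can stall in a local minimum;
  # rerunning with a different seed fixes that.)
  random.seed(1)
  H, LR = 3, 0.5                       # hidden units, learning rate

  def sig(x):
      return 1.0 / (1.0 + math.exp(-x))

  w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
  w_o = [random.uniform(-1, 1) for _ in range(H + 1)]
  data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

  def forward(x1, x2):
      h = [sig(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
      return h, sig(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])

  for _ in range(20000):
      for (x1, x2), target in data:
          h, out = forward(x1, x2)
          d_out = (out - target) * out * (1 - out)  # squared-error grad
          for i in range(H):
              d_h = d_out * w_o[i] * h[i] * (1 - h[i])
              w_o[i] -= LR * d_out * h[i]
              w_h[i][0] -= LR * d_h * x1
              w_h[i][1] -= LR * d_h * x2
              w_h[i][2] -= LR * d_h
          w_o[H] -= LR * d_out

  for (x1, x2), _ in data:
      print((x1, x2), round(forward(x1, x2)[1], 2))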




My point, again, is that we don't know how the first successful AGI will
work - but we can see many plausible ideas being pursued in the hope of
creating something powerful. Some of these are doomed to fail, but we
don't really know which ones until we try them. It doesn't seem fair for
you to say that nobody has offered a "crux" idea, and I'd prefer that
people follow their passions rather than insist that everybody get hung up
on the centuries/millennia-old question of what exactly intelligence is.

-Benjamin Johnston

