Benjamin [as in Johnston :)],

Thank you for a detailed response which is totally constructive (an uncommon thing, and I appreciate it), and therefore very helpful.

It helps me understand how you & others think. I can see more clearly why you believe - reasonably, from your POV - that crux ideas have been offered. I hope I can show you why they're not really crux ideas.

I think your approach here *is* representative - and, as you indicate, the details of the different approaches to AGI in this discussion aren't that important. What is common, IMO, to your thinking and that of others here is that you all start by asking yourselves: what kinds of programming will solve AGI? Because programming is what interests you most and is your life.

And in assessing the value of different approaches, you reason logically - as you do, for example, about GAs:

"If you have a "genetic language" that is sufficiently general, and infinite
computing power, then a good genetic algorithm can eventually solve any
computable problem."

Well, put like that, how can GAs fail? Even if you take a more specific logical formulation - say, loosely off the top of my head, "GAs can mix a given set of elements any which way to arrive at new, unforeseen approaches to any problem" - it can still sound good, as if it might solve AGI.

However, logical reasoning proves nothing - and can just as easily be used to "disprove" all these approaches, as indeed it has been.

What you have to do in order to produce a true, crux idea, I suggest, is not just define your approach but APPLY IT TO A PROBLEM EXAMPLE OR TWO of general intelligence - show how it might actually work.

You have to show how, for example, your GA might enable your lego-constructing system to solve an unfamiliar problem about building a dam of rocks in water. You must show that even though it had only learned about regularly-shaped bricks, it could nevertheless recognize irregularly-shaped rocks as, say, "building blocks"; and even though it had only learned to build on solid ground, it could nevertheless proceed to build on ground submerged in water. [I think, BTW, that when you try to do this, you will find that GAs *won't* work.]

You don't just have to tell me in general terms what your programming approach can do, you have to apply it to specific true AGI END-PROBLEMS - and invite additional tests.

I suggest you look again at any of the approaches you mention, as formally outlined, and you will not find a single one that is actually applied to an end-problem - to a true test of its AGI domain-crossing potential. And I think if you go through the archives here, you also won't find a single attempt in relevant discussions to do likewise. On the contrary, end-problems are shunned like the plague.

(And you see yet another example of this general philosophy in Arthur Murray's recent formulation of his system/approach - no attempt to apply it to a general intelligence end-problem, only the non-AGI problems that he has carefully selected. Happens again and again. Yet another reason why that "General Test" is so important).

Without application to AGI problem examples, you don't have crux ideas, you only have "hand-waving around the problem." I quote below an eloquent post from a Slashdot discussion of the McKinstry/Singh suicides, which underlines my points - it testifies to the long history of different AI/AGI schools of programming, all of which, I suggest, were never really applied to AGI end-problems, or to a true AGI test, as they should have been from the very beginning. The post also offers hope, because it shows that when you really pressure AI/AGI-ers to apply themselves to end-problems, as with DARPA, you start to get real results - but you do really have to pressure. (I appreciate that DARPA's AGI status is debatable):

"""It's discouraging reading this. Especially since I knew some of the Cyc [cyc.com] people back in the 1980s, when they were pursuing the same idea. They're still at it. You can even train their system [cyc.com] if you like. But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.

I went through Stanford CS back when it was just becoming clear that "expert systems" were really rather dumb and weren't going to get smarter. Most of the AI faculty was in denial about that. Very discouraging. The "AI Winter" followed; all the startups went bust, most of the research projects ended, and there was a big empty room of cubicles labeled "Knowledge Systems Laboratory" on the second floor of the Gates Building. I still wonder what happened to the people who got degrees in "Knowledge Engineering". "Do you want fries with that?"

MIT went into a phase where Rod Brooks took over the AI Lab and put everybody on little dumb robots, at roughly the Lego Mindstorms level. Minsky bitched that all the students were soldering instead of learning theory. After a decade or so, it became clear that reactive robot AI could get you to insect level, but no further. Brooks went into the floor-cleaning business (Roomba, Scooba, Dirt Dog, etc.) with the technology, with some success.

Then came the DARPA Grand Challenge. Dr. Tony Tether, the head of DARPA, decided that AI robotics needed a serious kick in the butt. That's what the DARPA Grand Challenge was really all about. It was made clear to the universities receiving DARPA money that if they didn't do well in that game, the money supply would be turned off. It worked. Levels of effort not before seen on a single AI project produced some good results. Stanford had to replace many of the old faculty, but that worked out well in the end.

This is, at last, encouraging. The top-down strong AI problem was just too hard. Insect-level AI, with no world model, was too dumb. But robot vehicle AI, with world models updated by sensors, is now real. So there's progress. The robot vehicle problem is nice because it's so unforgiving. The thing actually has to work; you can't hand-wave around the problems.

The classic bit of hubris in AI, by the way, is to have a good idea and then think it's generally applicable. AI has been through this too many times - the General Problem Solver, inference by theorem proving, neural nets, expert systems, neural nets again, and behavior-based AI. Each of those ideas has a ceiling which has been reached.

It's possible to get too deep into some of these ideas. The people there are brilliant, but narrow, and the culture supports this. MIT has "Nerd Pride" buttons."""



Benjamin:

MT:
>> Fine. Which idea of anyone's do you believe will directly produce
>> general intelligence - i.e. will enable an AGI to solve problems in
>> new unfamiliar domains, and pass the general test I outlined? (And
>> everyone surely agrees, regardless of the test, that an AGI must have
>> "general" intelligence).

Well, as I said before, I don't know which will directly produce general
intelligence and which of them will fail.

I have my own theories about which approaches are more likely to succeed
than others, and about which approaches are fundamentally wrong. However,
most serious ideas seem to have a plausible story, and I'm not ready to
completely rule out any serious idea until it is proven wrong.

I'll briefly discuss some ideas below. You may not agree with my
interpretation of the approaches, and you may not fully agree with my
argument about why they're plausible... but I think that you surely have to
agree that a plausible argument can be made for most of this research, and
it is clear that the people conducting the research can see themselves as
addressing the crucial questions.

That is, while I'm not the right person to be arguing the details of these
approaches, I'm confident that many researchers here wouldn't be devoting
their time to their research if they didn't see a coherent picture for how
their work fits into the grand scheme of AGI.

Many apologies to other readers if I've not included your preferred approach
or have misrepresented/misinterpreted your ideas. I've just taken a quick
and informal sample here. The details aren't as important as the overall
message.

Logic
-------------------
An automated theorem prover is an extremely general purpose intelligence.
Consider, for example, how logics may be adapted to many different domains
on the Semantic Web or the increasing strength of competitors in General
Game Playing competitions (surely not long before they're better than the
average human at any novel game?). Whether logic can be applied to
general-purpose embodied intelligent systems remains to be seen - I think
the symbol grounding problem points towards logic not being enough - but
researchers looking into logics with uncertainty or logics that incorporate
iconic representations are effectively exploring a possible "solution" to
the symbol grounding problem.

In other words, these researchers are saying "Logical deduction offers true
'general intelligence' in symbolic domains, and we're trying to adapt that
intelligence to real life situations": a plausible crux idea and worth
pursuing.
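The kind of generality being claimed here can be shown in a few lines. Below is a toy forward-chaining prover over Horn-style rules - the predicates and rules are invented for illustration (they loosely echo the bricks-and-rocks example earlier in this thread) and are not drawn from any real system:

```python
# Toy forward chaining over Horn-style rules (premises -> conclusion).
# Illustrates how one deduction procedure works unchanged across domains;
# the rule set below is hypothetical.

def forward_chain(facts, rules):
    """Return all facts derivable from the initial facts and the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Anything rigid and stackable counts as a building block - so a rock
# qualifies just as a brick does, with no rock-specific code.
rules = [
    ({"rigid", "stackable"}, "building_block"),
    ({"brick"}, "rigid"), ({"brick"}, "stackable"),
    ({"rock"}, "rigid"), ({"rock"}, "stackable"),
]
print(forward_chain({"rock"}, rules))
```

The same `forward_chain` function serves any domain whose knowledge can be written as rules - which is exactly why the symbol grounding problem (where do the rules and predicates come from?) is the sticking point rather than the deduction itself.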


Hybrid Systems
-------------------
If we just keep doing what we're doing in "Narrow AI", but look at combining
many components into a coherent architecture, then it seems plausible that
we'll eventually end up with a system that is indistinguishable from an
ideal general intelligence. It may not be an elegant answer, but it may be
an answer. This gives good reason to pursue integration.

Consider, for example, problems like the DARPA Grand Challenges. In current
systems, obstacles may be specifically identified against a hand-coded
database. In the next generations, these representations might become more
generic and learnt from experience. I see a plausible progression to
increasingly more powerful systems. When the system can identify and learn
the behavior of any new object it encounters (and the rules that govern it),
it may then be able to reason about that object and construct plans that
use the object in novel ways. At first the planning algorithms seek merely
to visit way-points. Future versions, with richer goals, richer models and
more powerful reasoning may autonomously deduce novel behaviors beyond their
explicit programming (e.g., that truck will run into the pedestrian! my
higher goal of not hurting pedestrians means that the best plan is one in
which I stop in front of the truck so that it crashes into me instead of the
pedestrian).


Genetic Algorithms and other search algorithms
-------------------
If you have a "genetic language" that is sufficiently general, and infinite
computing power, then a good genetic algorithm can eventually solve any
computable problem. Evolution eventually discovered human beings - given
infinite computing power, then at worst you could evolve a virtual human! It
seems reasonable then to consider exploring genetic or other search
algorithms that have a bias towards the kinds of problems encountered by
humans and AGI.
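For concreteness, here is the GA idea at its most stripped-down: a bitstring population evolving toward an all-ones target (the classic "OneMax" toy problem). All parameters are arbitrary illustrative choices, and nothing here is specific to any AGI proposal:

```python
import random

def evolve(length=20, pop_size=30, generations=200, seed=0):
    """Tiny genetic algorithm: maximize the number of 1-bits."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break                                # perfect individual found
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)            # single-point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(sum(evolve()))
```

The "sufficiently general genetic language" claim amounts to replacing the bitstring with a richer program representation and the fitness function with a measure of general intelligence - that substitution is the hard, undemonstrated step.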


Activation, Similarity, Analogizing, HTM, Confabulation and other "targeted"
approaches
-------------------
There seem to be a lot of groups working on specific modes of thought. You
may not be convinced that they're solving enough of the problem, but it
seems plausible to me that maybe general intelligence really is easy once
you've managed to solve some particular problem. That is, we might have an
80/20 rule or even a 99.9/0.1 rule at play with intelligence.

Maybe the brain really does learn only a few techniques for problem solving, and
all the hard work is done in finding analogies between the successful
techniques and the given problem. This would be a reasonable argument for
pursuing analogizing.

Maybe the deepest challenge really is in finding what concepts are
associated with other concepts - i.e., that the vast majority of our brain
performs nothing more than primitive association based learning, and our
higher-level cognition is just the "icing on the cake" (that was easily
evolved), but that unlocks all of the general powers of association forming.

As possible evidence that something simple might be 99.9% of intelligence,
it might be worth considering the care taken in experiments testing animal
intelligence. Great care must be taken to rule out conditioning, because
conditioning can be used to successfully explain so many behaviors. Many
seemingly intelligent behaviors demonstrating deep cognition are later
discovered to be better explained by conditioning; a system that can
efficiently learn by conditioning might prove to be so close to generally
intelligent that it only takes a small effort to close that final gap to
true AGI.
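As a concrete sketch of how bare conditioning works, here is a Rescorla-Wagner style associative learner (a standard textbook model of classical conditioning). The stimuli, trial counts and learning rate are illustrative choices, not from any actual experiment:

```python
def condition(trials, alpha=0.3):
    """Rescorla-Wagner update: nudge associative strengths toward the
    observed reward, in proportion to the shared prediction error."""
    V = {}
    for stimuli, reward in trials:
        prediction = sum(V.get(s, 0.0) for s in stimuli)
        error = reward - prediction              # shared prediction error
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Pair a light with food, then present light and tone together.
# The pre-trained light "blocks" learning about the tone - a classic
# conditioning effect that can masquerade as deeper cognition.
trials = [({"light"}, 1.0)] * 20 + [({"light", "tone"}, 1.0)] * 20
V = condition(trials)
print(round(V["light"], 3), round(V["tone"], 3))
```

A handful of update rules like this one explains a surprising range of animal behavior, which is why the experimental controls mentioned above are so stringent.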


Computational Forms of Universal Intelligence
-------------------
The universal AIXI approach to intelligence is a plausible solution to AGI
(under assumptions of infinite computing power and that general intelligence
is a computable problem). It therefore seems reasonable to consider that
computational approximations of this ideal model might lead to AGI.
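For reference, AIXI selects actions by an expectimax over all environment programs consistent with the interaction history, weighted by program length. Written from memory, so treat this as a sketch of the form rather than an authoritative statement of Hutter's definition:

```latex
a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
      \left( r_t + \cdots + r_m \right)
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $U$ is a universal Turing machine, $q$ ranges over environment programs, and $\ell(q)$ is the length of $q$; the $2^{-\ell(q)}$ mixture over all programs is the incomputable part that computational approximations try to tame.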


Neural Networks
-------------------
Even though neural networks seem to have fallen out of favor compared to
their early days, the human brain serves as an existence proof that with
this approach (or related approaches) it is possible to achieve AGI.




My point, again, is that we don't know how the first successful AGI will
work - but we can see many plausible ideas that are being pursued in the
hope of creating something powerful. Some of these are doomed to fail; but
we don't really know which ones they are until we try them. It doesn't seem
fair for you to say that nobody has offered a "crux" idea, and I'd prefer
that people follow their passions rather than insist that everybody should
get hung up on the centuries-old question of what exactly intelligence is.

-Benjamin Johnston


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;





