Yan King Yin wrote:
We need to identify AGI bottlenecks and tackle them systematically.
Basically the AGI problem is to:
1. design a knowledge representation
2. design learning algorithms
3. fill the thing with knowledge
The difficulties are:
1. the KR may be inadequate, but the designer doesn't know it
2. learning algorithms are hard to design, or are inefficient
3. not enough brute force (funding, etc.) to fill the AGI with knowledge
One thing I suggest is for all/most AGI groups to agree on a common KR,
but it seems that differences among current AGI architectures are
difficult to reconcile.
If the KRs are different, then we're all on our own to design custom
learning algorithms.
Then the next thing we can do is to share knowledge-filling. We may
vote on a common domain to experiment with, and then jointly develop a
large training corpus (sharing the costs / labor). This seems to be the
more feasible option.
YKY
This is too simple by a long way. I can design a KR easily enough, but
KRs are not libraries; they are used by something. The "using part" is
what counts: it might take five days to design the KR, but five (or
fifty) years to design and build the system that makes use of the KR.
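The point can be made concrete. The sketch below is purely illustrative (my example, not any group's actual design): a complete subject-predicate-object triple store, a workable "five-day" KR, in a few lines of Python. Everything hard about AGI — the learning, inference, and grounding that would *use* these triples — is exactly what it leaves untouched.

```python
# An illustrative "five-day" knowledge representation: a store of
# (subject, predicate, object) triples with wildcard queries.
# Hypothetical sketch only; the "using part" is entirely absent.

class TripleStore:
    def __init__(self):
        self.facts = set()

    def add(self, subj, pred, obj):
        self.facts.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        # None acts as a wildcard on that position.
        return [f for f in self.facts
                if (subj is None or f[0] == subj)
                and (pred is None or f[1] == pred)
                and (obj is None or f[2] == obj)]

kb = TripleStore()
kb.add("cat", "is-a", "mammal")
kb.add("mammal", "is-a", "animal")
print(kb.query(subj="cat"))
```

The representation itself is trivial; nothing in it says how facts get in, how contradictions are resolved, or how queries connect to action.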
I had similar feelings about William Pearson's recent message about
systems that use reinforcement learning:
A reinforcement learning scenario, as defined on Wikipedia:
"Formally, the basic reinforcement learning model consists of:
1. a set of environment states S;
2. a set of actions A; and
3. a set of scalar "rewards" in the Reals."
Here is my standard response to Behaviorism (which is what the above
reinforcement learning model actually is): who decides when the rewards
should come, and who chooses what the relevant "states" and "actions" are?
If you find out what is doing *that* work, you have found your
intelligent system. And it will probably turn out to be so enormously
complex, relative to the reinforcement learning part shown above, that
the above formalism (assuming it has not been discarded by then) will be
almost irrelevant.
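To make the objection concrete, here is the entire formalism realized as a tabular Q-learning sketch on a hypothetical toy task of my own invention (not from William Pearson's message): an agent walks a 5-cell corridor and is rewarded for reaching the end. Notice that the states, the actions, the dynamics, and the reward schedule are all hand-written by the designer; the learning rule itself is four lines.

```python
import random
from collections import defaultdict

# Minimal sketch of the quoted formalism via tabular Q-learning.
# Toy task (illustrative assumption): walk right along a 5-cell
# corridor; reward 1.0 on entering the last cell.

S = range(5)     # DESIGNER-CHOSEN: the set of environment states
A = [-1, +1]     # DESIGNER-CHOSEN: the set of actions (left / right)

def step(s, a):  # DESIGNER-CHOSEN: transition dynamics and scalar reward
    s2 = max(0, min(4, s + a))
    return s2, (1.0 if s2 == 4 else 0.0)

Q = defaultdict(float)             # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for _ in range(500):               # 500 training episodes
    s = 0
    while s != 4:
        # epsilon-greedy action selection, breaking ties randomly
        if random.random() < eps:
            a = random.choice(A)
        else:
            a = max(A, key=lambda x: (Q[(s, x)], random.random()))
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in A) - Q[(s, a)])
        s = s2

# The learned policy prefers "right" in every non-terminal state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(4)))
```

Everything the formalism treats as given — S, A, step() — is where the design work in this toy lives; the same division of labor is what the critique above says scales badly to real intelligence.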
Just my deux centimes' worth.
On a more positive note, I do think it is possible for AGI researchers
to work together within a common formalism. My presentation at the
AGIRI workshop was about that, and when I get the paper version of the
talk finalized I will post it somewhere.
Richard Loosemore