***
Agree. I'm not an AGI-SIM fan, though I'm interested in how far it can go.
My own plan is to get a simple robot after the next version of NARS ---
which will include temporal reasoning and procedural interpretation, both
necessary for sensorimotor processing.
****

Procedural interpretation now works quite well, and quite generally, in
Novamente.
Temporal reasoning is our next frontier on the path to AGI.  I think AGI-SIM
will be a great testing-ground for temporal reasoning (which is to be done
via the same algorithms we use for other sorts of reasoning, but with some
little tweaks...)


****
> Strange, I have the exactly opposite feeling. Making sense from noisy,
> ambiguous data is what intelligence is all about. Abstract reasoning is a
> recent invention, and mounted upon that ancient chassis.

Disagree. The order in evolution is not necessarily a good one to follow
in the design of AGI. Also, "reasoning" is not necessarily "abstract".
Instead, "reasoning" can be generally understood as "building new
relations from old ones by following certain patterns", and this
understanding will, hopefully, unify high-level cognition and
sensorimotor processing. In this sense, "reasoning" can be used for
"making sense from noisy, ambiguous data".
****

Pei, I agree in principle.  But I'd add that sensation and action are apt to
require complex and specialized inference control heuristics.  These
heuristics themselves are procedures that can, in principle, be learned via
speculative inference processes.  But in evolution these control heuristics
were presumably arrived at via natural selection rather than explicit
inference.
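Pei's characterization of reasoning — building new relations from old ones
by following certain patterns — can be made concrete even over noisy data.
The sketch below is a toy: the relation names, the confidence values, and
the multiplicative combination rule are all illustrative assumptions, not
NARS's actual truth-value functions.

```python
# Toy illustration (not NARS itself): reasoning as building new relations
# from old ones by following a pattern. The pattern here is transitive
# deduction over uncertain inheritance relations; confidences below 1.0
# stand in for noisy, ambiguous evidence.

# Each belief: (subject, predicate) -> confidence in [0, 1]
beliefs = {
    ("robin", "bird"): 0.95,
    ("bird", "animal"): 0.90,
    ("animal", "living-thing"): 0.98,
}

def deduce(beliefs):
    """Apply the transitivity pattern once: from (a, b) and (b, c),
    build the new relation (a, c), combining confidences by a simple
    (hypothetical) product rule."""
    derived = {}
    for (a, b1), c1 in beliefs.items():
        for (b2, c), c2 in beliefs.items():
            if b1 == b2 and (a, c) not in beliefs:
                conf = c1 * c2
                # Keep the strongest derivation if several paths exist.
                if conf > derived.get((a, c), 0.0):
                    derived[(a, c)] = conf
    return derived

print(deduce(beliefs))  # e.g. ("robin", "animal") with confidence 0.95 * 0.90
```

The point is only that "making sense from noisy data" and "inference" fit
the same schema: old relations in, new relations out, following a pattern.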

In Novamente, perception/action inference control heuristics will be learned
via a combination of probabilistic inference (PTL) and evolutionary learning
(combinator-BOA, also probabilistically based but more global and
speculative than PTL).

In NARS, as I understand it, these heuristics will have to be learned via
NARS higher-order inference applied to Implication relationships and
compound terms related to inference-control primitives and perception and
action primitives.  But I'm not confident that NARS contains any mechanisms
adequate to FORM the right compound terms, which may be large and may not be
easily built up from their components in an incremental way (note that
evolutionary learning doesn't need to build things up in an incremental way,
whereas the NARS inference rules do, insofar as I understand them).
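To make the contrast concrete, here is a minimal evolutionary-learning
sketch (a generic genetic algorithm, not combinator-BOA, which is
probabilistically more sophisticated; the bitstring encoding and fitness
function are illustrative assumptions). Crossover and mutation vary and
evaluate whole candidate structures at once, rather than growing a
structure one component at a time.

```python
# Toy sketch: evolutionary search over whole candidate structures.
# A bitstring stands in for a "compound term"; fitness scores the
# entire candidate, so no intermediate compound need be useful on its own.

import random

random.seed(0)  # for reproducibility of this toy run

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stands in for a "right" compound

def fitness(candidate):
    """Score a whole candidate at once: number of matching positions."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(pop_size=30, generations=40, mut_rate=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # elitist selection of top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]        # crossover swaps whole substructures
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Incremental inference, by contrast, would have to reach such a structure
through a chain of intermediate compositions, each built from pieces
already in hand.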

-- Ben G

