Charles,
I don't see - no doubt being too stupid - how what you are saying is going
to make a categorizer into more than that - into a system that can, say, go
on to learn various logics, or how to build a house or other structures, or
tell a story - that can be a *general* intelligence.
What struck me about the overall discussion of NARS' logical capabilities,
firstly, was that they all depended - & I think you may have made this
point - on everyone's *common sense* interpretations of inheritance and
other relations and the logic generally. In other words, any logic is - and
always will be - a very *secondary* sign system for both representing and
reasoning about the world. It is a highly evolved derivative of more basic,
common sense systems in the brain - and, like language itself, has
continually to be "made sense of" by the brain. (That's why I would suspect
that all of you, however versed in logic you are, will, while looking at
those logical propositions, go fuzzy from time to time - when your brain
can't for a while literally make sense of them).
A hierarchy of abstract/concrete sign systems, grounded in the senses, is -
I believe - essential for any AGI and general learning - and NARS, AFAICT,
lacks that.
Secondly, I don't see how what you are saying will give NARS the ability to
*create* new rules and strategies for its activities (ones that are not
derived from existing rules). AFAICT it simply applies logic and follows
rules, even though they include rules for modifying rules. It cannot, as Pei
or Bayes have done, create or fundamentally extend logics. If so, it is
still narrow AI, not AGI.
(There is, I repeat, a major need for a philosophical distinction between AI
and AGI - in talking about the area of the last paragraph, I think we all
flounder and grope for terms).
Mike Tintner wrote:
Charles H: As I understand it, this still wouldn't be an AGI, but merely a
categorizer.
That's my understanding too. There does seem to be a general problem in
the field of AGI in distinguishing AGI from narrow AI - philosophically. In
fact, I don't think I've seen any definition of AGI or intelligence that
draws the distinction.
But *do* notice that the terminal nodes are uninterpreted. This means
that they could be assigned, e.g., procedural values.
Because of this, even though the current design (as I understand it) of
NARS is purely a categorizer, it's not limited in what its extensions and
embedding environment can be. It would be a trivial extension to allow
terminal nodes to have a type, such that what is done when a terminal node
is generated could depend upon that type.
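To make the suggestion concrete, here is a minimal sketch of what "typed terminal nodes" could look like. All names here (NodeType, TerminalNode, on_generated) are hypothetical illustrations, not part of NARS; the point is just that an uninterpreted SYMBOL node costs nothing, while a PROCEDURE node can carry an executable value that fires when the node is generated.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

# Hypothetical node types -- NARS itself does not define these; this is
# only a sketch of the "typed terminal node" extension described above.
class NodeType(Enum):
    SYMBOL = auto()      # plain, uninterpreted terminal
    PROCEDURE = auto()   # terminal bound to an executable action

@dataclass
class TerminalNode:
    name: str
    node_type: NodeType = NodeType.SYMBOL
    action: Optional[Callable[[], None]] = None

    def on_generated(self) -> None:
        """What happens on generation depends on the node's type."""
        if self.node_type is NodeType.PROCEDURE and self.action is not None:
            self.action()

# A SYMBOL node does nothing when generated; a PROCEDURE node runs its action.
log = []
sym = TerminalNode("cat")
proc = TerminalNode("beep", NodeType.PROCEDURE, lambda: log.append("beep!"))
sym.on_generated()
proc.on_generated()
print(log)  # -> ['beep!']
```

The embedding environment, not the categorizer, decides what the procedural values mean, which is what keeps the core design unchanged.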
(There's a paper called "wang.roadmap.pdf" that I *must* get around to
reading!)
P.S.: In the paper on computations, it seems to me that items of high
durability should not be dropped from the processing queue even if it
becomes full of higher-priority tasks. There should probably be a
"postponed tasks" location where things like garbage collection and
database sanity checking and repair can be saved, to be done during future
idle times.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email