I have read about half of Shastri's 1999 paper "Advances in SHRUTI—A neurally
motivated model of relational knowledge representation and rapid inference
using temporal synchrony," and I see that he is describing a method of
encoding general information and then using it to do a certain kind of
reasoning, usually called inferential reasoning, though he seems to have a
novel way of doing it using what he calls "neural circuits". And he does seem
to touch on the multiple-level issues that I am interested in. The problem is that
these kinds of systems, however interesting they are, cannot achieve
extensibility, because they do not truly describe how the complexities
of the antecedents would themselves have been achieved (learned) using the
methodology described. The unspoken assumption behind these kinds of studies
always seems to be that the one or two systems of reasoning used in the method
should be sufficient to explain how learning takes place, but the failure to
achieve intelligent-like behavior (of the kind seen in higher intelligence)
gives us a lot of evidence that there must be more to it.
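To make the temporal-synchrony idea concrete, here is a toy sketch of phase-based role-filler binding of the sort SHRUTI is built on. This is my own drastic simplification, not Shastri's actual circuit model: the phase slots, the dictionary encoding, and the give/own rule function are illustrative assumptions, though the give(x, y, z) -> own(z, y) example is the standard one from that line of work. The idea is that a role node and its filler node are "bound" by firing in the same phase of an oscillation cycle, and a rule propagates phases from antecedent roles to consequent roles.

```python
# Toy sketch of SHRUTI-style temporal-synchrony binding (a simplification,
# not Shastri's circuit model). Each entity (filler) fires in its own phase
# slot of a global cycle; a role is bound to a filler by sharing its phase.

# Assign each entity a distinct phase slot (hypothetical encoding).
entity_phase = {"John": 0, "book": 1, "Mary": 2}

# The fact give(giver=John, object=book, recipient=Mary), encoded as
# role-node phases rather than symbols.
give_fact = {"giver": entity_phase["John"],
             "object": entity_phase["book"],
             "recipient": entity_phase["Mary"]}

def apply_give_implies_own(give_binding):
    """Rule give(x, y, z) -> own(z, y): copy the phases of the antecedent's
    role nodes onto the consequent's role nodes."""
    return {"owner": give_binding["recipient"],
            "object": give_binding["object"]}

own_fact = apply_give_implies_own(give_fact)

# Decode phases back to entities to read off the inferred fact.
phase_entity = {p: e for e, p in entity_phase.items()}
inferred = {role: phase_entity[p] for role, p in own_fact.items()}
print(inferred)  # {'owner': 'Mary', 'object': 'book'}
```

Note that the inference happens purely by phase propagation, with no symbol matching at rule-application time, which is what lets the model claim rapid (one-pass) inference. It also illustrates my complaint above: the rule itself is hand-wired, and nothing in the mechanism says how such antecedents would have been learned.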
But the real problem is just complexity (or complicatedity, for Richard's
sake), isn't it? Doesn't that seem like the real problem? If the program had
the ability to try enough possibilities, wouldn't it be likely to learn after a
while? Well, another part of the problem is that it would have to get a lot of
detailed information about how good its efforts were, and this information
would have to be pretty specific under the methods common to most current
thinking about AI. So there seem to be two different kinds of problems. But I
think they are both complexity (or complicatedity) problems: get a working
solution for one, and you might have a working solution for the other.
I think a working solution is possible, once you get beyond the simplistic
perception of seeing everything as if it were ideologically commensurate just
because you believe you can understand it.
Jim Bromer
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/