Jim,

 

In my prior posts I have listed some of the limitations of Shruti.  The
lack of generalized generalization and compositional hierarchies directly
relates to the problem of learning generalized rules from experience in
complex environments, where the surface representations of many high-level
concepts are virtually never the same.  This relates to your issue about
failing to model the complexity of antecedents.

 

But as the Serre paper I have cited multiple times in this thread shows,
the type of gen/comp hierarchies needed is very complex.  His system
models a 160x160-pixel greyscale image patch with 23 million models, probably
each having something like 256 inputs, for a total of about 6 billion links,
and this is just to do very quick, feedforward, I-think-I-saw-a-lion
uncertain recognition of 1000 objects.  So for a Shruti system to capture
all the complexities involved in human-level perception or semantic
reasoning would require much more in the way of computing resources than
Shastri had.
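To make the arithmetic concrete, here is a back-of-envelope sketch.  The 23 million models and the 160x160 patch are from the Serre paper as cited above; the 256 inputs per model is my own assumed figure, as stated:

```python
# Back-of-envelope estimate of the link count in Serre's feedforward model.
# 23 million model units (from the paper); ~256 inputs per model is an
# assumption, not a figure from the paper.
models = 23_000_000        # model units covering a 160x160 greyscale patch
inputs_per_model = 256     # assumed fan-in per model unit
links = models * inputs_per_model
print(f"{links:,} links (~{links / 1e9:.1f} billion)")
# -> 5,888,000,000 links (~5.9 billion)
```

That works out to roughly 6 billion links, as stated above.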

 

So although Shruti is clearly very limited, it is amazing how much
it does considering how simple it is.

 

But the problem is not just complexity.  As I said, Shruti has some severe
architectural limitations.  But again, it was smart of Shastri to get his
simplified system up and running first before making all the architectural
fixes required to make it capable of more generalized implication and
learning.

 

I have actually spent some time thinking about how to generalize Shruti.
If those ideas, or their equivalent, are not in Ben's new Novamente book, I
may take the trouble to write them up, but I am expecting a lot from Ben's
new book.

 

I did not understand your last sentence.

 

Ed Porter

 

-----Original Message-----
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Sunday, July 13, 2008 3:47 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

I have read about half of Shastri's 1999 paper "Advances in Shruti - A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony" and I see that he is describing a
method of encoding general information and then using it to do a certain
kind of reasoning, usually called inferential, although he seems to have a
novel way to do this using what he calls "neural circuits". And he does
seem to touch on the multiple-level issues that I am interested in.
The problem is that these kinds of systems, regardless of how interesting
they are, are not able to achieve extensibility because they do not truly
describe how the complexities of the antecedents would have themselves been
achieved (learned) using the methodology described. The unspoken assumption
behind these kinds of studies always seems to be that the one or two systems
of reasoning used in the method should be sufficient to explain how learning
takes place, but the failure to achieve intelligent-like behavior (as is
seen in higher intelligence) gives us a lot of evidence that there must be
more to it.

But, the real problem is just complexity (or complicatedity for Richard's
sake) isn't it?  Doesn't that seem like it is the real problem?  If the
program had the ability to try enough possibilities wouldn't it be likely to
learn after a while?  Well, another part of the problem is that it would have
to get a lot of detailed information about how good its efforts were, and
this information would have to be pretty specific using the methods that are
common to most current thinking about AI.  So there seem to be two different
kinds of problems.  But the thing is, I think they are both complexity (or
complicatedity) problems.  Get a working solution for one, and maybe you'd
have a working solution for the other.

I think a working solution is possible, once you get beyond the simplistic
perception of seeing everything as if it were all ideologically commensurate
just because you have the belief that you can understand it.
Jim Bromer

 

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com