On Wednesday 14 March 2007 06:44, Ben Goertzel wrote:

> Here is a brain question though: In your approach, the recursive
> build-up of patterns-among-patterns-
> ...-among-patterns seems to rely on the ability to treat transformations
> (e.g. matrices, or perhaps
> nonlinear transformations represented by NN modules?) as inputs to other
> transformations.
>
> How do you hypothesize this occurring in the brain?

I don't really know enough neuro to give you a principled answer to that. But 
all the NN models I know about are capable of computing non-linear functions, 
especially in multi-layer formations. 

In my system there are two answers.

(a) you pass the signal through multiple modules, which has the same effect as 
clipping the linear transformations in NNs with a sigmoid;

(b) I cheat shamelessly, allowing myself to use any function I want and 
assuming I could have built it from the primitives if I'd really tried. 
This leaves me, at the moment, with a chunk of opaque circuitry in the system.
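A minimal sketch of (a), assuming each module is an affine map followed by a sigmoid squashing stage (the weights, biases, and inputs here are made-up illustrations, not anything from the system itself):

```python
import math

def sigmoid(x):
    """Squash an activation into (0, 1), clipping the linear range."""
    return 1.0 / (1.0 + math.exp(-x))

def module(weights, biases, inputs):
    """One module: a linear transformation followed by sigmoid clipping."""
    pre = [sum(w * x for w, x in zip(row, inputs)) + b
           for row, b in zip(weights, biases)]
    return [sigmoid(p) for p in pre]

# Composing two modules yields a non-linear function overall,
# even though the core transformation inside each one is linear.
h = module([[1.0, -2.0], [0.5, 0.5]], [0.0, -1.0], [2.0, 3.0])
y = module([[1.5, 1.5]], [0.0], h)
```

The point is only that stacking such stages gives you non-linearity for free, the same way multi-layer NNs do.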

> I.e. how do you hypothesize the brain implementing the transformation
>
> [connectivity / connection-strength pattern of neural subnet A] ==>
> [list of input activations to neural subnet B]
>
> What neural subsystems, what dynamics within them, lead to this?

In my thinking I've dropped the neural inspiration and everything is in terms 
of pure math. Each module (probably better to drop that term, since it's 
ambiguous and confusing: let's use IAM, interpolating associative memory, 
instead) is simply a relation, a set of points in N-space, with an implied 
surface or manifold stretched between them. If you wire it feedforward, it's a 
function; if you wire it recurrently, it's an FSA or even a Hopfield-net-like 
continuous-state automaton. 
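A toy sketch of an IAM used feedforward, as described above: a stored set of points in N-space with a surface implied between them by interpolation. The original doesn't commit to an interpolation scheme, so inverse-distance weighting is used here purely as an illustrative choice:

```python
class IAM:
    """Interpolating associative memory: a relation stored as a set of
    (input, output) points in N-space, with a surface implied between
    them. Inverse-distance weighting is one illustrative interpolation
    scheme; the text does not specify a particular one."""

    def __init__(self):
        self.points = []  # list of (input_vector, output_vector) pairs

    def store(self, x, y):
        self.points.append((list(x), list(y)))

    def recall(self, x):
        """Feedforward use: evaluate the implied surface at x."""
        weights, total = [], 0.0
        for px, py in self.points:
            d2 = sum((a - b) ** 2 for a, b in zip(px, x))
            if d2 == 0.0:
                return list(py)  # exactly on a stored point
            w = 1.0 / d2
            weights.append((w, py))
            total += w
        dim = len(self.points[0][1])
        return [sum(w * py[i] for w, py in weights) / total
                for i in range(dim)]

mem = IAM()
mem.store([0.0, 0.0], [0.0])
mem.store([1.0, 1.0], [1.0])
out = mem.recall([0.5, 0.5])  # interpolated between the stored points
```

Wiring it recurrently would just mean feeding `recall`'s output back in as (part of) the next input, which is what turns the same stored surface into a continuous-state automaton.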

> It seems that you need this in your picture.

> Is this something that you envision happening all the time within LTM,
> or something that you envision
> happening only when transformations are "active" in working memory?

At the moment I can only speculate as to the difference between LTM and 
working memory. I don't yet have a good theory of how writing, forgetting, 
and merging of traces happen, and I suspect the LTM/working-memory distinction 
is a phenomenon of that process. If you practice something for 5 minutes, 
you're apt to forget it; for an hour, you're apt to remember it. 

There's clearly a distinction between the dynamically stable signal patterns 
running through the system and the stored surfaces; to some extent the parts 
of the surfaces addressed by the current dynamic configuration count as 
"working", and the parts that are inaccessible because they aren't being 
addressed count as "long term". But there's probably more to the story than 
both of these notions together.

> I don't think you can demonstrate anything exciting with 10-100
> concepts, but I agree that you can come to understand some aspects of your
> theory better via this sort of experimentation.

That's the main idea. But I'll bet I can control a 10-axis robot with a 
100-IAM system.

Josh


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
