[Not sure if this made it to the list. -Doug]

---------------------------- Original Message ----------------------------
From:    "Alessandro Warth" <[EMAIL PROTECTED]>
Date:    Thu, June 7, 2007 2:25 pm
--------------------------------------------------------------------------

Hello,

I saw an example in the Conx tutorial that implements Pollack's
Recursive Auto-Associative Memory (RAAM)
(http://pyrorobotics.org/?page=PyroRAAMExample), and now I would like to
modify it to implement Lonnie Chrisman's Dual-Ported RAAM
(http://citeseer.ist.psu.edu/chrisman91learning.html).

A Dual-Ported RAAM is basically a network consisting of two RAAMs that
_share_ the same hidden layer. They are useful for doing
transformations on structured data.

Here are a couple of questions:

(1) Dual-Ported RAAMs are trained in three steps. First, you train one of
the RAAMs to auto-associate on the input. Second, you train the other RAAM
to auto-associate on the output. Finally, you train the whole network to
associate the hidden-layer representation of the
input with the output. Does anyone know whether this kind of "partial"
training (i.e., training only part of a network by specifying which units
should be treated as inputs and outputs) is possible in Conx, and if so,
could you please give me some pointers?
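To make concrete what I mean by "partial" training, here is a minimal
NumPy sketch (not Conx — just a hypothetical stand-in, with made-up layer
sizes and toy patterns) of the three phases. The point is that each phase
backpropagates through, and updates, only a subset of the shared network's
weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 3  # arbitrary toy sizes

# RAAM 1: enc1 (input -> shared hidden), dec1 (shared hidden -> input)
# RAAM 2: enc2 (output -> shared hidden), dec2 (shared hidden -> output)
enc1 = rng.normal(0, 0.5, (n_vis, n_hid))
dec1 = rng.normal(0, 0.5, (n_hid, n_vis))
enc2 = rng.normal(0, 0.5, (n_vis, n_hid))
dec2 = rng.normal(0, 0.5, (n_hid, n_vis))

def train_pair(x, target, enc, dec, lr=0.5, epochs=2000):
    """Train one encoder/decoder pair to map x -> target.

    Only `enc` and `dec` are updated (in place); every other weight
    matrix in the network is left untouched -- this is the "partial"
    training step.
    """
    for _ in range(epochs):
        h = sigmoid(x @ enc)            # shared hidden layer
        y = sigmoid(h @ dec)
        dy = (y - target) * y * (1 - y)  # squared-error, sigmoid deriv
        dh = (dy @ dec.T) * h * (1 - h)
        dec -= lr * h.T @ dy
        enc -= lr * x.T @ dh
    return y

X = np.eye(n_vis)[:4]   # toy "input" patterns
Y = np.eye(n_vis)[2:6]  # toy "output" patterns

train_pair(X, X, enc1, dec1)        # phase 1: RAAM 1 auto-associates input
train_pair(Y, Y, enc2, dec2)        # phase 2: RAAM 2 auto-associates output
out = train_pair(X, Y, enc1, dec2)  # phase 3: hidden rep of input -> output
print("final MSE:", np.mean((out - Y) ** 2))
```

Note that in phase 3 the gradient flows only through enc1 and dec2, while
dec1 and enc2 are frozen — that freezing of subsets of weights is exactly
what I am asking whether Conx supports.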

(2) I read somewhere in the Conx documentation that layers must be created
in the same order in which you plan to connect them. Does this make it
impossible to implement Dual-Ported RAAMs in Conx?

Thanks in advance for your help!

Cheers,
Alex



-- 
Douglas S. Blank
Associate Professor, Bryn Mawr College
http://cs.brynmawr.edu/~dblank/
Office: 610 526 6501

_______________________________________________
Pyro-users mailing list
[email protected]
http://emergent.brynmawr.edu/mailman/listinfo/pyro-users
