Hi Doug,

Thanks so much for your reply. I'm going to try this out over the
weekend, and will let you know if I run into any problems.

Thanks again,
Alex

On 6/8/07, Douglas S. Blank <[EMAIL PROTECTED]> wrote:

On Fri, June 8, 2007 8:10 am, Douglas S. Blank said:
> [Not sure if this made it to the list. -Doug]
>
> ---------------------------- Original Message ----------------------------
> From:    "Alessandro Warth" <[EMAIL PROTECTED]>
> Date:    Thu, June 7, 2007 2:25 pm
> --------------------------------------------------------------------------
>
> Hello,
>
> I saw an example in the Conx tutorial that implements Pollack's
> Recursive Auto-Associate Memory
> (http://pyrorobotics.org/?page=PyroRAAMExample), and now I would like to
> modify it to implement Lonnie Chrisman's Dual-Ported RAAM
> (http://citeseer.ist.psu.edu/chrisman91learning.html).
>
> A Dual-Ported RAAM is basically a network consisting of two RAAMs that
> _share_ the same hidden layer. They are useful for doing
> transformations on structured data.
>
> Here are a couple of questions:
>
> (1) Dual-Ported RAAMs are trained in three steps. First, you train one of
> the RAAMs to auto-associate on the input. Second, you train the other RAAM
> to auto-associate on the output. Finally, you train the whole network to
> associate the hidden-layer representation of the
> input with the output. Does anyone have any idea whether or not this kind
> of "partial" training (i.e., training parts of a network, which consists
> of specifying which units should be treated as inputs and outputs) is
> possible in Conx, and if so, could you please give me some pointers?
>
> (2) I read somewhere in the Conx documentation that the order in which you
> create layers must be the same order that you are planning on
> connecting things up. Does this make it impossible for Dual-Ported RAAMs
> to be implemented using Conx?
>
> Thanks in advance for your help!
>
> Cheers,
> Alex

Alex,

This is possible, as we have found the need to do things like this quite
often. There are two possibilities:

1) Define a new class that overrides the Network.step() method.

2) Create two Networks that "share" weights.

The first is fairly straightforward, but there are some interactions that
you have to handle properly. There is an example in
pyrobot/brain/governor.py. (Our governor doesn't really train on the
input/output patterns that you give it, but balances the data so that you
don't train on any one pattern too much. There is a paper on the idea at
http://cs.brynmawr.edu/~dblank/, the very last paper listed.)
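To illustrate the idea behind option 1 ("partial" training, where only
part of the network is updated on a given step), here is a toy sketch in
plain numpy -- this is NOT Conx code, and the names (train_step, W_enc,
W_dec, the `update` argument) are illustrative assumptions, not part of
any Conx API:

```python
import numpy as np

# Hypothetical sketch (plain numpy, not Conx): a tiny linear
# auto-associator where one training step can be told to update
# only selected weight matrices, leaving the others frozen.

rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(4, 2))   # input -> hidden
W_dec = rng.normal(scale=0.1, size=(2, 4))   # hidden -> output

def train_step(x, lr=0.1, update=("enc", "dec")):
    """One gradient step on reconstruction error, touching only
    the weight matrices named in `update`."""
    global W_enc, W_dec
    h = x @ W_enc                  # hidden activation (linear)
    y = h @ W_dec                  # reconstruction
    err = y - x                    # d(loss)/dy for 0.5*||y - x||^2
    grad_dec = np.outer(h, err)
    grad_enc = np.outer(x, err @ W_dec.T)
    if "dec" in update:
        W_dec -= lr * grad_dec
    if "enc" in update:
        W_enc -= lr * grad_enc
    return 0.5 * float(err @ err)

x = np.array([1.0, 0.0, 0.0, 1.0])
dec_before = W_dec.copy()
train_step(x, update=("enc",))     # train the encoder only
assert np.allclose(W_dec, dec_before)   # decoder left untouched
```

The same pattern scales to the three-phase Dual-Ported RAAM schedule:
each phase just names a different subset of the weights to update.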

The second is a method that we came up with to make complicated multi-step
training easier. It works like:

net1 = Network()
...
net2 = Network()
...
net1.shareWeights(net2, [["hidden", "output"]])

where shareWeights takes the other network and a list of pairs of layer
names that define a connection of weights. The governor also has an
example of two networks that share weights. Sharing can be very useful
when you have an SRN or an SRAAM (sequential RAAM), because you don't
have to undo the context copy.
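The effect of sharing can be demonstrated with a toy sketch in plain
numpy -- again, this is not Conx code, just the underlying idea: two
network objects hold the very same weight array, so an in-place update
made through one is immediately visible through the other (the class
name TinyNet and its methods are made up for the example):

```python
import numpy as np

# Hypothetical sketch (plain numpy, not Conx): two "networks" that
# share a weight matrix by holding the same array object.

class TinyNet:
    def __init__(self, weights):
        self.weights = weights        # may be the same array as another net's

    def forward(self, x):
        return np.tanh(x @ self.weights)

shared = np.zeros((3, 2))             # e.g. hidden -> output weights
net1 = TinyNet(shared)
net2 = TinyNet(shared)                # "shares" weights with net1

net1.weights += 0.5                   # in-place update through net1
assert np.allclose(net2.weights, 0.5) # change is visible through net2
```

This is why multi-step training schemes get simpler: each phase can
train whichever network is convenient, and the shared connection is
kept consistent for free.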

The order in which you make the connections will matter, but if you
handle it through one of the above methods, it shouldn't be a problem.

If you get stuck, email back to this list. Chances are also good that
someone else has tried Chrisman's experiment and may have some code or
ideas.

-Doug



_______________________________________________
Pyro-users mailing list
[email protected]
http://emergent.brynmawr.edu/mailman/listinfo/pyro-users
