Great, that's exactly what I was looking for! Thanks again.

Chris

On 11/29/06, Douglas S. Blank <[EMAIL PROTECTED]> wrote:
On Wed, November 29, 2006 10:36 pm, Chris S said:
> I'm not sure if this makes any sense, but is it possible to add,
> remove, or deactivate certain input nodes in a Conx neural network
> after it's been trained?
>
> For example, consider a network that takes 3-inputs representing
> colors red, green, and blue. The network has N outputs, each
> indicating whether or not the color arrangement matches a particular
> object. Suppose I've trained this network on a corpus to match r,g,b
> colors to N objects, but now I want to train it on a different corpus
> missing the color red.
>
> I know in this trivial example it would probably be easiest to build
> and train a separate network, but imagine if this example were scaled
> to a larger non-trivial network, one with dozens or hundreds of input
> nodes. In that scenario I've spent a considerable amount of time
> training the network, so I wouldn't want to start from scratch if I
> could help it.
>
> Would it be possible in this case to "turn-off" the red input, and
> train/propagate using only the other two colors, but still have the
> option to "re-activate" the input later to again use the network with
> all three colors?

Chris,

Yes, that makes sense, and we do similar things quite often. Conx (the
neural network toolkit in Pyro) has two concepts that can help here:
"frozen" weights and layers; and "active" weights and layers.

You can set any group of weights (a Connection object) or a layer (a Layer
object) so that the weights are "frozen". This just means that the weights
won't change, even if backprop says they should. (Layer objects contain
the bias/threshold weights, so those are what get frozen there.)
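For intuition, here is a generic sketch in plain Python (NOT the Conx API,
whose exact calls aren't shown here) of what "frozen" means: backprop still
computes its deltas, but they are simply never applied to the weights.

```python
# Generic sketch, not Conx: "frozen" means the backprop deltas are ignored.
def update_weights(weights, deltas, frozen=False):
    """Return the new weights; skip the update entirely when frozen."""
    if frozen:
        return list(weights)  # deltas are discarded; weights stay put
    return [w + d for w, d in zip(weights, deltas)]

w = [0.5, -0.2]
print(update_weights(w, [0.1, 0.1], frozen=True))   # weights unchanged
print(update_weights(w, [0.1, 0.1], frozen=False))  # weights move
```

Note the weights are returned unchanged, not zeroed: a frozen connection
still carries activation forward, it just stops learning.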

But it sounds like you want the second option. You can set a layer or
weight connection group to be active or not. If a layer/weight group is
not active (set equal to 0), then it won't participate in forward
propagation of activation, nor will it be involved in backprop of error
and weight changes.
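As a rough illustration of that second behavior (again plain Python, not
the Conx API): masking an input unit out of the forward pass is equivalent
to clamping it at zero, so it drives neither the activation flowing forward
nor, in training, the error flowing back through its weights.

```python
import math

# Generic sketch, not Conx: a per-unit "active" mask on the input layer.
# Inactive units are skipped in the weighted sum, so they contribute
# nothing forward (and would receive no weight updates during training).
def forward(inputs, weights, bias, active):
    """Weighted sum of the active inputs, squashed through a sigmoid."""
    net = bias + sum(w * x for w, x, a in zip(weights, inputs, active) if a)
    return 1.0 / (1.0 + math.exp(-net))

w = [0.8, -0.4, 0.3]                      # weights for red, green, blue
rgb = [1.0, 0.5, 0.2]
all_on = forward(rgb, w, 0.0, [1, 1, 1])  # all three colors participate
no_red = forward(rgb, w, 0.0, [0, 1, 1])  # "red" input deactivated
```

Re-activating the input is then just flipping its mask entry back to 1;
the red weights were never modified while it was off.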

To make sure you get the right effect, you can step through the network,
watching what changes, like this:

% python
>>> from pyrobot.brain.conx import *
Conx, version 1.233 (regular speed)
>>> net = Network()
Conx using seed: 1164951718.93
>>> net.addLayers(2, 3, 1)
>>> net.interactive = 1
>>> net.step(input = [0, 0], output = [0])
Display network 'Backprop Network':
=============================
Layer 'output': (Kind: Output, Size: 1, Active: 1, Frozen: 0)
Target    :  0.00
Activation:  0.52
=============================
Layer 'hidden': (Kind: Hidden, Size: 3, Active: 1, Frozen: 0)
Activation:  0.50 0.48 0.50
=============================
Layer 'input': (Kind: Input, Size: 2, Active: 1, Frozen: 0)
Activation:  0.00 0.00
>>> net.step(input = [0, 0], output = [0])

In the network display you can see which layers are active and frozen.

You can only turn entire layers off/on (each is really a "bank", since
these groups can sit next to each other rather than being literally
"layered"). However, you can have layers of single units, so individual
inputs can be toggled on their own.

Hope that helps,

-Doug

> Regards,
> Chris
> _______________________________________________
> Pyro-users mailing list
> [email protected]
> http://emergent.brynmawr.edu/mailman/listinfo/pyro-users
>


--
Douglas S. Blank
Associate Professor, Bryn Mawr College
http://cs.brynmawr.edu/~dblank/
Office: 610 526 601

