Yes, conx does not have a C API.

I'm sorry, I got confused by FANN, which is a C++ library with a wrapper for Python.

I assumed that conx was based on FANN.

I hope to see an incremental learning algorithm in conx.

Best wishes.


----- Original Message ----- From: "Dieter Vanderelst" <[EMAIL PROTECTED]>
To: "Douglas S. Blank" <[EMAIL PROTECTED]>; <pyro-users@pyrorobotics.org>
Sent: Friday, October 26, 2007 12:26 PM
Subject: Re: [Pyro-users] Using the SRN


Dear Douglas,

Thank you for your answer.
I have programmed a net based on your pointers. But I still have some troubles.

This is what I do:

I use the code at http://pyrorobotics.org/?page=SRNModuleExperiments to make an elman net.

Then I want to train this net by setting a *single* input and output pattern repeatedly:

for input_vector, output_vector in zip(input_vectors, output_vectors):
    network.setInputs([input_vector])
    network.setOutputs([output_vector])
    network.train()  # train the network some *more* on each pass


Is this possible? It seems like the net resets itself after each call of train(), since it considers each pass through this loop an epoch. Can this resetting be switched off?
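(For what it's worth, the distinction at play here can be sketched without conx at all. Below is a minimal, hypothetical example, a single linear unit that has nothing to do with conx internals, showing the difference between per-pattern updates and per-epoch updates:)

```python
# A minimal sketch (NOT conx): one linear unit y = w * x trained by
# gradient descent on squared error, to show the difference between
# updating after every pattern versus once per epoch.

def online_train(pairs, w=0.0, lr=0.1):
    """Per-pattern (online) updates: w moves after every single pattern."""
    for x, t in pairs:
        error = t - w * x
        w += lr * error * x
    return w

def batch_train(pairs, w=0.0, lr=0.1):
    """Per-epoch (batch) updates: accumulate the gradient, update once."""
    grad = sum((t - w * x) * x for x, t in pairs)
    w += lr * grad
    return w

pairs = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # data following t = 2 * x
print(online_train(pairs), batch_train(pairs))
```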


Regards,
Dieter



Douglas S. Blank wrote:
Dieter,

You can use as long of sequences as you want, even from a file.

See, for example, the section on off-line learning here:
http://pyrorobotics.org/?page=Building_20Neural_20Networks_20using_20Conx
or
http://pyrorobotics.org/?page=Robot_20Learning_20using_20Neural_20Networks

You can use loadDataFromFile, or loadInputsFromFile / loadTargetsFromFile.

If you want to look at hidden layer activations, perhaps the easiest method would be to use the SRN.propagate(input=[0,1,0,0,1]) form, and then look at the hidden layer. For example:

srn = SRN()
# .. add layers, train
srn.propagate(input=[0,1,0,0,1])
print srn["hidden"].activation
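(To make the idea concrete outside of conx: here is a minimal plain-Python sketch, with made-up weights and layer sizes, of what "propagate an input, then read off the hidden layer" means for a tiny 2-2-1 feedforward net:)

```python
# A minimal sketch (NOT conx): a tiny 2-2-1 feedforward net with
# made-up fixed weights, showing "propagate an input, then read off
# the hidden layer activations".
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def propagate(inputs, w_hidden, w_out):
    # hidden[j] = sigmoid(sum_i inputs[i] * w_hidden[i][j])
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, col)))
              for col in zip(*w_hidden)]
    output = sigmoid(sum(h * w for h, w in zip(hidden, w_out)))
    return hidden, output

w_hidden = [[1.0, -1.0], [0.5, 0.5]]  # 2 inputs x 2 hidden units
w_out = [1.0, 1.0]                    # 2 hidden units -> 1 output
hidden, output = propagate([0.0, 1.0], w_hidden, w_out)
# the hidden activations are now available for inspection
```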

Another way would be to extend the SRN class and override one of the methods, like postBackprop:

from pyrobot.brain.conx import *
class MySRN(SRN):
    def postBackprop(self, **args):
        print self["hidden"].activation
        SRN.postBackprop(self, **args)

and use the MySRN class exactly the way that you would the SRN class. That would allow you to examine the hidden layer during processing.
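(The override pattern itself can be illustrated in plain Python, using a hypothetical Trainer class rather than the conx API: the base class calls a hook at a fixed point in processing, and the subclass records data there before deferring to the parent:)

```python
# A minimal sketch (NOT the conx API): the subclass-and-override hook
# pattern. The base class calls a hook at a fixed point; the subclass
# records the "hidden layer" there and then defers to the parent.

class Trainer(object):
    def __init__(self):
        self.activations = [0.1, 0.9]  # stand-in for a hidden layer
    def postBackprop(self, **args):
        pass                           # base hook: do nothing
    def step(self):
        # ... backprop would happen here ...
        self.postBackprop()            # hook runs on every step

class LoggingTrainer(Trainer):
    def __init__(self):
        Trainer.__init__(self)
        self.log = []
    def postBackprop(self, **args):
        self.log.append(list(self.activations))  # record, then defer
        Trainer.postBackprop(self, **args)

t = LoggingTrainer()
t.step()
t.step()  # t.log now holds one snapshot per training step
```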

You can set batch to 0 and you shouldn't have any problem, either way.

Hope that helps,

-Doug

Dieter Vanderelst wrote:
Hi,

I need some advice on the use of SRNs (simple recurrent networks).

I know what the network does but I need some help on the Pyro implementation.
This is what I want to do with the net:
-I want to train an SRN using a single (very long) sequence of patterns. The examples I could find on SRNs all define a small number of patterns and build a sequence of these on the fly. However, I will read a single long sequence of patterns from a file (experimental data).

-Second, I want to analyze the activation of the hidden nodes in response to each different input pattern. To do this, I want to present the net with a long random sequence of input patterns and register the activations.

-I don't want the network to be trained using batch updating. Given my problem, batch updating is senseless.

So, could somebody assist me in finding the best settings for these requirements?

Thanks,
Dieter Vanderelst

_______________________________________________
Pyro-users mailing list
Pyro-users@pyrorobotics.org
http://emergent.brynmawr.edu/mailman/listinfo/pyro-users



