I was able to confirm with a simple test example that scan does not do 
this by default. I also found that calling theano.tensor.grad within the 
step function passed to scan does not perform backpropagation through time, 
only backprop within that single timestep (which is obviously no help in 
this case).
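For reference, here is roughly the kind of minimal test I mean (the toy 
one-unit linear model, the names, and the learning rate are placeholders, 
not my exact code): T.grad is called inside the step function on that 
step's loss, and the shared weight is updated every iteration through the 
updates dictionary that scan allows the step function to return. The 
gradient at each step only covers that step's graph, because the previous 
hidden state arrives as an input to scan's inner graph.

import numpy as np
import theano
import theano.tensor as T

# Toy one-unit linear "RNN"; names/shapes are illustrative only.
W = theano.shared(np.float32(0.5), name='W')
lr = np.float32(0.01)

x_seq = T.fvector('x_seq')   # one input value per timestep
y_seq = T.fvector('y_seq')   # one target value per timestep

def step(x_t, y_t, h_tm1):
    h_t = h_tm1 * W + x_t
    loss_t = (h_t - y_t) ** 2
    # This gradient only sees the current step's graph: h_tm1 is an
    # input to the inner graph, so nothing flows back to earlier steps.
    g_t = T.grad(loss_t, W)
    # Returning a dict as the second value asks scan to update the
    # shared variable W once per iteration.
    return h_t, {W: W - lr * g_t}

h_seq, updates = theano.scan(step,
                             sequences=[x_seq, y_seq],
                             outputs_info=[T.constant(np.float32(0.))])

train = theano.function([x_seq, y_seq], h_seq, updates=updates)

train(np.arange(5, dtype=np.float32), np.ones(5, dtype=np.float32))
print(W.get_value())   # W changes once per timestep, but each update
                       # only used that single step's gradient.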

Any ideas?

On Thursday, September 22, 2016 at 7:15:15 PM UTC-4, Jason T wrote:
>
> I'm trying to train an RNN such that the weights update after each step in 
> the sequence - i.e. if you have really long input and label sequences, you 
> should be able to train on intermediate results instead of waiting until 
> the end. There is nothing in the documentation about this. Does scan 
> already do this or does it wait until the end of the sequence before doing 
> BPTT for each step? I tried changing the values of the shared variables 
> (weights) in the step function given to scan, but it does not appear to be 
> changing anything. Any ideas?
>
> Thanks,
>
> Jason
>
