Hello,

I am trying out an architecture for video sequences where a small CNN 
extracts features from each frame and then feeds them to an LSTM.

|RNN|->|RNN|->|RNN|->|RNN|->|RNN|->|RNN|
  |      |      |      |      |      |
|CNN|  |CNN|  |CNN|  |CNN|  |CNN|  |CNN|  

If possible, I would like to train the CNN and the LSTM jointly on full 
sequences (2000 frames).
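
In a single Theano graph that would look roughly like the sketch below 
(untested; build_cnn, lstm_step, some_loss, the parameter lists, the initial 
states h0/c0 and the learning rate lr all stand in for my actual code):

    import theano
    import theano.tensor as T

    frames = T.tensor4('frames')              # (n_frames, channels, height, width)
    targets = T.ivector('targets')

    # placeholder CNN: one feature vector per frame, shared parameters in cnn_params
    feats, cnn_params = build_cnn(frames)

    # placeholder LSTM step applied over the per-frame features
    (h_seq, c_seq), _ = theano.scan(lstm_step,
                                    sequences=feats,
                                    outputs_info=[h0, c0])
    loss = some_loss(h_seq, targets)          # placeholder loss over the whole sequence

    params = cnn_params + lstm_params
    grads = T.grad(loss, params)              # keeping the CNN intermediates for all
                                              # 2000 frames is what blows up the memory
    train = theano.function([frames, targets], loss,
                            updates=[(p, p - lr * g) for p, g in zip(params, grads)])
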
The intermediate values the CNN forward pass has to store for backpropagation 
are too large to keep for a 2000-frame sequence, so I would like to know 
whether the training can be split up as follows:

   1. feed the frames forward through the CNN, but only store backpropagation 
   data for a subset of the frames
   2. propagate forward through the LSTM as usual
   3. backpropagate through the LSTM and update its parameters as usual
   4. backpropagate down into the CNN for the frames that belong to the 
   subset and update its parameters
   
theano.gradient.grad has a known_grads argument; maybe that could help?
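
Concretely, for the steps above I am imagining something along these lines 
(completely untested; build_cnn, build_lstm, lr and the parameter lists are 
placeholders, and I am assuming build_cnn reuses the same shared parameters 
every time it is called):

    import theano
    import theano.tensor as T

    # step 1: plain forward pass over all frames, nothing kept for backprop
    frames = T.tensor4('frames')              # (n_frames, channels, height, width)
    feats, cnn_params = build_cnn(frames)     # placeholder CNN, shared parameters
    cnn_forward = theano.function([frames], feats)

    # steps 2-3: LSTM on the precomputed features; also return the gradient of
    # the loss w.r.t. the features so it can be pushed into the CNN afterwards
    feats_in = T.matrix('feats_in')           # (n_frames, feat_dim)
    loss, lstm_params = build_lstm(feats_in)  # placeholder LSTM + loss
    g_feats = T.grad(loss, feats_in)
    lstm_grads = T.grad(loss, lstm_params)
    run_lstm = theano.function(
        [feats_in], [loss, g_feats],
        updates=[(p, p - lr * g) for p, g in zip(lstm_params, lstm_grads)])

    # step 4: backprop through the CNN for the chosen frames only, injecting
    # the gradient coming from the LSTM via known_grads
    sub_frames = T.tensor4('sub_frames')
    sub_feats, _ = build_cnn(sub_frames)      # assumes the same shared parameters are reused
    g_sub = T.matrix('g_sub')                 # rows of g_feats for the subset
    cnn_grads = theano.grad(cost=None, wrt=cnn_params,
                            known_grads={sub_feats: g_sub})
    update_cnn = theano.function(
        [sub_frames, g_sub],
        updates=[(p, p - lr * g) for p, g in zip(cnn_params, cnn_grads)])

Per sequence I would then call feats = cnn_forward(seq), then 
loss_val, g = run_lstm(feats), pick a random subset idx of frame indices, and 
finally update_cnn(seq[idx], g[idx]).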


Regards,

Nicolas


