If your model returns the embedding matrix in self.model.params here
https://github.com/attapol/nn_discourse_parser/blob/534a633d87d671126f135ccebfaa9817947730a7/nets/learning.py#L186
then it will be updated, since every parameter returned on that line gets 
optimized.

If you don't want it to be optimized, you'll need to remove it from that 
list, or remove its entries from self.sgs_updates and 
self.parameter_updates, which get passed as the updates to your train 
function.
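In other words, whatever you leave out of the updates list stays static: 
Theano only modifies a shared variable if an update pair for it is passed 
to theano.function. Here's a minimal standalone sketch of that idea (not 
the parser's actual code; the model, the name `trainable_params`, and all 
the sizes are placeholders I made up), using Adagrad-style updates to 
mirror the sgs_updates/parameter_updates split:

import numpy as np
import theano
import theano.tensor as T

# Placeholder sizes -- in practice these come from your pretrained embeddings.
vocab_size, embed_dim, n_classes = 1000, 50, 2
floatX = theano.config.floatX

# Shared variables: the pretrained embedding matrix plus the weights you
# actually want to train.
embeddings = theano.shared(
    np.random.randn(vocab_size, embed_dim).astype(floatX), name='embeddings')
W = theano.shared(np.zeros((embed_dim, n_classes), dtype=floatX), name='W')
b = theano.shared(np.zeros(n_classes, dtype=floatX), name='b')

idxs = T.ivector('idxs')  # word indices for one example
y = T.iscalar('y')        # gold label

# Embedding lookup + averaging, then a softmax layer.
x = embeddings[idxs].mean(axis=0)
p_y = T.nnet.softmax((T.dot(x, W) + b).dimshuffle('x', 0))
cost = -T.log(p_y[0, y])

# The key point: only parameters in this list receive updates.
# Leaving `embeddings` out keeps it static.
trainable_params = [W, b]

lr = 0.1
grads = T.grad(cost, trainable_params)

# Adagrad-style updates, mirroring the trainer's sgs/parameter split:
# one accumulator of squared gradients per trainable parameter.
sgs = [theano.shared(np.zeros_like(p.get_value())) for p in trainable_params]
sgs_updates = [(acc, acc + g ** 2) for acc, g in zip(sgs, grads)]
parameter_updates = [(p, p - lr * g / T.sqrt(acc + g ** 2 + 1e-8))
                     for p, acc, g in zip(trainable_params, sgs, grads)]

train = theano.function([idxs, y], cost,
                        updates=sgs_updates + parameter_updates)

Since `embeddings` never appears on the left side of any update pair, 
Theano never modifies it, and T.grad isn't even asked for its gradient.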

On Tuesday, February 21, 2017 at 5:39:35 AM UTC-8, Jimmy wrote:
>
> Hi,
>
> I have limited experience with Theano, but have found myself in the 
> position of modifying a feed-forward neural network with an initial word 
> embedding layer. The goal is to keep the word embeddings static rather 
> than tuning their weights during training. In Keras this is simply done 
> by passing `trainable=False` 
> <https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html> 
> when loading the embedding weights, but I'm unsure how to do this 
> directly in Theano.
>
> The code I'm modifying can be found here 
> <https://github.com/attapol/nn_discourse_parser/blob/master/nets/learning.py>; 
> it is the AdagradTrainer class starting on line 175. I don't expect you 
> to analyze the code and come up with a solution, but I would very much 
> appreciate a pointer to a minimal example of how embedding weights are 
> updated and how to keep them static.
>
> Thank you,
> Jimmy
>
