OK, thanks.
self.truncate_gradient should not be a Python float; it should be an
integer. This is probably why the dtype of grad_steps is float64
instead of int64 (or another integer dtype).
Do you have any idea why self.truncate_gradient would not be -1 (the
default value)? Did you set
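Pascal's diagnosis can be reproduced outside Theano: NumPy, whose casting rules Theano largely follows, promotes the result of an elementwise op like minimum to float64 as soon as one operand is a Python float. A minimal sketch (the values and variable names here are illustrative, not taken from the original code):

```python
import numpy as np

# An integer step count, as grad_steps should be.
n_steps = np.int64(20)

# If truncate_gradient is accidentally a Python float...
truncate_gradient = 5.0

# ...the elementwise minimum is promoted to float64, which later
# breaks integer-only operations such as slicing a sequence.
grad_steps = np.minimum(n_steps, truncate_gradient)
print(grad_steps.dtype)  # float64

# Casting to int first keeps the result an integer dtype.
grad_steps_ok = np.minimum(n_steps, int(truncate_gradient))
print(grad_steps_ok.dtype)  # int64
```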
Thanks a lot Pascal, I have solved the problem now. The issue was that
self.truncate_gradient was a float instead of an int.
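For readers hitting the same symptom: a float leaking into an index usually surfaces as a TypeError at slicing time, and the fix is to cast at the boundary where the value is stored. A hypothetical sketch (the class and parameter names are made up for illustration, not the poster's actual code):

```python
class LSTMLayer:
    def __init__(self, truncate_gradient=-1):
        # Defensive cast: configuration values that pass through
        # arithmetic or file parsing often arrive as Python floats.
        self.truncate_gradient = int(truncate_gradient)

# Slicing with a float fails; slicing with an int works.
seq = list(range(10))
try:
    seq[:3.0]
except TypeError as e:
    print("float index rejected:", e)
print(seq[:int(3.0)])  # [0, 1, 2]
```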
--
---
You received this message because you are subscribed to the Google Groups
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to theano-users+unsubscribe@googlegroups.com.
The dtype of grad_steps and s_ is float64, while self.truncate_gradient
is a Python float.
Sorry I didn't answer it properly previously.
Thanks
OK, but what is the `dtype` (data type) of those variables?
On 2018-03-06 01:48 PM, Siddhartha Saxena wrote:
grad_steps itself has the value "Elemwise{minimum,no_inplace}.0". So
here a tensor s_ (of type Subtensor{::int64}.0) is being sliced by a
variable. Again, how it is reaching there is what I am unable to
understand.
Thanks
Can you open that trace in pdb, and let us know what the dtype of
`grad_steps` here is?
What about the dtype of `self.truncate_gradient`?
And the type of `s_`?
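The kind of inspection Pascal is asking for can be done directly at the (Pdb) prompt, e.g. `p grad_steps.dtype`, `p type(self.truncate_gradient)`, `p s_.type`. A small helper sketch in the same spirit (the inspected values are stand-ins, not the actual scan variables):

```python
import numpy as np

def describe(name, value):
    # Report the Python type and, if present, the dtype of a value,
    # mirroring what one would print at the (Pdb) prompt.
    dtype = getattr(value, "dtype", None)
    return f"{name}: type={type(value).__name__}, dtype={dtype}"

# A Python float has no dtype attribute at all...
print(describe("truncate_gradient", 5.0))
# ...while a NumPy result carries the (possibly promoted) dtype.
print(describe("grad_steps", np.minimum(np.int64(20), 5.0)))
```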
On 2018-03-03 03:12 AM, Siddhartha Saxena wrote:
Hi
I am training a custom LSTM model on Theano, with LSTM layers as in
(https://github.com/asheshjain399/NeuralModels/tree/master/neuralmodels/layers/LSTM.py)
and
(https://github.com/asheshjain399/NeuralModels/blob/master/neuralmodels/layers/multilayerLSTM.py).
Now the model that I have