I don't see any float64 in the debugprint result.
Inspecting the code, I am only using floatX, e.g.:
self.x = theano.shared(name='gx', value=x1.astype(theano.config.floatX))
I did cast various indices to int32, but in profiling they seem to be
converted into int64.
Will make all the changes based on
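The int64 showing up in profiling despite the explicit int32 casts is likely dtype promotion: Theano represents Python integer constants and shapes as int64, and combining int32 with int64 upcasts to int64 under NumPy's promotion rules, which Theano follows. A minimal NumPy sketch of the effect (variable names are illustrative):

```python
import numpy as np

# Explicitly int32 indices, as in the code above.
idx32 = np.array([0, 2, 4], dtype=np.int32)

# An int64 operand, e.g. an offset derived from a shape or a Python constant.
ones64 = np.ones(3, dtype=np.int64)

# int32 combined with int64 is promoted to int64.
shifted = idx32 + ones64
print(shifted.dtype)  # int64
```

So the cast to int32 can be silently undone by any later arithmetic that mixes in an int64 operand.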
Hi,
do you use floats? I meant float32. The old back-end only supports
float32, so if you use float64 or int32, nothing will be computed on the GPU.
The new back-end supports many dtypes, including float64 and int*, so it
should work better.
Note that if you do an operation between float32 and int32, the
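The truncated note above presumably refers to dtype promotion: an operation between float32 and int32 upcasts to float64 under NumPy's rules, which Theano follows, and float64 then keeps the computation off the old GPU back-end. A minimal NumPy sketch:

```python
import numpy as np

f32 = np.ones(3, dtype=np.float32)
i32 = np.ones(3, dtype=np.int32)

# float32 combined with int32 is promoted to float64, because float32
# cannot represent every int32 value exactly.
out = f32 + i32
print(out.dtype)  # float64
```

Casting the integer operand to int16 or smaller (or the result back with a `cast`) keeps the computation in float32.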
Thank you, Fred.
Yes, I am using device=gpu0. I will switch to the new backend and test again.
On float64, do you mean int64? If so, I am puzzled by that too. In my code I
never explicitly cast to int64. Instead I use tensor.ivector() to index
matrices and cast the indices explicitly to int32. For example:
My guess is that you are using the old GPU backend. Can you confirm that you
use the Theano flag device=gpu, and also that you have float64 in the graph?
The old backend doesn't support float64. I suggest that you install the
just-released 0.10 beta and use the new backend with device=cuda.
Also, you can
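The backend switch suggested above is usually selected via the THEANO_FLAGS environment variable (or the equivalent entries in ~/.theanorc); a hedged sketch, with the script name being illustrative:

```shell
# Select the new libgpuarray backend and float32 computation.
# device=cuda (new backend) replaces device=gpu (old backend).
THEANO_FLAGS=device=cuda,floatX=float32 python train.py
```

Pinning floatX=float32 also keeps shared variables created with astype(theano.config.floatX) on the GPU-friendly dtype.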