Hey Fred,
good catch. Yeah, T.fmatrix() is already float32 by default, so changing it 
with .astype() wouldn't accomplish anything. But the fact that T.mean() casts 
a float32 input to float64, applies the mean operator, and then recasts the 
output to float32 isn't efficient either. It's a waste of computation and 
time.
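The pattern being described can be illustrated in NumPy — a sketch of the upcast/accumulate/downcast round trip, not Theano's actual implementation:

```python
import numpy as np

a = np.ones(1000, dtype=np.float32)

# the behaviour being complained about: accumulate in float64,
# then cast the result back down to the input's float32 dtype
m_upcast = a.mean(dtype=np.float64).astype(np.float32)

# a pure-float32 mean avoids the extra conversion passes
m_f32 = a.mean(dtype=np.float32)

print(m_upcast.dtype, m_f32.dtype)  # float32 float32
```

Both results land in float32 either way; the difference is only the extra precision (and extra work) of the float64 accumulator in between.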

On Tuesday, September 20, 2016 at 11:10:36 PM UTC+1, nouiz wrote:
>
> On 20 Sept 2016 at 10:20, <martin.de...@gmail.com> wrote:
> >
> > I'll answer my own question. I've figured out where the error was.
> >
> > 1. I had to explicitly define the symbolic variables as type float32,
> > e.g. X = T.fmatrix('X').astype('float32')
>
> This is not useful. The astype should not change anything here. Did you 
> change Theano? What is the output of T.fmatrix().type and X.type? It should 
> be the same.
>
> >
> > Which is really a bummer if you ask me. I thought the whole point of 
> having a .theanorc file that defines your type as float32 was to avoid 
> having to explicitly declare the variables as float32, let alone operations 
> such as mean() and sum(), where you also have to define the data type as 
> float32.
> >
> > 2. The other change was to either cast the data to float32 before 
> passing it to the predict() function, or explicitly set the 
> allow_input_downcast flag to True in theano.function(), or both.
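As an aside, the reason a cast (or allow_input_downcast=True) is needed at all is that NumPy's random generators produce float64 by default, so the data never matched the compiled float32 inputs. A minimal illustration:

```python
import numpy as np

# np.random.randn always returns float64, which a function compiled
# for float32 inputs will reject unless it is allowed to downcast
D0 = np.random.randn(4, 3)
assert D0.dtype == np.float64

# explicit cast at the boundary, so the compiled graph stays float32
D0_f32 = D0.astype(np.float32)
assert D0_f32.dtype == np.float32
```

Casting once at the boundary keeps everything downstream in float32 without relying on the downcast flag.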
> >
> > I would really like to define the type once, in .theanorc for instance, 
> and have everything else take care of itself, so that we could focus on 
> higher, more abstract tasks rather than always having to keep track of 
> what each variable casts to.
> >
> > I hope the Theano developers can have a look at this and possibly 
> add it as an enhancement in the next release.
> > I would also love to see some refactoring of the library, since as a 
> user it feels a little cluttered to me. (I'm not trying to be judgmental, 
> don't take this the wrong way!)
> > Finally, I would love to have support for newer compilers (e.g. gcc6, 
> icc/icpc) and libraries (e.g. intel-mkl).
> >
> > My thanks to the community. Keep up the good work!
> >
> >
> > On Tuesday, September 20, 2016 at 2:29:58 PM UTC+1, 
> martin.de...@gmail.com wrote:
> >>
> >> Hi everyone, I've been having some trouble lately trying to understand 
> why on earth Theano keeps changing my variable types without being 
> explicitly instructed to do so.
> >> I've set my .theanorc to use float32 and also to raise on any use of 
> float64 via the flag warn_float64=raise.
> >>
> >> Here is a minimal example in which I cannot tell where the change of 
> variable type is coming from.
> >>
> >> import numpy as np
> >> import theano
> >> from theano import tensor as T
> >> floatX = theano.config.floatX
> >>
> >> N = 400                                   
> >> feats = 784                               
> >> n_classes = 2
> >>
> >> D = (np.random.randn(N, feats), np.random.randint(size=N, low=0, high=2))
> >>
> >> training_steps = 10000
> >>
> >> X = T.fmatrix("X")
> >> y = T.ivector("y")
> >>
> >> W = theano.shared(np.random.randn(feats, n_classes).astype(dtype=floatX),
> >>                   name="W")
> >>
> >> b = theano.shared(np.zeros(W.get_value().shape[1]).astype(dtype=floatX),
> >>                   name="b")
> >>
> >> p_1 = 1./(1 + T.exp(-T.dot(X, W) - b))
> >>
> >> prediction = p_1 > 0.5
> >>
> >> cost = T.mean(-y * T.log(p_1) - (1-y) * T.log(1-p_1),
> >>               dtype=floatX, acc_dtype=floatX)
> >>
> >> cost += 0.01 * T.sum(W ** 2, dtype=floatX)
> >>
> >> gW, gb = T.grad(cost, [W, b])
> >>
> >> train = theano.function(inputs=[X, y], outputs=[prediction, cost],
> >>                         updates=((W, W - 0.1 * gW), (b, b - 0.1 * gb)))
> >>
> >> predict = theano.function(inputs=[X], outputs=prediction)
> >>
> >> for i in range(training_steps):
> >>     pred, err = train(D[0], D[1])
> >>
> >> print("Final model:")
> >> print(W.get_value())
> >> print(b.get_value())
> >> print("target values for D:")
> >> print(D[1])
> >> print("prediction on D:")
> >> print(predict(D[0]))
> >>
> >>
> >> I always get an exception:
> >>
> >> Exception: You are creating a TensorVariable with float64 dtype. You 
> requested an action via the Theano flag 
> warn_float64={ignore,warn,raise,pdb}.
> >>
> >> The exception points to the cost for the cross-entropy error.
> >>
> >> But I've explicitly told it to use float32, both for the inner summation 
> in the mean operator and for the cast of the output.
> >>
> >> I don't understand where this exception is coming from.
> >>
> >> Pretty much everything has been set to float32.
> >>
> >> Any help is much appreciated!
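One candidate culprit — my reading of the graph, not a confirmed diagnosis: under NumPy's type-promotion rules, which Theano follows for this case, multiplying the int32 `y` by the float32 `p_1` yields float64 before mean() ever sees it, so the dtype/acc_dtype arguments cannot prevent the upcast. A minimal NumPy illustration:

```python
import numpy as np

y = np.arange(4, dtype=np.int32)           # like the ivector y
p = np.random.rand(4).astype(np.float32)   # like the float32 p_1

# int32 * float32 promotes to float64 under NumPy's casting rules
out = y * p
print(out.dtype)  # float64
```

If that is what is happening here, casting `y` to float32 before the product (or computing the cross-entropy from float32 operands throughout) would keep the graph in float32.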
> >>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
