The problem isn't the call to mean; it's this expression:

-y * T.log(p_1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/u/bastienf/repos/theano/tensor/var.py", line 164, in __mul__
    return theano.tensor.mul(self, other)
  File "/u/bastienf/repos/theano/gof/op.py", line 602, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/u/bastienf/repos/theano/tensor/elemwise.py", line 616, in make_node
    out_broadcastables)]
  File "/u/bastienf/repos/theano/gof/type.py", line 404, in __call__
    return utils.add_tag_trace(self.make_variable(name))
  File "/u/bastienf/repos/theano/tensor/type.py", line 432, in make_variable
    return self.Variable(self, name=name)
  File "/u/bastienf/repos/theano/tensor/var.py", line 824, in __init__
    raise Exception(msg)
Exception: You are creating a TensorVariable with float64 dtype. You
requested an action via the Theano flag
warn_float64={ignore,warn,raise,pdb}.

y is int32. We respect C and numpy upcast rules: int32 * float32 = float64.
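
You can check the rule with plain numpy (an illustrative snippet, not part
of your script):

import numpy as np

# float32 cannot represent every int32 value exactly, so numpy upcasts:
print(np.result_type(np.int32, np.float32))  # float64
# every int16 value fits in float32, so the result stays float32:
print(np.result_type(np.int16, np.float32))  # float32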

Can you change the type of y to int16? Does int16 have enough precision?
If so, this would fix your problem.
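
Something like this, for example (a sketch, not tested against your full
script; T.wvector is Theano's int16 vector type):

y = T.wvector("y")  # int16 labels instead of T.ivector (int32)

# cast the labels on the numpy side to match:
D = (np.random.randn(N, feats).astype(floatX),
     np.random.randint(size=N, low=0, high=2).astype('int16'))

# If int16 is not enough precision, you can instead cast y inside the
# graph so the multiplication happens in float32:
# y_f = T.cast(y, floatX)
# cost = T.mean(-y_f * T.log(p_1) - (1 - y_f) * T.log(1 - p_1),
#               dtype=floatX, acc_dtype=floatX)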

On Tue, Sep 20, 2016 at 9:29 AM, <martin.delgado1...@gmail.com> wrote:

> Hi everyone, I've been having some trouble lately trying to understand why
> the hell Theano keeps changing my variable types without being explicitly
> instructed to do so.
> I've set float32 in my .theanorc and also told Theano to raise an error on
> any use of float64 via the flag warn_float64=raise.
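>
> (For reference, the relevant .theanorc section looks roughly like this:)
>
> [global]
> floatX = float32
> warn_float64 = raise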
>
> Here is a minimal example; I cannot see where the change of variable type
> is coming from.
>
> import numpy as np
> import theano
> from theano import tensor as T
> floatX = theano.config.floatX
>
> N = 400
> feats = 784
> n_classes = 2
>
> D = (np.random.randn(N, feats).astype(floatX),  # match X, which is float32
>      np.random.randint(size=N, low=0, high=2).astype('int32'))  # match y
>
> training_steps = 10000
>
> X = T.fmatrix("X")
> y = T.ivector("y")
>
> W = theano.shared(np.random.randn(feats, n_classes).astype(dtype=floatX),
>                   name="W")
>
> b = theano.shared(np.zeros(W.get_value().shape[1]).astype(dtype=floatX),
>                   name="b")
>
> p_1 = 1./(1 + T.exp(-T.dot(X, W) - b))
>
> prediction = p_1 > 0.5
>
> cost = T.mean(-y * T.log(p_1) - (1-y) * T.log(1-p_1),
>               dtype=floatX, acc_dtype=floatX)
>
> cost += 0.01 * T.sum(W ** 2, dtype=floatX)
>
> gW, gb = T.grad(cost, [W, b])
>
> train = theano.function(inputs=[X, y], outputs=[prediction, cost],
>                         updates=((W, W - 0.1 * gW), (b, b - 0.1 * gb)))
>
> predict = theano.function(inputs=[X], outputs=prediction)
>
> for i in range(training_steps):
>     pred, err = train(D[0], D[1])
>
> print("Final model:")
> print(W.get_value())
> print(b.get_value())
> print("target values for D:")
> print(D[1])
> print("prediction on D:")
> print(predict(D[0]))
>
>
> I always get an exception:
>
> Exception: You are creating a TensorVariable with float64 dtype. You
> requested an action via the Theano flag warn_float64={ignore,warn,
> raise,pdb}.
>
> It points to the cost expression for the cross-entropy error.
>
> But I've explicitly told Theano to use float32 both for the inner summation
> in the mean operator and for casting the output.
>
> I don't understand where this exception is coming from.
>
> Pretty much everything has been set up as float32.
>
> Any help is much appreciated!
>