Hi Frédéric:
Thanks for your reply. I've solved this problem. Have a nice day :)

Best,
Tony

On Wed, Aug 31, 2016 at 12:04 AM, Frédéric Bastien <[email protected]> wrote:

> Are you sure all your filters and input images are floats? Something seems
> to be int64.
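>
> For example, checking and casting the data on the numpy side is usually
> enough (a minimal sketch, assuming X_train and X_test are numpy arrays
> loaded from integer-valued image data):
>
> import theano
>
> print X_train.dtype  # if this prints int64, the whole graph inherits it
> X_train = X_train.astype(theano.config.floatX)
> X_test = X_test.astype(theano.config.floatX)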
>
> Also, be sure to update Theano to the dev version.
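>
> (If you installed with pip, one way to get the dev version is something
> like: pip install --upgrade --no-deps git+https://github.com/Theano/Theano.git)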
>
> Fred
>
> On Aug 30, 2016 at 05:41, <[email protected]> wrote:
>
>> Hi, everyone:
>> I am writing CNN code, but I get an error when I compute T.grad in
>> Theano. I hope you can help me analyze it; I don't know why the error
>> occurs or how I should fix it. I would really appreciate any help! Here are
>> some details about the dataset: it contains many images, each with R, G, B
>> channels, and both the width and height of each image are 50. I am waiting
>> for your reply :)
>> Here is my code:
>>
>> # imports assumed by the code below
>> import numpy as np
>> import theano
>> import theano.tensor as T
>> from theano.tensor.shared_randomstreams import RandomStreams
>> from theano.tensor.nnet import conv2d
>> from theano.tensor.signal.pool import pool_2d
>>
>> # function definitions for the CNN
>> srng = RandomStreams()
>> def floatX(X):
>>     return np.asarray(X, dtype=theano.config.floatX)
>>
>> def init_weights(shape):
>>     return theano.shared(floatX(np.random.randn(*shape) * 0.01))
>>
>> def dropout(X, p_use=1.):
>>     if p_use < 1:
>>         p_sampled = srng.binomial(p=p_use, n=1, size=X.shape, dtype=theano.config.floatX)
>>         X = X * p_sampled / p_use
>>     return X
>>
>> def rectify(X):
>>     return T.maximum(X, 0.)
>>
>> def PRelu(X,a):
>>     return T.maximum(X, 0.) + a * T.minimum(X, 0.)
>>
>> def softmax(X):
>>     e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
>>     print e_x
>>     return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
>>
>> def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
>>     grads = T.grad(cost=cost, wrt=params)
>>
>>     updates = []
>>     for p, g in zip(params, grads):
>>         acc = theano.shared(p.get_value() * 0.)
>>         acc_new = rho * acc + (1 - rho) * g ** 2
>>         gradient_scaling = T.sqrt(acc_new + epsilon)
>>         g = g / gradient_scaling
>>         updates.append((acc, acc_new))
>>         updates.append((p, p - lr * g))
>>     return updates
>>
>> # model building
>> X = T.ftensor4('x')
>> Y = T.fmatrix('y')
>>
>> # parameters initialization
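>> # X_train and X_test are assumed to be loaded earlier (not shown in this snippet)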
>> X_train = X_train.reshape(-1, 3, 50, 50)
>> X_test = X_test.reshape(-1, 3, 50, 50)
>> W_conv1 = init_weights((4, 3, 5, 5))
>> b_conv1 = np.zeros((4,))
>> W_conv2 = init_weights((6, 4, 3, 3))
>> b_conv2 = np.zeros((6,))
>> W_fcn = init_weights((54, 70))
>> b_fcn = np.zeros((70,))
>> W_fcn2 = init_weights((70, 43))
>> b_fcn2 = np.zeros((43,))
>>
>> # convolution and pooling
>> maxpool_shape = (2, 2)
>> p_drop_input = 0.8
>> conv_layer1 = rectify(conv2d(X_train, W_conv1, border_mode='full'))
>> subsampling_layer1 = pool_2d(conv_layer1, maxpool_shape, ignore_border=True)
>> out_layer1 = subsampling_layer1
>> out_layer1 = dropout(subsampling_layer1, p_drop_input)
>>
>> p_drop_hidden = 0.6
>> conv_layer2 = rectify(conv2d(out_layer1, W_conv2, border_mode='valid'))
>> subsampling_layer2 = pool_2d(conv_layer2, maxpool_shape, ignore_border=True)
>> out_layer2 = dropout(subsampling_layer2, p_drop_hidden)
>> conv_out = T.flatten(out_layer2, outdim=2)
>>
>> # fully connected NN
>> hidden = rectify(T.dot(conv_out, W_fcn))
>> hidden = dropout(hidden, p_drop_hidden)
>>
>> py_x = softmax(T.dot(hidden, W_fcn2))
>> y_x = T.argmax(py_x, axis=1)
>>
>> # compute cost and update
>> cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
>> params = [W_conv1, W_conv2, W_fcn, W_fcn2]
>> print cost
>> print params
>> updates = RMSprop(cost, params, lr=0.001)
>>
>> And the error is that:
>>
>> Traceback (most recent call last):
>>   File "/PycharmProjects/CNN/CNN.py", line 180, in <module>
>>     updates = RMSprop(cost, params, lr=0.001)
>>   File "/PycharmProjects/CNN/CNN.py", line 116, in RMSprop
>>     grads = T.grad(cost=cost, wrt=params)
>>   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 561, in grad
>>     grad_dict, wrt, cost_name)
>>   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 1324, in _populate_grad_dict
>>     rval = [access_grad_cache(elem) for elem in wrt]
>>   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 1279, in access_grad_cache
>>     term = access_term_cache(node)[idx]
>>   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 1113, in access_term_cache
>>     input_grads = node.op.grad(inputs, new_output_grads)
>>   File "/Library/Python/2.7/site-packages/theano/tensor/nnet/abstract_conv.py", line 828, in grad
>>     d_bottom = bottom.type.filter_variable(d_bottom)
>>   File "/Library/Python/2.7/site-packages/theano/tensor/type.py", line 233, in filter_variable
>>     self=self))
>> TypeError: Cannot convert Type TensorType(float64, 4D) (of Variable AbstractConv2d_gradInputs{border_mode='full', subsample=(1, 1), filter_flip=True, imshp=(None, None, None, None), kshp=(None, None, None, None)}.0) into Type TensorType(int64, 4D). You can try to manually convert AbstractConv2d_gradInputs{border_mode='full', subsample=(1, 1), filter_flip=True, imshp=(None, None, None, None), kshp=(None, None, None, None)}.0 into a TensorType(int64, 4D).
>>
>> Process finished with exit code 1
>>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
