Hi Pascal,
Thanks for your help! You were right: the error was the size of the
fully-connected layer. I've adjusted it. Have a nice day :)
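For readers hitting the same mismatch, the fully-connected layer size can be derived from the shapes in the code quoted below. Here is a small sketch (the two helper functions are my own, assuming Theano's conv2d border semantics and pool_2d with ignore_border=True):

```python
# Shape arithmetic for the network quoted below.
# Assumptions: conv2d with border_mode='full' grows each side by k - 1,
# 'valid' shrinks it by k - 1, and pool_2d with ignore_border=True floors.

def conv_out(size, k, mode):
    return size + k - 1 if mode == "full" else size - k + 1

def pool_out(size, p):
    return size // p

s = 50                       # input images are (3, 50, 50)
s = conv_out(s, 5, "full")   # conv1, 5x5, border_mode='full' -> 54
s = pool_out(s, 2)           # 2x2 max-pooling -> 27
s = conv_out(s, 3, "valid")  # conv2, 3x3, border_mode='valid' -> 25
s = pool_out(s, 2)           # 2x2 max-pooling -> 12

flat = 6 * s * s             # 6 feature maps of 12x12 -> 864
print(flat)                  # 864
```

This matches the ValueError quoted below: the batch of 20 flattened activations has 864 columns, so W_fcn needs shape (864, 70) rather than (54, 70).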

Best,
Tony

On Wed, Aug 31, 2016 at 12:31 AM, Pascal Lamblin <[email protected]>
wrote:

> Is it possible that some weight matrix does not have the right size?
> It looks like it expects an input of size 54 but receives 864 instead.
>
> If you took the code from the MNIST tutorial and changed the input
> shape, you will have to at least adjust the size of the fully-connected
> layer. For better results, you may need to change the architecture of
> the network (number of layers, filter size, pooling factor, ...).
>
> On Tue, Aug 30, 2016, Tony Lee wrote:
> > Hi Pascal,
> > Thanks for your reply, which really helped me! I have solved that
> > problem now, but I have run into another one while training the model.
> > The original data are RGB images, each of shape (3, 50, 50).
> > First I reshape X_train and X_test, as you can see in my code. After
> > building the model, I want to train it. Here is my training code:
> >
> > print "training and result:"
> >
> > for i in range(1):
> >     print "iteration %d" % (i + 1)
> >     for start in range(0, len(X_train), batch_size):
> >         x_batch = X_train[start:start + batch_size]
> >         y_batch = y_train[start:start + batch_size]
> >         cost = train(x_batch, y_batch)
> >
> >     predictions_test = predict(X_test)
> >     accuracy = np.mean(predictions_test == y_test)
> >     print "accuracy: %.5f" % accuracy
> >
> >
> > I get the following error when I run this code:
> >
> > Traceback (most recent call last):
> >   File "/CNN.py", line 194, in <module>
> >     cost = train(x_batch, y_batch)
> >
> > ValueError: ('matrices are not aligned', (20, 864), (54, 70))
> > Apply node that caused the error: Dot22(Reshape{2}.0,
> > <TensorType(float64, matrix)>)
> > Toposort index: 49
> > Inputs types: [TensorType(float64, matrix), TensorType(float64, matrix)]
> > Inputs shapes: [(20, 864), (54, 70)]
> > Inputs strides: [(6912, 8), (560, 8)]
> > Inputs values: ['not shown', 'not shown']
> > Outputs clients: [[Elemwise{maximum,no_inplace}(Dot22.0,
> > TensorConstant{(1, 1) of 0.0}), Elemwise{Composite{(i0 * EQ(i1, i2) *
> > i3)}}[(0, 0)](Dot22Scalar.0, Elemwise{maximum,no_inplace}.0, Dot22.0,
> > Elemwise{Composite{Cast{float64}(LT(i0, i1))}}[(0, 0)].0)]]
> >
> > HINT: Re-running with most Theano optimization disabled could give you
> > a back-trace of when this node was created. This can be done with by
> > setting the Theano flag 'optimizer=fast_compile'. If that does not
> > work, Theano optimizations can be disabled with 'optimizer=None'.
> > HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint
> > and storage map footprint of this apply node.
> >
> >
> > Is it because I am training the model the wrong way? Could you
> > please tell me how to train it? What data format should I pass to
> > "cost = train(x_batch, y_batch)"? I am looking forward to your
> > reply. Thank you!
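For what it's worth, given the symbolic inputs X = T.ftensor4('x') and Y = T.fmatrix('y') in the code quoted further down, the batch shapes can be sketched as follows (assuming train was compiled as a theano.function over [X, Y]; the one-hot construction here is my own illustration):

```python
import numpy as np

batch_size = 20
n_classes = 43  # matches W_fcn2 with shape (70, 43) in the model code

# x_batch: float32 4D array (batch, channels, height, width) for T.ftensor4
x_batch = np.zeros((batch_size, 3, 50, 50), dtype=np.float32)

# y_batch: float32 one-hot matrix (batch, n_classes) for T.fmatrix,
# as expected by categorical_crossentropy with a matrix target
labels = np.random.randint(0, n_classes, size=batch_size)
y_batch = np.zeros((batch_size, n_classes), dtype=np.float32)
y_batch[np.arange(batch_size), labels] = 1.0

print(x_batch.shape, y_batch.shape)  # (20, 3, 50, 50) (20, 43)
```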
> >
> >
> > Best,
> >
> > Tony
> >
> >
> >
> > On Tue, Aug 30, 2016 at 7:17 PM, Pascal Lamblin <[email protected]>
> > wrote:
> >
> > > For some reason, some of the 4D arrays in your graph seem to be arrays
> > > of type int64, which throws off the logic trying to ensure gradients
> > > have the same precision as their forward equivalent.
> > >
> > > You can try to use theano.printing.debugprint(cost, print_type=True)
> > > to find out which variables are int.
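One common way int64 tensors sneak into a graph like this (an assumption on my part, since the code below feeds the numpy array X_train straight into conv2d) is image data loaded as raw integers. A minimal numpy sketch of the symptom and the floatX-style cast that avoids it:

```python
import numpy as np

# Hypothetical stand-in for image data loaded as raw integer pixel values;
# any graph built directly on such an array inherits the integer dtype.
X_train = np.random.randint(0, 256, size=(10, 3, 50, 50))
print(X_train.dtype.kind)  # 'i' (an integer type, not a float type)

# Casting with an asarray-based floatX helper, as in the code below, fixes it:
X_train = np.asarray(X_train, dtype="float32")
print(X_train.dtype)  # float32
```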
> > >
> > > On Tue, Aug 30, 2016, [email protected] wrote:
> > > > Hi, everyone:
> > > > I am writing CNN code, but I get an error when I compute T.grad in
> > > > Theano. I hope you can help me analyze it. I don't know why the
> > > > error occurs or how I should fix it. I would appreciate any help!
> > > > Here are some details of the dataset: it consists of many RGB
> > > > images, each 50 pixels wide and 50 pixels high. I am waiting for
> > > > your reply :)
> > > > Here is my code:
> > > >
> > > > # function definition for CNN
> > > > srng = RandomStreams()
> > > > def floatX(X):
> > > >     return np.asarray(X, dtype=theano.config.floatX)
> > > >
> > > > def init_weights(shape):
> > > >     return theano.shared(floatX(np.random.randn(*shape) * 0.01))
> > > >
> > > > def dropout(X, p_use=1.):
> > > >     if p_use < 1:
> > > >         p_sampled = srng.binomial(p=p_use, n=1, size=X.shape, dtype=theano.config.floatX)
> > > >         X = X * p_sampled / p_use
> > > >     return X
> > > >
> > > > def rectify(X):
> > > >     return T.maximum(X, 0.)
> > > >
> > > > def PRelu(X, a):
> > > >     return T.maximum(X, 0.) + a * T.minimum(X, 0.)
> > > >
> > > > def softmax(X):
> > > >     e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
> > > >     print e_x
> > > >     return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
> > > >
> > > > def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
> > > >     grads = T.grad(cost=cost, wrt=params)
> > > >
> > > >     updates = []
> > > >     for p, g in zip(params, grads):
> > > >         acc = theano.shared(p.get_value() * 0.)
> > > >         acc_new = rho * acc + (1 - rho) * g ** 2
> > > >         gradient_scaling = T.sqrt(acc_new + epsilon)
> > > >         g = g / gradient_scaling
> > > >         updates.append((acc, acc_new))
> > > >         updates.append((p, p - lr * g))
> > > >     return updates
> > > >
> > > > # model building
> > > > X = T.ftensor4('x')
> > > > Y = T.fmatrix('y')
> > > >
> > > > # parameters initialization
> > > > X_train = X_train.reshape(-1, 3, 50, 50)
> > > > X_test = X_test.reshape(-1, 3, 50, 50)
> > > > W_conv1 = init_weights((4, 3, 5, 5))
> > > > b_conv1 = np.zeros((4,))
> > > > W_conv2 = init_weights((6, 4, 3, 3))
> > > > b_conv2 = np.zeros((6,))
> > > > W_fcn = init_weights((54, 70))
> > > > b_fcn = np.zeros((70,))
> > > > W_fcn2 = init_weights((70, 43))
> > > > b_fcn2 = np.zeros((43,))
> > > >
> > > > # convolution and pooling
> > > > maxpool_shape = (2, 2)
> > > > p_drop_input = 0.8
> > > > conv_layer1 = rectify(conv2d(X_train, W_conv1, border_mode='full'))
> > > > subsampling_layer1 = pool_2d(conv_layer1, maxpool_shape, ignore_border=True)
> > > > out_layer1 = subsampling_layer1
> > > > out_layer1 = dropout(subsampling_layer1, p_drop_input)
> > > >
> > > > p_drop_hidden = 0.6
> > > > conv_layer2 = rectify(conv2d(out_layer1, W_conv2, border_mode='valid'))
> > > > subsampling_layer2 = pool_2d(conv_layer2, maxpool_shape, ignore_border=True)
> > > > out_layer2 = dropout(subsampling_layer2, p_drop_hidden)
> > > > conv_out = T.flatten(out_layer2, outdim = 2)
> > > >
> > > > # fully connected NN
> > > > hidden = rectify(T.dot(conv_out, W_fcn))
> > > > hidden = dropout(hidden, p_drop_hidden)
> > > >
> > > > py_x = softmax(T.dot(hidden, W_fcn2))
> > > > y_x = T.argmax(py_x, axis=1)
> > > >
> > > > # compute cost and update
> > > > cost = T.mean(T.nnet.categorical_crossentropy(py_x , Y))
> > > > params = [W_conv1, W_conv2, W_fcn, W_fcn2]
> > > > print cost
> > > > print params
> > > > updates = RMSprop(cost, params, lr=0.001)
> > > >
> > > > And the error is:
> > > >
> > > > Traceback (most recent call last):
> > > >   File "/PycharmProjects/CNN/CNN.py", line 180, in <module>
> > > >     updates = RMSprop(cost, params, lr=0.001)
> > > >   File "/PycharmProjects/CNN/CNN.py", line 116, in RMSprop
> > > >     grads = T.grad(cost=cost, wrt=params)
> > > >   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 561, in grad
> > > >     grad_dict, wrt, cost_name)
> > > >   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 1324, in _populate_grad_dict
> > > >     rval = [access_grad_cache(elem) for elem in wrt]
> > > >   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 1279, in access_grad_cache
> > > >     term = access_term_cache(node)[idx]
> > > >   File "/Library/Python/2.7/site-packages/theano/gradient.py", line 1113, in access_term_cache
> > > >     input_grads = node.op.grad(inputs, new_output_grads)
> > > >   File "/Library/Python/2.7/site-packages/theano/tensor/nnet/abstract_conv.py", line 828, in grad
> > > >     d_bottom = bottom.type.filter_variable(d_bottom)
> > > >   File "/Library/Python/2.7/site-packages/theano/tensor/type.py", line 233, in filter_variable
> > > >     self=self))
> > > > TypeError: Cannot convert Type TensorType(float64, 4D) (of Variable
> > > > AbstractConv2d_gradInputs{border_mode='full', subsample=(1, 1),
> > > > filter_flip=True, imshp=(None, None, None, None), kshp=(None, None, None,
> > > > None)}.0) into Type TensorType(int64, 4D). You can try to manually convert
> > > > AbstractConv2d_gradInputs{border_mode='full', subsample=(1, 1),
> > > > filter_flip=True, imshp=(None, None, None, None), kshp=(None, None, None,
> > > > None)}.0 into a TensorType(int64, 4D).
> > > >
> > > > Process finished with exit code 1
> > > >
> > > > --
> > > >
> > > > ---
> > > > You received this message because you are subscribed to the Google
> > > > Groups "theano-users" group.
> > > > To unsubscribe from this group and stop receiving emails from it, send
> > > > an email to [email protected].
> > > > For more options, visit https://groups.google.com/d/optout.
> > >
> > >
> > > --
> > > Pascal
> > >
> > >
> >
>
> --
> Pascal
>
>

