[theano-users] Re: Input error(extra dimensions) at theano.function

2016-09-06 Thread Jesse Livezey
Your inputs should be tensor4 rather than matrix if you're passing them 
into a CNN.
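
Something like this, roughly (just a sketch; I'm assuming each (9, 9) grayscale 
patch goes in as a batch of one with a single channel, and that the rest of your 
pastebin code stays the same — T.dtensor4 keeps the float64 dtype you're already 
using, plain T.tensor4 would use floatX):

import numpy as np
import theano.tensor as T

# 4D symbolic inputs: (batch, channels, height, width)
input_left = T.dtensor4('input_left')
input_right = T.dtensor4('input_right')

# ... build the CNN and s_plus on top of these, as in your current code ...

# reshape each (9, 9) patch to (1, 1, 9, 9) before calling the compiled function
train_set_left = train_set_left.astype(np.float64).reshape(1, 1, 9, 9)
train_set_right_positive = train_set_right_positive.astype(np.float64).reshape(1, 1, 9, 9)

# train_model = theano.function([input_left, input_right], [s_plus])
# print(train_model(train_set_left, train_set_right_positive))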

On Tuesday, September 6, 2016 at 7:22:38 AM UTC-7, Ganesh Iyer wrote:
>
>  
> Hi guys, 
>
> I'm new to this group and Theano in general. I'm trying to send two 
> grayscale image patches (2D numpy arrays of shape (9, 9), read with 
> cv2.imread(name, 0)) through a CNN architecture. I'm passing these as 
> inputs to theano.function.
>
> train_set_left=np.float64(train_set_left)
> train_set_right_positive=np.float64(train_set_right_positive)
>
> train_model = theano.function(inputs=[input_left, input_right], outputs=[s_plus])
> print(train_model(train_set_left,train_set_right_positive))
>
> The error I get at this point is:
>
> at index 0(0-based)', 'Wrong number of dimensions: expected 4, got 2 with 
> shape (9, 9).')
>
>
> input_left and input_right are defined earlier in the code as:
>
> input_left=T.dmatrix('input_left')
> input_right=T.dmatrix('input_right')
>
> Is there something wrong with the input dimensions in this case?
>
> Full Code: http://pastebin.com/33fTyb3K
> The code itself is based on the LeNet tutorial, but is a bit messy. 
>
> Please help.
>



[theano-users] Re: error 4D input data

2016-09-06 Thread Jesse Livezey
One of your labels is too large, or possibly too small. Are your labels 
from 0 to n-1 or 1 to n? They should be 0 to n-1.
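
A quick way to check (just a sketch; train_set_y stands in for however your 
labels are actually stored):

import numpy as np

y = np.asarray(train_set_y, dtype='int32')
print(y.min(), y.max())  # should print 0 and n_classes - 1

# if the labels run from 1 to n, shift them down to 0 .. n-1
if y.min() == 1:
    y = y - 1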

On Tuesday, September 6, 2016 at 2:19:18 AM UTC-7, Beatriz G. wrote:
>
> Hi everyone,
>
> I am trying to use 4-dimensional image data, but I get the following error 
> and I do not know what it means:
>
> ValueError: y_i value out of bounds
> Apply node that caused the error: 
> CrossentropySoftmaxArgmax1HotWithBias(Dot22.0, b, Subtensor{int64:int64:}.0)
> Toposort index: 34
> Inputs types: [TensorType(float64, matrix), TensorType(float64, vector), 
> TensorType(int32, vector)]
> Inputs shapes: [(20, 4), (4,), (20,)]
> Inputs strides: [(32, 8), (8,), (4,)]
> Inputs values: ['not shown', array([ 0.,  0.,  0.,  0.]), 'not shown']
> Outputs clients: 
> [[Sum{acc_dtype=float64}(CrossentropySoftmaxArgmax1HotWithBias.0)], 
> [CrossentropySoftmax1HotWithBiasDx(Elemwise{Inv}[(0, 0)].0, 
> CrossentropySoftmaxArgmax1HotWithBias.1, Subtensor{int64:int64:}.0)], []]
>
> Backtrace when the node is created(use Theano flag traceback.limit=N to 
> make it longer):
>   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 446, in 
> 
> evaluate_lenet5()
>   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 257, in 
> evaluate_lenet5
> cost = layer3.negative_log_likelihood(y)
>   File "/home/beaa/Escritorio/Theano/logistic_sgd.py", line 146, in 
> negative_log_likelihood
> return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
>
> HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and 
> storage map footprint of this apply node.
>
>
> Here is how I give the data to the layers:
>
>
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 4, 104, 52),
    filter_shape=(nkerns[0], 4, 5, 5),
    poolsize=(2, 2)
)
>
>
layer1 = LeNetConvPoolLayer(
    rng,
    input=layer0.output,
    image_shape=(batch_size, nkerns[0], 50, 24),
    filter_shape=(nkerns[1], nkerns[0], 5, 5),
    poolsize=(2, 2)
)
>
>
> My data is 104*52*4.
>
>
> Thanks in advance. Regards.
>
>



[theano-users] Remove the validation step for trying out theano-CNN

2016-09-06 Thread Mallika Agarwal
Hello, 

I have a dataset divided into just a train and test set. Is there a way I 
can skip the "validation" part? 

Could someone guide me on how to do this? I can't simply remove the part 
where the validation score is checked, can I?

if (iter + 1) % validation_frequency == 0:

    # compute zero-one loss on validation set
    validation_losses = [validate_model(i) for i
                         in range(n_valid_batches)]
    this_validation_loss = numpy.mean(validation_losses)
    print('epoch %i, minibatch %i/%i, validation error %f %%' %
          (epoch, minibatch_index + 1, n_train_batches,
           this_validation_loss * 100.))

    # if we got the best validation score until now
    if this_validation_loss < best_validation_loss:

        # improve patience if loss improvement is good enough
        if this_validation_loss < best_validation_loss * \
           improvement_threshold:
            patience = max(patience, iter * patience_increase)

        # save best validation score and iteration number
        best_validation_loss = this_validation_loss
        best_iter = iter

        # test it on the test set
        test_losses = [
            test_model(i)
            for i in range(n_test_batches)
        ]
        test_score = numpy.mean(test_losses)
        print((' epoch %i, minibatch %i/%i, test error of '
               'best model %f %%') %
              (epoch, minibatch_index + 1, n_train_batches,
               test_score * 100.))

This is from convolutional_mlp.py. 
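
Would something like the following work (just a sketch reusing the tutorial's 
variables; best_test_score would have to be initialized to numpy.inf before the 
training loop, and I realize that picking the "best" model on the test set 
itself gives an optimistic error estimate)?

if (iter + 1) % validation_frequency == 0:

    # evaluate directly on the test set instead of a validation set
    test_losses = [test_model(i) for i in range(n_test_batches)]
    test_score = numpy.mean(test_losses)
    print('epoch %i, minibatch %i/%i, test error %f %%' %
          (epoch, minibatch_index + 1, n_train_batches,
           test_score * 100.))

    # keep the best test score and iteration seen so far
    if test_score < best_test_score:
        patience = max(patience, iter * patience_increase)
        best_test_score = test_score
        best_iter = iter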

Thanks in anticipation!
