Fred,

I ran the convnet again using NanGuardMode, and this time there was no illegal
memory access.
The training cost starts at 0.69336 and never changes.

This is the output:

Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
26/07/2016
11:16:52


images for training: 288
images for validation: 50
epochs: 300


... training neural network 25


training @ iter =  0
training @ iter =  200


training cost 0.69336
epoch 1, training batch 288/288,validation error 50.000 %
training @ iter =  400


training cost 0.69336
epoch 2, training batch 288/288,validation error 50.000 %
training @ iter =  600
training @ iter =  800


training cost 0.69336
epoch 3, training batch 288/288,validation error 50.000 %
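One observation of mine (not from the log itself): 0.69336 is very close to ln 2 ≈ 0.69315, which is the cross-entropy a two-class network produces when its output is stuck near p = 0.5 for every example. Together with the flat 50.000 % validation error, that suggests the net is predicting at chance and the gradients are not updating anything. A quick check:

```python
import math

# Binary cross-entropy of a classifier that always outputs p = 0.5:
# -ln(0.5) = ln(2), i.e. chance level for two balanced classes.
chance_cost = -math.log(0.5)
print(round(chance_cost, 5))  # 0.69315, close to the observed 0.69336
```

The small gap between 0.69336 and ln 2 would just come from outputs that are near, but not exactly, 0.5.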
