Fred,
I checked the cost and the regularization terms: if I leave the regularization terms out, the cost does not become NaN, but it starts at training cost = 0.69336 and does not change at all during the iterations. Using either ignore_border=False or ignore_border=True does not fix the bug.
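
A side note on the value itself: 0.69336 looks like chance level for two classes. It is the nearest float16 to ln(2) = 0.6931..., i.e. the cost you get when the softmax outputs ~0.5 for both classes (I am assuming n_out = 2 here). A quick check:

    import numpy as np
    # chance-level negative log-likelihood for 2 classes: -log(0.5) = ln(2)
    print(-np.log(0.5))                    # 0.693147...
    # nearest float16 to ln(2) -- matches the stuck training cost
    print(float(np.float16(np.log(2))))   # 0.693359375
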
This is my LogisticRegression class (using activation = T.nnet.sigmoid):

import cPickle

import numpy as np
from numpy import ones, zeros

import theano
import theano.tensor as T
from theano import shared
from theano.misc.safe_asarray import _asarray
from theano.tensor.nnet import softmax

floatX = theano.config.floatX


class LogisticRegression(object):
    """Logistic Regression layer (top / softmax / output layer)."""

    def __init__(self, input, n_in, n_out, rng, layer_name,
                 activation, L1_reg, L2_reg, W, b, borrow=True):

        # Weights: if pretrained, load them from disk
        # (the passed W is only used as a flag; the values come from the .pkl)
        if W is not None:
            with open('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/params/layer6_w_0.pkl', 'rb') as f:
                W = cPickle.load(f)
            self.W = shared(W, name=layer_name + "_W", borrow=borrow)
        elif activation == T.nnet.softplus:
            W_val = _asarray(rng.normal(loc=0, scale=0.01,
                                        size=(n_in, n_out)), dtype=floatX)
            self.W = shared(W_val, name=layer_name + "_W", borrow=borrow)
        else:
            self.W = shared(zeros((n_in, n_out), dtype=floatX),
                            name=layer_name + "_W", borrow=True)

        # L1 norm; one regularization option is to enforce the L1 norm to
        # be small
        self.L1 = abs(self.W).sum()

        # Square of the L2 norm; one regularization option is to enforce
        # the square of the L2 norm to be small
        self.L2_sqr = (self.W ** 2).sum()

        # Bias vector: if pretrained, load it from disk
        if np.any(b):
            with open('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/params/layer6_b_0.pkl', 'rb') as f:
                b = cPickle.load(f)
            self.b = shared(b, name=layer_name + "_b", borrow=borrow)
        elif activation == T.nnet.softplus:
            b_val = ones((n_out,), dtype=floatX)
            self.b = shared(value=b_val, name=layer_name + "_b", borrow=True)
        else:
            self.b = shared(zeros((n_out,), dtype=floatX),
                            name=layer_name + "_b", borrow=True)

        self.L1_reg = L1_reg
        self.L2_reg = L2_reg

        # Matrix of class-membership probabilities
        self.p_y_given_x = softmax(T.dot(input, self.W) + self.b)
        # Predicted class: the one with the highest probability
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        # Parameters of the model
        self.params = [self.W, self.b]

        # Keep track of the model input
        self.input = input

    def cost(self, y):
        """Regularized negative log-likelihood cost."""
        regularization = self.L1_reg * self.L1 + self.L2_reg * self.L2_sqr
        return (-T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
                + regularization)
        # Unregularized variant I also tried:
        # return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

    def errors(self, y):
        """Error rate over the examples in the minibatch."""
        return T.mean(T.neq(self.y_pred, y))

    def accuracy(self, y):
        """Accuracy over the examples in the minibatch."""
        return T.mean(T.eq(self.y_pred, y))
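
For context, this is roughly how the layer is wired into the training function, using the imports above (an illustrative sketch, not my exact code from mpr_convnet_class.py; n_in, n_out, the learning rate and the regularization weights are made up):

    rng = np.random.RandomState(1234)

    x = T.matrix('x')        # minibatch of flattened layer inputs
    y = T.ivector('y')       # integer class labels

    layer6 = LogisticRegression(input=x, n_in=500, n_out=2, rng=rng,
                                layer_name='layer6',
                                activation=T.nnet.sigmoid,
                                L1_reg=0.001, L2_reg=0.001, W=None, b=None)

    cost = layer6.cost(y)
    # plain SGD on W and b
    updates = [(p, p - 0.01 * T.grad(cost, p)) for p in layer6.params]
    train_model = theano.function([x, y], cost, updates=updates)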

--------


I ran the code with the flag mode=DebugMode. My .theanorc is:
[global]
floatX = float16
device = cuda

[cuda]
root = /usr/local/cuda-7.5

[nvcc]
fastmath = True

optimizer = fast_compile

[DebugMode]
check_py = False



It raised this error: "ValueError: convolve2d not available for this type." 

This is the output:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py', wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{pow,no_inplace} due to unsupported float16
Disabling C code for Elemwise{abs_,no_inplace} due to unsupported float16
Disabling C code for Elemwise{sqr,no_inplace} due to unsupported float16
Disabling C code for Elemwise{sgn} due to unsupported float16
Disabling C code for Elemwise{pow} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{pow,no_inplace} due to unsupported float16
Disabling C code for Elemwise{pow} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{second,no_inplace} due to unsupported float16
Disabling C code for Elemwise{neg} due to unsupported float16
Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{second,no_inplace} due to unsupported float16
Disabling C code for mrg_uniform{TensorType(float16, matrix),no_inplace} due to unsupported float16
Disabling C code for mrg_uniform{TensorType(float16, matrix),no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{lt,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{lt,no_inplace} due to unsupported float16
Disabling C code for AbstractConv2d{border_mode='valid', subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 5), filter_dilation=(1, 1)} due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for Elemwise{lt,no_inplace} due to unsupported float16
Disabling C code for Elemwise{lt,no_inplace} due to unsupported float16
Disabling C code for AbstractConv2d{border_mode='valid', subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 5), filter_dilation=(1, 1)} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for AdvancedIncSubtensor{inplace=False, set_instead_of_inc=False} due to unsupported float16
Disabling C code for Elemwise{neg,no_inplace} due to unsupported float16
Disabling C code for Elemwise{neg,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for DiagonalSubtensor due to unsupported float16
Disabling C code for Elemwise{second,no_inplace} due to unsupported float16
Disabling C code for AdvancedIncSubtensor{inplace=False, set_instead_of_inc=False} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Sum{axis=[3], acc_dtype=float32} due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for sigmoid due to unsupported float16
Disabling C code for Elemwise{scalar_sigmoid} due to unsupported float16
Disabling C code for Elemwise{scalar_sigmoid} due to unsupported float16
Disabling C code for Elemwise{sub} due to unsupported float16
Disabling C code for DiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Sum{axis=[3], acc_dtype=float32} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for sigmoid due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for sigmoid due to unsupported float16
Disabling C code for Elemwise{scalar_sigmoid} due to unsupported float16
Disabling C code for Elemwise{scalar_sigmoid} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{sub} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for CrossentropySoftmaxArgmax1HotWithBias due to unsupported float16
Disabling C code for CrossentropySoftmaxArgmax1HotWithBias due to unsupported float16
Disabling C code for SoftmaxWithBias due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for Softmax due to unsupported float16
Disabling C code for Softmax due to unsupported float16
Disabling C code for Elemwise{neg,no_inplace} due to unsupported float16
Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
Disabling C code for CrossentropySoftmax1HotWithBiasDx due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for Elemwise{log,no_inplace} due to unsupported float16
Disabling C code for Elemwise{true_div} due to unsupported float16
Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
Disabling C code for Elemwise{neg,no_inplace} due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for Sum{axis=[0], acc_dtype=float32} due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for AdvancedSubtensor due to unsupported float16
Disabling C code for Elemwise{second,no_inplace} due to unsupported float16
Disabling C code for SoftmaxGrad due to unsupported float16
Disabling C code for Elemwise{true_div,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{second} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{neg,no_inplace} due to unsupported float16
Disabling C code for Elemwise{neg,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for Elemwise{add,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{second,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{neg} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for Sum{axis=[0], acc_dtype=float32} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{second} due to unsupported float16
Disabling C code for Elemwise{second} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for Elemwise{mul} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for IncDiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for Sum{axis=[0], acc_dtype=float32} due to unsupported float16
Disabling C code for Sum{axis=[0, 2, 3, 4], acc_dtype=float32} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Sum{axis=[1, 2, 3], acc_dtype=float32} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{second} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for AbstractConv2d_gradWeights{border_mode='valid', subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 5), filter_dilation=(1, 1)} due to unsupported float16
Disabling C code for AbstractConv2d_gradWeights{border_mode='valid', subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 5), filter_dilation=(1, 1)} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{identity} due to unsupported float16
Disabling C code for Elemwise{sub,no_inplace} due to unsupported float16
Disabling C code for OutputGuard due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
26/07/2016
10:22:26


images for training: 382
images for validation: 68
epochs: 300


... training neural network 25


training @ iter =  0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile
    execfile(filename, namespace)
  File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
    builtins.execfile(filename, *where)
  File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py", line 97, in <module>
    run_experiments()
  File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py", line 60, in run_experiments
    Pretrained = False
  File "mpr_convnet_class.py", line 339, in __init__
    training_cost_ij=train_model(a, b)
  File "/home/luca/data/Theano-master/theano/compile/function_module.py", line 862, in __call__
    self.fn() if output_subset is None else\
  File "/home/luca/data/Theano-master/theano/compile/debugmode.py", line 2305, in deco
    return f()
  File "/home/luca/data/Theano-master/theano/compile/debugmode.py", line 2008, in f
    thunk_py()
  File "/home/luca/data/Theano-master/theano/gof/op.py", line 908, in rval
    r = p(n, [x[0] for x in i], o)
  File "/home/luca/data/Theano-master/theano/tensor/nnet/abstract_conv.py", line 848, in perform
    conv_out = self.conv2d(img, kern, mode="valid", dilation=self.filter_dilation)
  File "/home/luca/data/Theano-master/theano/tensor/nnet/abstract_conv.py", line 776, in conv2d
    1, val, bval, 0)
ValueError: convolve2d not available for this type.
>>> 

------------

Then I tried NanGuardMode:

from theano.compile.nanguardmode import NanGuardMode

train_model = theano.function(
    [x, y], cost, updates=updates,
    mode=NanGuardMode(nan_is_error=True, inf_is_error=True, big_is_error=True))

This is the output, which ends in an error:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py', wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
26/07/2016
10:38:08


images for training: 288
images for validation: 50
epochs: 300


... training neural network 25


training @ iter =  0
training @ iter =  200


training cost 0.69336
epoch 1, training batch 288/288,validation error 50.000 %
training @ iter =  400
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile
    execfile(filename, namespace)
  File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
    builtins.execfile(filename, *where)
  File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py", line 97, in <module>
    run_experiments()
  File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py", line 60, in run_experiments
    Pretrained = False
  File "mpr_convnet_class.py", line 337, in __init__
    training_cost_ij=train_model(a, b)
  File "/home/luca/data/Theano-master/theano/compile/function_module.py", line 862, in __call__
    self.fn() if output_subset is None else\
  File "/home/luca/data/Theano-master/theano/gof/vm.py", line 509, in __call__
    storage_map=storage_map)
  File "/home/luca/data/Theano-master/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/home/luca/data/Theano-master/theano/gof/vm.py", line 478, in __call__
    _, dt = self.run_thunk_of_node(current_apply)
  File "/home/luca/data/Theano-master/theano/gof/vm.py", line 401, in run_thunk_of_node
    compute_map=self.compute_map,
  File "/home/luca/data/Theano-master/theano/compile/nanguardmode.py", line 280, in nan_check
    do_check_on(storage_map[var][0], node)
  File "/home/luca/data/Theano-master/theano/compile/nanguardmode.py", line 239, in do_check_on
    if contains_inf(var, nd):
  File "/home/luca/data/Theano-master/theano/compile/nanguardmode.py", line 133, in contains_inf
    return np.isinf(np.nanmax(arr)) or np.isinf(np.nanmin(arr))
  File "/home/luca/anaconda2/lib/python2.7/site-packages/numpy/lib/nanfunctions.py", line 324, in nanmax
    res = np.fmax.reduce(a, axis=axis, out=out, keepdims=keepdims)
  File "pygpu/gpuarray.pyx", line 1476, in pygpu.gpuarray.GpuArray.__array__ (pygpu/gpuarray.c:19232)
  File "pygpu/gpuarray.pyx", line 1299, in pygpu.gpuarray.pygpu_as_ndarray (pygpu/gpuarray.c:17019)
  File "pygpu/gpuarray.pyx", line 346, in pygpu.gpuarray.array_read (pygpu/gpuarray.c:6064)
pygpu.gpuarray.GpuArrayException: an illegal memory access was encountered
Apply node that caused the error: InplaceGpuDimShuffle{0,1,3,4,2}(GpuReshape{5}.0)
Toposort index: 159
Inputs types: [GpuArrayType<None>(float16, (True, False, False, False, False))]
Inputs shapes: [(1, 20, 8, 8, 8)]
Inputs strides: [(20480, 1024, 128, 16, 2)]
Inputs values: ['not shown']
Outputs clients: [[GpuReshape{4}(InplaceGpuDimShuffle{0,1,3,4,2}.0, MakeVector{dtype='int64'}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
>>>


---

Many thanks for your help

Luca
