Fred, 
I set cnmem = 1 in .theanorc with float16, but no message is shown as with 
float32 ("CNMeM is enabled with initial size: 95.0% of memory"), and the 
speed has not improved.
These are the outputs:


USING FLOAT16

.theanorc:

[global]
floatX = float16
device = cuda

[lib]
cnmem=1

[cuda] 
root = /usr/local/cuda-7.5


[nvcc]
fastmath=True

optimizer = fast_compile
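
(Side note, in case it is relevant: if I read the docs right, the lib.cnmem 
flag only applies to the old backend (device = gpu), while the new gpuarray 
backend (device = cuda) controls preallocation through gpuarray.preallocate 
instead. A sketch of what the float16 config might need, assuming that 
option name is correct:)

[global]
floatX = float16
device = cuda

# assumption: on the gpuarray backend, preallocation is a fraction of GPU
# memory set here (e.g. 0.95 for 95%); lib.cnmem is ignored on this backend
[gpuarray]
preallocate = 0.95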

output:

Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv.py', 
wdir='/run/media/luca/8C9A-AEF4/core')
Mapped name None to device cuda: Tesla K40c
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
29/08/2016
11:30:44


images for training: 574
images for validation: 102
epochs: 1000


... training neural network 33


training @ iter =  0
------------------------

USING FLOAT32
.theanorc:

[global]
floatX = float32
device = gpu

[lib]
cnmem=1

[cuda] 
root = /usr/local/cuda-7.5


[nvcc]
fastmath=True

optimizer = fast_compile

output:
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv.py', 
wdir='/run/media/luca/8C9A-AEF4/core')
Using gpu device 0: Tesla K40c (CNMeM is enabled with initial size: 95.0% 
of memory, cuDNN 5005)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")


start time:
29/08/2016
11:32:43


images for training: 574
images for validation: 102
epochs: 1000


... training neural network 33


training @ iter =  0



You received this message because you are subscribed to the Google Groups 
"theano-users" group.
