With "optimizer=None", no graph optimizations are applied, in particular:
- inplace operations are not inserted, and
- you are going to use an extremely inefficient implementation of convolution

I would advise "optimizer=fast_compile" or "optimizer=fast_run" (the
default) instead.
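
For example, a minimal sketch of the change, assuming the rest of your
.theanorc stays as you posted it:

[global]
floatX=float32
device=cpu
optimizer=fast_run

You can also override the flag for a single run without editing the file,
through the THEANO_FLAGS environment variable, e.g.
THEANO_FLAGS=optimizer=fast_run python your_script.py
(your_script.py here stands in for your actual script name).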

On Fri, Nov 11, 2016, Alberto Olivero wrote:
> I use Ubuntu 16.04
> Theano 0.8
> Lasagne 0.2dev2
> nolearn 0.61dev0
> 
> In the file .theanorc:
> [global]
> floatX=float32
> device=cpu
> optimizer=None
> 
> I set it to run on the CPU just to keep things simple, and because my GPU 
> has only 1 GB of memory, which is not enough for this model.
> The 4 GB of RAM on the motherboard appear fully used during training.
> 
> My Python script simply imports the VGG convolutional neural net available 
> on the internet, written in Lasagne, which I rewrote in nolearn.
> When the fit function runs,
> 
> cnn.fit(train224,target) 
> 
> I receive the following error message after the description of the net 
> provided by nolearn.
> 
> 
> # Neural Network with 119586826 learnable parameters
> 
> ## Layer information
> 
> name        size           total    cap.Y    cap.X    cov.Y    cov.X
> ----------  -----------  -------  -------  -------  -------  -------
> input       3x224x224     150528   100.00   100.00   100.00   100.00
> conv1_1     64x224x224   3211264   100.00   100.00     1.34     1.34
> conv1_2     64x224x224   3211264    60.00    60.00     2.23     2.23
> pool1       64x112x112    802816    60.00    60.00     2.23     2.23
> conv2_1     128x112x112  1605632    66.67    66.67     4.02     4.02
> conv2_2     128x112x112  1605632    46.15    46.15     5.80     5.80
> pool2       128x56x56     401408    46.15    46.15     5.80     5.80
> conv3_1     256x56x56     802816    57.14    57.14     9.38     9.38
> conv3_2     256x56x56     802816    41.38    41.38    12.95    12.95
> conv3_3     256x56x56     802816    32.43    32.43    16.52    16.52
> pool3       256x28x28     200704    32.43    32.43    16.52    16.52
> conv4_1     512x28x28     401408    45.28    45.28    23.66    23.66
> conv4_2     512x28x28     401408    34.78    34.78    30.80    30.80
> conv4_3     512x28x28     401408    28.24    28.24    37.95    37.95
> pool4       512x14x14     100352    28.24    28.24    37.95    37.95
> conv5_1     512x14x14     100352    41.03    41.03    52.23    52.23
> conv5_2     512x14x14     100352    32.21    32.21    66.52    66.52
> conv5_3     512x14x14     100352    26.52    26.52    80.80    80.80
> pool5       512x7x7        25088    26.52    26.52    80.80    80.80
> fc6         4096            4096   100.00   100.00   100.00   100.00
> fc6dropout  4096            4096   100.00   100.00   100.00   100.00
> fc7         4096            4096   100.00   100.00   100.00   100.00
> fc7dropout  4096            4096   100.00   100.00   100.00   100.00
> fc8         10                10   100.00   100.00   100.00   100.00
> 
> Explanation
>     X, Y:    image dimensions
>     cap.:    learning capacity
>     cov.:    coverage of image
>     magenta: capacity too low (<1/6)
>     cyan:    image coverage too high (>100%)
>     red:     capacity too low and coverage too high
> 
> 
> Traceback (most recent call last):
>   File "/home/alberto/pycharm-community-2016.2.3/helpers/pydev/pydevd.py", 
> line 1580, in <module>
>     globals = debugger.run(setup['file'], None, None, is_module)
>   File "/home/alberto/pycharm-community-2016.2.3/helpers/pydev/pydevd.py", 
> line 964, in run
>     pydev_imports.execfile(file, globals, locals)  # execute the script
>   File "/home/alberto/Scrivania/deep_learn/mnist_example/kagle 
> dataset_2_VGG.py", line 248, in <module>
>     cnn.fit(train224,target) # train the CNN model for n epochs
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/nolearn/lasagne/base.py", 
> line 674, in fit
>     self.train_loop(X, y, epochs=epochs)
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/nolearn/lasagne/base.py", 
> line 737, in train_loop
>     self.apply_batch_func(self.train_iter_, Xb, yb))
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/nolearn/lasagne/base.py", 
> line 828, in apply_batch_func
>     return func(Xb) if yb is None else func(Xb, yb)
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/theano/compile/function_module.py"
> , line 871, in __call__
>     storage_map=getattr(self.fn, 'storage_map', None))
>   File "/home/alberto/.local/lib/python2.7/site-packages/theano/gof/link.py"
> , line 314, in raise_with_op
>     reraise(exc_type, exc_value, exc_trace)
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/theano/compile/function_module.py"
> , line 859, in __call__
>     outputs = self.fn()
> MemoryError: 
> Apply node that caused the error: Elemwise{add,no_inplace}(AbstractConv2d{
> border_mode=(1, 1), subsample=(1, 1), filter_flip=False, imshp=(None, 3, 224
> , 224), kshp=(64, 3, 3, 3)}.0, DimShuffle{x,0,x,x}.0)
> Toposort index: 65
> Inputs types: [TensorType(float32, 4D), TensorType(float32, (True, False, 
> True, True))]
> Inputs shapes: [(128, 64, 224, 224), (1, 64, 1, 1)]
> Inputs strides: [(12845056, 200704, 896, 4), (256, 4, 4, 4)]
> Inputs values: ['not shown', 'not shown']
> Outputs clients: [[Elemwise{abs_,no_inplace}(Elemwise{add,no_inplace}.0), 
> Elemwise{add,no_inplace}(Elemwise{add,no_inplace}.0, Elemwise{abs_,
> no_inplace}.0)]]
> 
> Backtrace when the node is created(use Theano flag traceback.limit=N to 
> make it longer):
>   File "/home/alberto/pycharm-community-2016.2.3/helpers/pydev/pydevd.py", 
> line 1580, in <module>
>     globals = debugger.run(setup['file'], None, None, is_module)
>   File "/home/alberto/pycharm-community-2016.2.3/helpers/pydev/pydevd.py", 
> line 964, in run
>     pydev_imports.execfile(file, globals, locals)  # execute the script
>   File "/home/alberto/Scrivania/deep_learn/mnist_example/kagle 
> dataset_2_VGG.py", line 241, in <module>
>     cnn.initialize()
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/nolearn/lasagne/base.py", 
> line 479, in initialize
>     self.y_tensor_type,
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/nolearn/lasagne/base.py", 
> line 602, in _create_iter_funcs
>     layers, target=y_batch, **objective_kw)
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/nolearn/lasagne/base.py", 
> line 189, in objective
>     output_layer, deterministic=deterministic, **get_output_kw)
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/lasagne/layers/helper.py", 
> line 191, in get_output
>     all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
>   File 
> "/home/alberto/.local/lib/python2.7/site-packages/lasagne/layers/conv.py", 
> line 337, in get_output_for
>     activation = conved + self.b.dimshuffle(('x', 0) + ('x',) * self.n)
> 
> HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and 
> storage map footprint of this apply node.
> Backend TkAgg is interactive backend. Turning interactive mode on.
> We've got an error while stopping in post-mortem: <type 'exceptions.
> KeyboardInterrupt'>
> 
> 
> Process finished with exit code 1
> 
> 


-- 
Pascal
