Just a hunch: is it possible that the "axis=2" argument of
BatchNormalization has to be changed between TensorFlow and Theano, since
they may not use the same memory layout for convolutions?
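
One quick way to test that (a rough sketch on my side; I am assuming the
output tensor still carries Keras's internal _keras_shape attribute in
your version) is to print the conv layer's output shape under each
backend and see which axis has size nb_filter:

import os
os.environ['KERAS_BACKEND'] = 'theano'  # run once per backend and compare

from keras.layers import Input, Convolution1D

x = Input(shape=(None, 15))
y = Convolution1D(filter_length=3, nb_filter=4, border_mode='same')(x)

# The channel axis is whichever one has size 4 (nb_filter); that is the
# axis BatchNormalization must be given. If the two backends report
# different shapes, axis=2 can only be right for one of them.
print(y._keras_shape)

If both backends report the channels on axis 2, then the layout hunch is
wrong, and the Rebroadcast failure below more likely comes from how the
broadcast pattern is built on the Theano side.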

On Wed, Oct 05, 2016, David Menéndez Hurtado wrote:
> Hi,
> 
> I am using Keras to create a fully convolutional network with
> convolutional batch normalisation. The following example works with the
> TensorFlow backend, but fails with Theano. Theano works if I either
> remove the BN layer, use a fixed input width, or run it on the CPU (the
> fixed-width variant is shown after the example).
> 
> Is the bug in Keras or in Theano?
> 
> Here is the Theano call
> https://github.com/fchollet/keras/blob/master/keras/backend/theano_backend.py#L427
> 
> and here is a minimal example:
> 
> from __future__ import division, print_function
> import os
> #os.environ['KERAS_BACKEND'] = 'tensorflow'
> os.environ['KERAS_BACKEND'] = 'theano'
> 
> import numpy as np
> 
> from keras.models import Model
> from keras.layers import Dropout, Input
> from keras.layers import Convolution1D, BatchNormalization
> from keras.layers.advanced_activations import LeakyReLU
> 
> 
> input_layer = Input(shape=(None, 15))
> 
> layer = Convolution1D(filter_length=3, nb_filter=4,
>                       border_mode='same', bias=False)(input_layer)
> layer = BatchNormalization(mode=0, axis=2)(layer)
> layer = Dropout(0.3)(layer)
> layer = LeakyReLU()(layer)
> 
> output = Convolution1D(filter_length=3, nb_filter=1,
>                        border_mode='same')(layer)
> model = Model(input=input_layer, output=output)
> model.compile('adam', 'binary_crossentropy')
> 
> model.summary()
> 
> N_CASES = 100
> WIDTH = 20
> X = np.random.random((N_CASES, WIDTH, 15))
> y = np.random.random((N_CASES, WIDTH, 1))
> model.fit(X, y, batch_size=8)
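> 
> For reference, the fixed-width variant that trains fine for me under
> Theano just replaces the input line with a concrete length:
> 
> input_layer = Input(shape=(WIDTH, 15))  # fixed width instead of None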
> 
> 
> Traceback from Theano master:
> 
> Epoch 1/10
> Traceback (most recent call last):
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/compile/function_module.py", line 866, in __call__
>     self.fn() if output_subset is None else\
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gof/op.py", line 866, in rval
>     r = p(n, [x[0] for x in i], o)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/compile/ops.py", line 708, in perform
>     (axis, x.shape[axis]))
> ValueError: Dimension 2 in Rebroadcast's input was supposed to be 1 (got 4 instead)
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "minimal.py", line 30, in <module>
>     model.fit(X, y, batch_size=8)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 1106, in fit
>     callback_metrics=callback_metrics)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 824, in _fit_loop
>     outs = f(ins_batch)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/keras/backend/theano_backend.py", line 717, in __call__
>     return self.function(*inputs)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/compile/function_module.py", line 879, in __call__
>     storage_map=getattr(self.fn, 'storage_map', None))
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gof/link.py", line 325, in raise_with_op
>     reraise(exc_type, exc_value, exc_trace)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/six.py", line 685, in reraise
>     raise value.with_traceback(tb)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/compile/function_module.py", line 866, in __call__
>     self.fn() if output_subset is None else\
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gof/op.py", line 866, in rval
>     r = p(n, [x[0] for x in i], o)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/compile/ops.py", line 708, in perform
>     (axis, x.shape[axis]))
> ValueError: Dimension 2 in Rebroadcast's input was supposed to be 1 (got 4 instead)
> Apply node that caused the error: Rebroadcast{?,?,1}(GpuContiguous.0)
> Toposort index: 148
> Inputs types: [CudaNdarrayType(float32, (True, True, False, True))]
> Inputs shapes: [(1, 1, 4, 1)]
> Inputs strides: [(0, 0, 1, 0)]
> Inputs values: [b'CudaNdarray([[[[-0.24820676]\n   [-0.05651059]\n   [ 0.00497265]\n   [ 0.1637198 ]]]])']
> Outputs clients: [[GpuDimShuffle{0,1,2}(Rebroadcast{?,?,1}.0)]]
> 
> Backtrace when the node is created (use Theano flag traceback.limit=N to make it longer):
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1271, in access_grad_cache
>     term = access_term_cache(node)[idx]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in access_term_cache
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in <listcomp>
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1271, in access_grad_cache
>     term = access_term_cache(node)[idx]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in access_term_cache
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in <listcomp>
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1271, in access_grad_cache
>     term = access_term_cache(node)[idx]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1105, in access_term_cache
>     input_grads = node.op.grad(inputs, new_output_grads)
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1271, in access_grad_cache
>     term = access_term_cache(node)[idx]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in access_term_cache
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in <listcomp>
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1271, in access_grad_cache
>     term = access_term_cache(node)[idx]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in access_term_cache
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 965, in <listcomp>
>     output_grads = [access_grad_cache(var) for var in node.outputs]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1271, in access_grad_cache
>     term = access_term_cache(node)[idx]
>   File "/home/david/.virtualenvs/py35/lib/python3.5/site-packages/theano/gradient.py", line 1105, in access_term_cache
>     input_grads = node.op.grad(inputs, new_output_grads)
> 
> HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
> 


-- 
Pascal
