Hi guys,

I think I fixed the bug in Keras. Here is the corresponding pull request:
https://github.com/fchollet/keras/pull/3968
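
For anyone following the axis discussion below: the choice of `axis` in BatchNormalization tracks the dim ordering, since Theano-style conv outputs are laid out as (batch, channels, rows, cols) with channels on axis 1, while TF-style outputs are (batch, rows, cols, channels) with channels last. A minimal NumPy sketch of that equivalence (illustrative only, not the actual Keras implementation):

```python
import numpy as np

def batch_norm(x, channel_axis, eps=1e-5):
    # Normalize per channel: reduce over every axis except the channel axis.
    reduce_axes = tuple(i for i in range(x.ndim) if i != channel_axis)
    mean = x.mean(axis=reduce_axes, keepdims=True)
    var = x.var(axis=reduce_axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Theano-style ordering: (batch, channels, rows, cols) -> channel axis 1
x_th = np.random.rand(8, 3, 5, 5).astype('float32')
y_th = batch_norm(x_th, channel_axis=1)

# TF-style ordering: (batch, rows, cols, channels) -> channel axis 3
x_tf = x_th.transpose(0, 2, 3, 1)
y_tf = batch_norm(x_tf, channel_axis=3)

# Same result up to the transpose: the axis value must follow the layout.
assert np.allclose(y_th, y_tf.transpose(0, 3, 1, 2), atol=1e-4)
```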

Best,
Aloïs

On Wednesday, 5 October 2016 at 23:47:00 UTC+2, Pascal Lamblin wrote:
>
> On Wed, Oct 05, 2016, Daπid wrote: 
> > On 5 October 2016 at 21:01, Pascal Lamblin <[email protected]> wrote: 
> > > Just a hunch: is it possible that the "axis=2" parameter of 
> > BatchNormalization has to be changed between TF and Theano, since they 
> > > may not use the same memory layout for convolutions? 
> > 
> > I don't think so, the layout is fixed between backends, and I am sure 
> > it is correct because the number of parameters is what I would expect. 
> > Using axis=1 throws an error in Keras (before ever dispatching the 
> > backend), since its dimension is None. 
>
> Then, I don't know. It may be an issue in the gradient of some operation 
> in Theano. Can you try with test values and pdb, to try to pinpoint 
> which gradient operation inserts the tensor with a wrong size? 
>
>
> -- 
> Pascal 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
