I'm using nnet.bn.batch_normalization_train() and
nnet.bn.batch_normalization_test() for batch normalization; however, during
the test phase, nnet.bn.batch_normalization_test() produces wrong results.
For the time being, I just use nnet.bn.batch_normalization_train() with
running_average_factor set to zero for the test phase, as follows:

if deterministic is False:  # train phase
    normalized, input_mean, input_inv_std, self.mean, self.var = \
        T.nnet.bn.batch_normalization_train(
            input, self.gamma, self.beta, self.axes,
            self.epsilon, self.alpha, self.mean, self.var)
else:  # test phase
    # normalized = T.nnet.bn.batch_normalization_test(
    #     input, self.gamma, self.beta, self.mean, self.var,
    #     self.axes, self.epsilon)
    normalized, _, _, _, _ = T.nnet.bn.batch_normalization_train(
        input, self.gamma, self.beta, self.axes,
        self.epsilon, 0.0, self.mean, self.var)
return normalized
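
In case it helps reproduce the mismatch, here is a minimal standalone sketch
(variable names, shapes, and the 'per-activation' mode are made-up placeholders,
not my actual layer) that runs both code paths side by side on the same input
and the same stored statistics. Note that the train-mode path normalizes with
the current batch statistics, so the two outputs are only expected to agree
when the batch statistics happen to match the stored running mean/variance.

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')
# Per-activation parameters and running statistics for a 4-feature input.
gamma = theano.shared(np.ones((1, 4), dtype='float32'),
                      broadcastable=(True, False), name='gamma')
beta = theano.shared(np.zeros((1, 4), dtype='float32'),
                     broadcastable=(True, False), name='beta')
mean = theano.shared(np.zeros((1, 4), dtype='float32'),
                     broadcastable=(True, False), name='mean')
var = theano.shared(np.ones((1, 4), dtype='float32'),
                    broadcastable=(True, False), name='var')

# Test-mode path: normalize with the stored running statistics.
out_test = T.nnet.bn.batch_normalization_test(
    x, gamma, beta, mean, var, 'per-activation', 1e-4)

# Workaround path: train mode with running_average_factor = 0,
# which normalizes with the current batch statistics.
out_train, _, _, _, _ = T.nnet.bn.batch_normalization_train(
    x, gamma, beta, 'per-activation', 1e-4, 0.0, mean, var)

f = theano.function([x], [out_test, out_train])
a, b = f(np.random.randn(8, 4).astype('float32'))
print(np.abs(a - b).max())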



My Theano version is
'0.9.0beta1.dev-b2afa088d1cb416b4507348019af34adae908b73', with CUDA 8.0 and
cuDNN 5.1.
