7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368716803
@thbupt The batch size in training is 8, and in inference it is usually 1.
This is
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368714486
@tornadomeet It seems so, but I cannot draw a firm conclusion, because the
dataset is too small to be conclusive.
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368713958
@thbupt Actually I did what you said, but the same data batch produces
different outputs when using forward(is_train=False) and
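The train/inference gap described above comes from BatchNorm normalizing with different statistics in the two modes. Below is a minimal plain-Python sketch (an illustration, not the actual MXNet implementation) showing why the same batch yields different outputs: training mode uses the current batch's mean and variance, while `forward(is_train=False)` uses the accumulated moving statistics.

```python
import math

def batchnorm_1d(x, moving_mean, moving_var, training, eps=1e-5):
    """Normalize a list of scalars for a single channel (sketch only)."""
    if training:
        # Training mode: statistics come from the batch itself.
        mean = sum(x) / len(x)
        var = sum((v - mean) ** 2 for v in x) / len(x)
    else:
        # Inference mode: statistics come from the moving (global) averages.
        mean, var = moving_mean, moving_var
    return [(v - mean) / math.sqrt(var + eps) for v in x]

batch = [1.0, 2.0, 3.0, 4.0]
train_out = batchnorm_1d(batch, moving_mean=0.0, moving_var=1.0, training=True)
infer_out = batchnorm_1d(batch, moving_mean=0.0, moving_var=1.0, training=False)
# The two outputs differ unless the moving statistics happen to match the
# batch statistics, which is exactly the discrepancy reported in this thread.
```

With a small dataset, the moving averages may never converge to statistics representative of inference-time inputs, making this gap especially visible.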
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368709872
@thbupt I found that in some small-dataset training tasks, such as segmentation,
the inference result is worse than the training result when using BatchNorm.
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368320260
@szha @tornadomeet When training with use_global_stats=True, it seems that
moving_mean = 0 and moving_var = 1 everywhere in the trained model, is
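One plausible explanation for the all-default moving statistics is that the moving averages are initialized to mean 0 and variance 1 and are only updated when the layer computes batch statistics, which use_global_stats=True skips. The sketch below (plain Python, not the MXNet source; `MovingStats` and its momentum default are hypothetical names chosen for illustration) shows this behavior under that assumption.

```python
class MovingStats:
    """Toy stand-in for BatchNorm's running statistics."""
    def __init__(self, momentum=0.9):
        self.mean = 0.0   # default initialization of moving_mean
        self.var = 1.0    # default initialization of moving_var
        self.momentum = momentum

    def update(self, batch_mean, batch_var, use_global_stats):
        if use_global_stats:
            # Batch statistics are never used, so no update is performed
            # and the defaults survive training unchanged.
            return
        self.mean = self.momentum * self.mean + (1 - self.momentum) * batch_mean
        self.var = self.momentum * self.var + (1 - self.momentum) * batch_var

stats = MovingStats()
for _ in range(100):
    stats.update(batch_mean=5.0, batch_var=2.0, use_global_stats=True)
# stats.mean stays 0.0 and stats.var stays 1.0: exactly what was observed
# in the trained model when use_global_stats=True.
```

If this is the cause, then a model trained this way has no meaningful global statistics to fall back on at inference time, so enabling use_global_stats only makes sense when the moving statistics were already populated (e.g. loaded from a pretrained model).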