NRauschmayr opened a new issue #14091: BatchNorm error when use_global_stats=True
URL: https://github.com/apache/incubator-mxnet/issues/14091
 
 
   
   
   ## Description
   BatchNorm crashes with the following error when `use_global_stats=True`:
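
   For context, a toy NumPy sketch (not the MXNet implementation) of what `use_global_stats` changes: with `use_global_stats=False`, BatchNorm normalizes each batch with its own statistics during training; with `use_global_stats=True`, it always uses the stored running statistics, even inside `autograd.record()`:

   ```python
   import numpy as np

   # Toy illustration only; names like running_mean/running_var are stand-ins
   # for the layer's accumulated moving statistics.
   x = np.random.randn(64, 3).astype(np.float32)
   running_mean = np.zeros(3, dtype=np.float32)
   running_var = np.ones(3, dtype=np.float32)
   eps = 1e-5

   # use_global_stats=False (default training behavior): batch statistics
   y_batch = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

   # use_global_stats=True: running statistics, even in training mode
   y_global = (x - running_mean) / np.sqrt(running_var + eps)
   ```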
   
   ## Environment info (Required)
   ```
   ----------Python Info----------
   Version      : 3.6.5
   Compiler     : GCC 7.2.0
   Build        : ('default', 'Apr 29 2018 16:14:56')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 10.0.1
   Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.3.1
   Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
   Commit Hash   : 19c501680183237d52a862e6ae1dc4ddc296305b
   ----------System Info----------
   Platform     : Linux-4.4.0-1072-aws-x86_64-with-debian-stretch-sid
   system       : Linux
   node         : ip-172-31-24-131
   release      : 4.4.0-1072-aws
   version      : #82-Ubuntu SMP Fri Nov 2 15:00:21 UTC 2018
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                4
   On-line CPU(s) list:   0-3
   Thread(s) per core:    2
   Core(s) per socket:    2
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:              1
   CPU MHz:               2699.894
   CPU max MHz:           3000.0000
   CPU min MHz:           1200.0000
   BogoMIPS:              4600.10
   Hypervisor vendor:     Xen
   Virtualization type:   full
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              46080K
   NUMA node0 CPU(s):     0-3
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
   ```
   
   ## Error Message:
   ```
   Traceback (most recent call last):
     File "test.py", line 37, in <module>
       print (mx.nd.mean(loss).asscalar())
      File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 1990, in asscalar
        return self.asnumpy()[0]
      File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 1972, in asnumpy
        ctypes.c_size_t(data.size)))
      File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/base.py", line 251, in check_call
        raise MXNetError(py_str(_LIB.MXGetLastError()))
    mxnet.base.MXNetError: [00:59:26] /home/ubuntu/incubator-mxnet/src/operator/nn/batch_norm.cu:572: Check failed: err == cudaSuccess (7 vs. 0) Name: BatchNormalizationBackward ErrStr:too many resources requested for launch
   ```
   ## Minimal reproducible example
   ```python
   from __future__ import print_function
   import mxnet as mx
   from mxnet import nd, autograd
   from mxnet import gluon
   import numpy as np
   
   ctx = mx.gpu()
   batch_size = 64
   
   def transform(data, label):
       return nd.transpose(data.astype(np.float32), (2,0,1))/255, label.astype(np.float32)

   train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform), batch_size, shuffle=True)
   
   net = gluon.nn.Sequential()
   with net.name_scope():
       net.add(gluon.nn.Conv2D(channels=3, kernel_size=5))
       net.add(gluon.nn.BatchNorm(use_global_stats=True))
       net.add(gluon.nn.Activation(activation='relu'))
       net.add(gluon.nn.Dense(10))
   
   net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
   softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
   trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
   
   for e in range(5):
       for i, (data, label) in enumerate(train_data):
           data = data.as_in_context(ctx)
           label = label.as_in_context(ctx)
           with autograd.record(train_mode=True):
               output = net(data)
               loss = softmax_cross_entropy(output, label)
           loss.backward()
           trainer.step(data.shape[0])
           print(mx.nd.mean(loss).asscalar())
   ```
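
   A smaller sketch that isolates just the BatchNorm backward pass may help triage. Shapes are chosen to match the Conv2D output in the full example (28x28 MNIST through a 5x5 kernel gives 24x24, 3 channels, batch 64); the crash reported here is GPU-specific, so this falls back to CPU when no GPU is available (assuming `mx.context.num_gpus()` exists in the installed version):

   ```python
   import mxnet as mx
   from mxnet import autograd, gluon

   # The reported crash is GPU-only; fall back to CPU elsewhere.
   ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()

   bn = gluon.nn.BatchNorm(use_global_stats=True)
   bn.initialize(ctx=ctx)

   x = mx.nd.random.uniform(shape=(64, 3, 24, 24), ctx=ctx)
   with autograd.record():
       y = bn(x)
   y.backward()
   mx.nd.waitall()  # MXNet executes asynchronously; force the error to surface here
   ```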
   
   
   
