Ishitori opened a new issue #12662: Memory leak when passing images of different dimensions with MXNET_CUDNN_AUTOTUNE_DEFAULT
URL: https://github.com/apache/incubator-mxnet/issues/12662
 
 
   ## Description
   I have noticed that if I use MXNET_CUDNN_AUTOTUNE_DEFAULT=1 with large image dimensions (1900x1900), then after a forward pass a lot of GPU memory is consumed and never released. Autotune then hits an out-of-memory exception if I pass a second image that is also large but has different dimensions (1800x1800).
   
   The second image's dimensions are smaller, so my assumption was that since 1900x1900 could be processed, 1800x1800 should be processable as well, because it needs less memory. That turns out not to be the case: after the first image is processed, some of the GPU memory is never released.
   
   The main question for me is: why is GPU memory not released once the first image is processed? Something seems to be holding onto it. I suspect either a memory leak or some sort of cache that is never evicted.
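   
   For illustration, the behavior I suspect can be modeled as a per-shape workspace cache with no eviction. This is a hypothetical toy sketch in plain Python, not MXNet's actual autotune code:
   
```
# Toy model of the suspected behavior: a workspace cache keyed by input
# shape that is never evicted, so every new shape adds another held
# allocation. (Hypothetical illustration only.)
workspace_cache = {}

def fake_autotune(height, width, bytes_per_pixel=4):
    key = (height, width)
    if key not in workspace_cache:
        # Simulated workspace "allocation" for this shape
        workspace_cache[key] = height * width * bytes_per_pixel
    return workspace_cache[key]

fake_autotune(1900, 1900)
fake_autotune(1800, 1800)
# Both workspaces remain held; nothing was freed between calls.
held = sum(workspace_cache.values())
```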
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   Version      : 3.6.4
   Compiler     : GCC 7.2.0
   Build        : ('default', 'Jan 16 2018 18:10:19')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 9.0.1
   Directory    : /home/ubuntu/anaconda3/lib/python3.6/site-packages/pip
   ----------MXNet Info-----------
   /home/ubuntu/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
     from ._conv import register_converters as _register_converters
   Version      : 1.3.0
   Directory    : /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet
   Commit Hash   : b3be92f4a48bce62a5a8424271871c2f81c8f7f1
   ----------System Info----------
   Platform     : Linux-4.4.0-1066-aws-x86_64-with-debian-stretch-sid
   system       : Linux
   node         : ip-172-31-22-61
   release      : 4.4.0-1066-aws
   version      : #76-Ubuntu SMP Thu Aug 16 16:21:21 UTC 2018
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                32
   On-line CPU(s) list:   0-31
   Thread(s) per core:    2
   Core(s) per socket:    16
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:              1
   CPU MHz:               2699.984
   CPU max MHz:           3000.0000
   CPU min MHz:           1200.0000
   BogoMIPS:              4600.07
   Hypervisor vendor:     Xen
   Virtualization type:   full
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              46080K
   NUMA node0 CPU(s):     0-31
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0024 sec, LOAD: 0.5097 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0004 sec, LOAD: 0.3562 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0003 sec, LOAD: 0.3587 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0003 sec, LOAD: 0.1460 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0023 sec, LOAD: 0.0745 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0003 sec, LOAD: 0.0240 sec.
   ```
   
   Package used (Python/R/Scala/Julia):
   Python
   
   ## Error Message:
   ```
   Traceback (most recent call last):
     File "main.py", line 71, in <module>
       print(transform_fn(net, args.b, args.h, args.w))
     File "main.py", line 54, in transform_fn
       data_out = net(data_in).asnumpy()
     File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 1972, in asnumpy
       ctypes.c_size_t(data.size)))
     File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/base.py", line 252, in check_call
       raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [02:19:57] src/operator/nn/./cudnn/cudnn_convolution-inl.h:870: Failed to find any forward convolution algorithm.  with workspace size of 1073741824 bytes, please consider reducing batch/model size or increasing the workspace size
   
   Stack trace returned 10 entries:
   [bt] (0) /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x36161a) [0x7f3ddb36761a]
   ...
   ```
   ## Minimum reproducible example
   ```
   import os
   import argparse
   
   #os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = "0"
   
   from mxnet import nd, gluon
   import mxnet as mx
   from mxnet.gluon import nn
   import json
   
   ctx = mx.gpu(0)
   
   def create_model():
       net = gluon.nn.HybridSequential()
       with net.name_scope():
           net.add(nn.Conv2D(64, 5))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(64, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(96, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(96, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(128, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(128, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(256, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(128, 3))
           net.add(nn.LeakyReLU(0.1))
           net.add(nn.Conv2D(1, 3))
       return net
   
   
   def model_fn():
       net = create_model()
       net.hybridize()
       net.initialize(mx.init.Normal(sigma=0.01), ctx=ctx)
       return net
   
   
   def transform_fn(net, batch_size=1, height=500, width=500):
        data_in = nd.random_uniform(low=0, high=255, shape=(batch_size, 3, height, width), ctx=ctx, dtype="float32")
   
       data_out = net(data_in).asnumpy()
       return data_out
   
    parser = argparse.ArgumentParser(description='Memory consumption checker')
    parser.add_argument('--h', type=int, default=500, help='Height of an image, default 500')
    parser.add_argument('--w', type=int, default=500, help='Width of an image, default 500')
    parser.add_argument('--b', type=int, default=1, help='Batch size, default 1')
   args = parser.parse_args()
   print(args)
   
   net = model_fn()
   mx.nd.waitall()
   
    while True:
        args.h = int(input("Height: "))
        args.w = int(input("Width: "))
        print(transform_fn(net, args.b, args.h, args.w))
   ```
   
   ## Steps to reproduce
   
   1. Run the script on a p2 instance
   2. At the first prompt, enter image dimensions 1900 and 1900
   3. Note the amount of GPU memory in use once the forward pass is done and the second prompt is displayed (in my case it was 5508 out of 11441)
   4. Enter image dimensions 1800 and 1800
   5. See the out-of-memory error
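   
   The interactive prompts can also be fed from stdin to reproduce non-interactively (assuming the example above is saved as main.py on a GPU instance; GPU memory can be watched with nvidia-smi in a second terminal):
   
```
# Feed the two image sizes (1900x1900, then 1800x1800) on stdin;
# autotune is explicitly enabled for this run.
printf '1900\n1900\n1800\n1800\n' | MXNET_CUDNN_AUTOTUNE_DEFAULT=1 python main.py
```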
   
   ## What have you tried to solve it?
   Setting MXNET_CUDNN_AUTOTUNE_DEFAULT=0 seems to solve the problem. Inference time increases slightly, but memory appears to be properly reused:
   1. After the first image (1900x1900) is done, consumption is almost the same: 5505/11441
   2. After the second image (1800x1800) is done, consumption reaches 10222/11441, but processing doesn't fail
   3. I can even run a 3rd image (1700x1700); it is processed fine and consumption drops to 4451/11441.
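   
   For reference, the workaround amounts to putting the variable into the environment before MXNet is imported (equivalently, `MXNET_CUDNN_AUTOTUNE_DEFAULT=0 python main.py` on the command line); as far as I can tell, setting it before the import is the safest point:
   
```
import os

# Disable cuDNN autotune. Set this before `import mxnet` so the backend
# is guaranteed to see it.
os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = "0"

# import mxnet as mx  # import only after the variable is set
```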
