ThomasDelteil commented on issue #11071: GPU memory usage for VGG16 prediction
URL: https://github.com/apache/incubator-mxnet/issues/11071#issuecomment-402814070
 
 
   @rachelmint do you have an update?
   @eric-haibin-lin I see much higher memory allocation than reported here 
using the VGG16 model. (Though the parameter file is indeed ~500MB.) How do we 
explain this memory usage? Do we pre-allocate memory for feature maps?
   
   Using gluon I find that MXNet allocates 1.92GB in memory when loading the 
model.
   When running one image through the network, memory peaks at 2.10GB and then 
goes down to 2.02GB.
   
   Using the Module API, it allocates 1.4GB.
   
   ```python
   import mxnet as mx
   from mxnet import gluon
   
   # load model from model zoo
   net = gluon.model_zoo.vision.vgg16(pretrained=True, ctx=mx.gpu()) #1.9GB
   net(mx.nd.ones((1,3,224,224), mx.gpu())) #2.1GB
   
   # export the parameters
   net.hybridize()
   net(mx.nd.ones((1,3,224,224), mx.gpu()))
   net.export('vgg16')
   
   # Load in symbol
   sym, arg_params, aux_params = mx.model.load_checkpoint('vgg16', 0)
   mod = mx.mod.Module(symbol=sym, context=mx.gpu(0), label_names=None)
   mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))])
   mod.set_params(arg_params, aux_params) # 1.4 GB
   ```
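As a sanity check on the ~500MB parameter file: a back-of-the-envelope count of VGG16's parameters gives roughly 138M floats, i.e. ~528MB at FP32, which matches the file size. So the gap up to 1.4-1.9GB presumably comes from the CUDA context, cuDNN workspaces, and MXNet's memory pool rather than the weights themselves. A minimal sketch of that count (layer widths follow the standard VGG16 "configuration D"; this is plain arithmetic, not an MXNet API):

```python
# Back-of-the-envelope parameter count for VGG16 (configuration "D"),
# to sanity-check the ~500MB parameter file size mentioned above.
# All conv kernels are 3x3; (in_channels, out_channels) per conv layer.

conv_channels = [
    (3, 64), (64, 64),                    # block 1
    (64, 128), (128, 128),                # block 2
    (128, 256), (256, 256), (256, 256),   # block 3
    (256, 512), (512, 512), (512, 512),   # block 4
    (512, 512), (512, 512), (512, 512),   # block 5
]
fc_shapes = [(512 * 7 * 7, 4096), (4096, 4096), (4096, 1000)]

params = 0
for c_in, c_out in conv_channels:
    params += c_in * c_out * 3 * 3 + c_out   # 3x3 weights + bias
for n_in, n_out in fc_shapes:
    params += n_in * n_out + n_out           # dense weights + bias

mb = params * 4 / 1024 ** 2                  # float32 = 4 bytes
print(f"{params:,} parameters, {mb:.0f} MB at FP32")
# -> 138,357,544 parameters, 528 MB at FP32
```

Everything beyond that ~528MB is framework overhead, which is why freeing it requires releasing the pool (or the process), not just the model.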

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services