@jeremiedb VGG16 would require more than 12 GB of GPU memory and VGG19 more than
15 GB with the official MXNet R distribution; these numbers are the GPU memory
footprint at the time of the crash. However, I tried the gc() fix you mentioned
above, and transfer learning now works fine with the GPU memory footprint
holding constant at 6 GB with a batch size of 150. I think it makes sense to add
the gc() fix in model.R to avoid these crashes. Do you have any better
suggestions?
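
To make the suggestion concrete, here is a rough sketch of how the gc() call could be
wired in as an epoch-end callback instead of (or in addition to) patching model.R
directly. The callback follows the mx.callback.* convention of the MXNet R package;
the symbol and iterator names (net, train_iter) are placeholders, not anything from
this issue.

```r
library(mxnet)

# Sketch: run R's garbage collector at the end of every epoch so that finalizers
# on no-longer-referenced NDArrays fire and their GPU memory is actually released.
mx.callback.gc <- function() {
  function(iteration, nbatch, env, verbose = TRUE) {
    gc()          # force garbage collection; frees stale NDArray handles on the GPU
    return(TRUE)  # returning TRUE tells the training loop to continue
  }
}

# Hypothetical usage: 'net' would be e.g. a VGG16 symbol with a new classifier head,
# 'train_iter' a data iterator built with the desired batch size (150 in my case).
model <- mx.model.FeedForward.create(
  symbol             = net,
  X                  = train_iter,
  ctx                = mx.gpu(0),
  num.round          = 10,
  epoch.end.callback = mx.callback.gc()
)
```

Calling gc() from a batch.end.callback would reclaim memory even more aggressively,
at the cost of a small per-batch overhead, which may matter for the larger VGG models.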
