@ankkhedia I'm still having some difficulty wrapping my head around this. The memory 
footprint remains constant on my runs, even on large models (resnet-50 / 101). 

A footprint of 6 GB on vgg19 with a batch size of 150 seems quite low. Have you 
fixed the parameters of all but the last layer? I can fine-tune vgg19 with those 
parameters fixed and a batch size of 8: memory rises to 6 GB at model creation, 
then drops to 3 GB and stays constant, so I cannot reproduce a situation where 
calling gc() in the training loop would help. 

I don't have a better solution for now, but given that adding a gc() call 
impairs training speed, I would be reluctant to add it at this point, especially 
since the issue seems very specific to vgg. 
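
For concreteness, this is roughly where such a call would sit. A toy sketch only: `model` and `data_iter` are hypothetical stand-ins (not MXNet objects), and I use Python's gc.collect() as the analogue of the gc() call discussed here.

```python
import gc

def train(model, data_iter, num_epochs, gc_every_n_batches=0):
    """Toy loop showing where a forced collection would go.

    `model.step(batch)` is assumed to do forward/backward/update.
    """
    for epoch in range(num_epochs):
        for i, batch in enumerate(data_iter):
            model.step(batch)
            # Forcing a collection frees garbage sooner, but every call
            # pauses the loop, which is the training-speed cost above.
            if gc_every_n_batches and i % gc_every_n_batches == 0:
                gc.collect()
```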
