rahul003 commented on issue #9629: do not save gpu memory during fp16 training
URL: 
https://github.com/apache/incubator-mxnet/issues/9629#issuecomment-382485782
 
 
   @tornadomeet / @315386775 Were you both using such low batch sizes? At 
those sizes I imagine the CUDA context overhead might dominate the network's 
GPU memory cost, which would not give a fair picture. I've tried larger batch 
sizes for the same model and did see a decrease in memory used.
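
   For reference, here is a minimal sketch (not taken from this thread) of how 
one might compare the memory footprint at a larger batch size. It assumes a 
Gluon ResNet-50 stands in for the model under discussion, that `nvidia-smi` is 
on the PATH, and that memory is read from the driver rather than from MXNet:

```python
import subprocess
import mxnet as mx

def gpu_mem_used_mib(gpu_id=0):
    """Read current GPU memory usage (MiB) from nvidia-smi (assumed on PATH)."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(gpu_id),
         "--query-gpu=memory.used", "--format=csv,noheader,nounits"])
    return int(out.decode().strip())

def forward_backward(dtype, batch_size=128, gpu_id=0):
    """One forward/backward pass of ResNet-50 at the given precision."""
    ctx = mx.gpu(gpu_id)
    net = mx.gluon.model_zoo.vision.resnet50_v1(pretrained=False)
    net.cast(dtype)                      # cast parameters to fp16 or fp32
    net.initialize(mx.init.Xavier(), ctx=ctx)
    data = mx.nd.random.uniform(shape=(batch_size, 3, 224, 224),
                                dtype=dtype, ctx=ctx)
    with mx.autograd.record():
        out = net(data)
    out.backward()
    mx.nd.waitall()                      # finish all GPU work before measuring
    return gpu_mem_used_mib(gpu_id)

if __name__ == "__main__":
    import sys
    dtype = sys.argv[1] if len(sys.argv) > 1 else "float32"
    print(dtype, "used MiB:", forward_backward(dtype))
```

   Run each dtype in a separate process (e.g. once with `float32`, once with 
`float16`), since MXNet's pooled allocator does not return memory to the 
driver and a second run in the same process would report inflated numbers.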

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
