cjolivier01 commented on a change in pull request #7795: Fix build-server test running out of GPU memory (CUDNN allocation failure)
URL: https://github.com/apache/incubator-mxnet/pull/7795#discussion_r137665636
 
 

 ##########
 File path: tools/caffe_converter/test_converter.py
 ##########
 @@ -89,18 +89,20 @@ def main():
     args = parser.parse_args()
     if args.cpu:
         gpus = [-1]
-        batch_size = 32
+        default_batch_size = 32
     else:
         gpus = mx.test_utils.list_gpus()
         assert gpus, 'At least one GPU is needed to run test_converter in GPU mode'
-        batch_size = 32 * len(gpus)
+        default_batch_size = 32 * len(gpus)
 
     models = ['bvlc_googlenet', 'vgg-16', 'resnet-50']
 
     val = download_data()
     for m in models:
         test_model_weights_and_outputs(m, args.image_url, gpus[0])
-        test_imagenet_model_performance(m, val, gpus, batch_size)
+        # Build/testing machines tend to be short on GPU memory
+        this_batch_size = default_batch_size / 4 if m == 'vgg-16' else default_batch_size
+        test_imagenet_model_performance(m, val, gpus, this_batch_size)
 
 Review comment:
   1) No
  2) Maybe, but this isn't a memory test. The dataset for this particular test is very large.
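
A standalone sketch of the batch-size override introduced in the diff. The function name `pick_batch_size` is illustrative, not from the PR; note that the sketch uses floor division (`//`) so the batch size stays an integer under Python 3, where the diff's plain `/` would produce a float.

```python
def pick_batch_size(model_name, default_batch_size):
    """Quarter the batch size for memory-hungry models (vgg-16 in the PR),
    since build/testing machines tend to be short on GPU memory.

    Floor division keeps the result an int under Python 3.
    """
    if model_name == 'vgg-16':
        return default_batch_size // 4
    return default_batch_size


# Example: with 2 GPUs, default_batch_size = 32 * 2 = 64,
# so vgg-16 runs with a batch size of 16.
print(pick_batch_size('vgg-16', 64))        # 16
print(pick_batch_size('resnet-50', 64))     # 64
```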
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
