sandeep-krishnamurthy commented on a change in pull request #15226: [Opperf] Make module/namespace of the operator parameterized
URL: https://github.com/apache/incubator-mxnet/pull/15226#discussion_r298688417
 
 

 ##########
 File path: benchmark/opperf/nd_operations/binary_operators.py
 ##########
 @@ -38,7 +38,7 @@
     get_all_elemen_wise_binary_operators
 
 
-def run_mx_binary_broadcast_operators_benchmarks(ctx=mx.cpu(), dtype='float32', warmup=10, runs=50):
+def run_mx_binary_broadcast_operators_benchmarks(ctx=mx.cpu(), dtype='float32', warmup=25, runs=100):
 
 Review comment:
   For now yes, but I see this differing by operator category. For example, for the Convolution operator the variance is close to 0 within 25 warmup iterations and 50 runs, while for simple ops like log/tan we need 100+ runs to reduce the variance. So I made it configurable per operator category.
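
The idea above can be sketched as a generic benchmark loop with per-category warmup/runs defaults. This is a hypothetical illustration, not MXNet's actual opperf helper; the function and category names are invented, and it uses `math.log` over a list as a stand-in for a cheap element-wise op:

```python
# Hypothetical sketch (not MXNet's opperf code): time an op after a
# warmup phase and report mean/variance, with per-category defaults so
# cheap element-wise ops get more runs than heavy ops like Convolution.
import math
import statistics
import time


def benchmark(op, warmup=10, runs=50):
    """Run `op` untimed `warmup` times, then time `runs` calls.

    Returns (mean, variance) of per-call wall time in seconds.
    """
    for _ in range(warmup):  # warm caches/allocators; results discarded
        op()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.variance(samples)


# Per-category defaults, as the comment suggests: simple element-wise
# ops need more runs than Convolution to drive variance down.
CATEGORY_DEFAULTS = {
    "convolution": {"warmup": 25, "runs": 50},
    "elementwise": {"warmup": 25, "runs": 100},
}

mean, var = benchmark(
    lambda: [math.log(x) for x in range(1, 1000)],  # stand-in cheap op
    **CATEGORY_DEFAULTS["elementwise"],
)
```

With this shape, each `run_*_benchmarks` entry point can simply forward its category's defaults while still letting callers override `warmup` and `runs` explicitly.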

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.