ChaiBapchya commented on a change in pull request #17642: [OpPerf] Fixed Python 
profiler bug
URL: https://github.com/apache/incubator-mxnet/pull/17642#discussion_r382350528
 
 

 ##########
 File path: benchmark/opperf/utils/profiler_utils.py
 ##########
 @@ -248,12 +248,11 @@ def python_profile(func):
     @functools.wraps(func)
     def python_profile_it(*args, **kwargs):
         runs = args[1]
-        modified_args = (args[0], 1, args[2])
 
 Review comment:
   So the args that are passed to this function are:
   - args[0] = op
   - args[1] = warmup / runs (the number of warmup iterations or the number of measured runs)
   - args[2] = the rest of the args
   
   
https://github.com/apache/incubator-mxnet/blob/9dcf71d8fe33f77ed316a95fcffaf1f7f883ff70/benchmark/opperf/utils/benchmark_utils.py#L114
   
   
https://github.com/apache/incubator-mxnet/blob/9dcf71d8fe33f77ed316a95fcffaf1f7f883ff70/benchmark/opperf/utils/benchmark_utils.py#L121
   
   The way it worked with the native MXNet CPP profiler is that you could pass the runs directly (and it would capture the time for each run along with the mean/max, etc.).
   
   But for Python's timing function, we have to run the `for loop` manually for the number of runs.
   So that's what I did there:
   
   1. We copy the number of runs into a variable and then run the op that many times.
   2. For each run, we use the Python timing function to time that individual call, and then take the average, min, max, etc. over those individual timings (see the sketch below).
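   
   For reference, here is a minimal sketch of the pattern described above. It is not a verbatim copy of profiler_utils.py; the timing call (`time.perf_counter`) and the output field names are assumptions, but the args handling and the manual loop correspond to steps 1 and 2.
   
   ```python
   import functools
   import time
   
   def python_profile(func):
       @functools.wraps(func)
       def python_profile_it(*args, **kwargs):
           # args[0] = op, args[1] = warmup/runs, args[2] = rest of the args
           runs = args[1]
           # Re-pack the args so each inner call executes the op exactly once;
           # the outer loop below supplies the repetition.
           modified_args = (args[0], 1, args[2])
           times = []
           for _ in range(runs):
               start = time.perf_counter()  # assumed timing call
               res = func(*modified_args, **kwargs)
               times.append((time.perf_counter() - start) * 1000)  # ms per run
           # Aggregate the per-run timings (field names here are illustrative).
           profiler_output = {"avg_time": sum(times) / runs,
                              "min_time": min(times),
                              "max_time": max(times)}
           return res, profiler_output
       return python_profile_it
   ```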
   
   Makes sense? @apeforest 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
