connorgoggins commented on a change in pull request #17449: Implemented large tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r378430775
 
 

 ##########
 File path: benchmark/opperf/nd_operations/gemm_operators.py
 ##########
 @@ -55,33 +57,62 @@ def run_gemm_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='nativ
 
     """
     # Benchmark tests for dot and batch_dot operators
-    dot_benchmark_res = run_performance_test(
-        [getattr(MX_OP_MODULE, "dot")], run_backward=True,
-        dtype=dtype, ctx=ctx,
-        inputs=[{"lhs": (1024, 1024),
-                 "rhs": (1024, 1024)},
-                {"lhs": (1000, 10),
-                 "rhs": (1000, 10),
-                 "transpose_b": True},
-                {"lhs": (1000, 1),
-                 "rhs": (100, 1000),
-                 "transpose_a": True,
-                 "transpose_b": True}],
-        warmup=warmup, runs=runs, profiler=profiler)
+    if large_tensor == "on":
 
 Review comment:
  The purpose of this flag isn't to support user-specified shapes; it's for general category and full-suite testing of operator performance on input data with dimensions >= 2^32. If a user wanted to test individual operators with custom shapes, they would call `run_performance_test()` and pass their custom data as input. They wouldn't use the flag in that case, since `run_performance_test()` doesn't accept a `large_tensor` argument.
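
  As a sketch of that individual-operator workflow, the call below mirrors the `run_performance_test()` pattern from the removed lines in the diff above. The import path is assumed from the opperf layout (`benchmark/opperf/utils/benchmark_utils.py`), and the shapes and run counts are illustrative, not taken from the PR.

```python
import mxnet as mx
from mxnet import nd  # the diff resolves ops via getattr(MX_OP_MODULE, "dot")
# Assumed location of run_performance_test() in the opperf utilities.
from benchmark.opperf.utils.benchmark_utils import run_performance_test

# Benchmark the dot operator on user-chosen shapes; note there is no
# large_tensor argument here -- custom shapes go straight into `inputs`.
custom_dot_res = run_performance_test(
    [nd.dot], run_backward=True,
    dtype='float32', ctx=mx.cpu(),
    inputs=[{"lhs": (1024, 1024),
             "rhs": (1024, 1024)}],
    warmup=10, runs=25, profiler='native')
print(custom_dot_res)
```

  The flag instead belongs on the suite-level runners: per the `if large_tensor == "on":` gate in the diff, a call like `run_gemm_operators_benchmarks(..., large_tensor="on")` would switch the whole gemm category to inputs sized for dimensions >= 2^32.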
