This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new cc6c649  [OpPerf] Add example of using opperf with internal op locally (#18324)
cc6c649 is described below

commit cc6c64909afd78c6b5b63ee1215922e8da589c20
Author: Chaitanya Prakash Bapat <chai.ba...@gmail.com>
AuthorDate: Mon Jun 15 08:55:14 2020 -0700

    [OpPerf] Add example of using opperf with internal op locally (#18324)
    
    * add example of using opperf with internal op locally
    
    * split diff to old and new code for readability
    
    * mx.nd.copyto doesnt exist & website title shows ndarray instead of symbol
    
    * Revert "mx.nd.copyto doesnt exist & website title shows ndarray instead of symbol"
    
    This reverts commit 118b0900a58586aca84ec5c853d00cf687615853.
---
 benchmark/opperf/README.md                         | 40 ++++++++++++++++++++++
 .../src/pages/api/r/docs/tutorials/symbol.md       |  2 +-
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/benchmark/opperf/README.md b/benchmark/opperf/README.md
index 757ddc1..4935ea7 100644
--- a/benchmark/opperf/README.md
+++ b/benchmark/opperf/README.md
@@ -167,6 +167,45 @@ Output for the above benchmark run, on a CPU machine, would look something like
              ]}
 
 ```
+
+## Usecase 5 - Profile internal operators locally
+Currently, opperf supports operators in the `mx.nd.*` namespace.
+However, one can locally profile internal operators in the `mx.nd._internal.*` namespace.
+
+#### Changes
+Remove the `hasattr` check that requires `op.__name__` to exist in `mx.nd`.
+
+The resulting diff would look like:
+##### Old Code
+```
+-        if hasattr(mx.nd, op.__name__):
+-            benchmark_result = _run_nd_operator_performance_test(op, inputs, run_backward, warmup, runs, kwargs_list, profiler)
+-        else:
+-            raise ValueError("Unknown NDArray operator provided to benchmark. -  ", op.__name__)
+```
+##### New Code
+```
++        #if hasattr(mx.nd, op.__name__):
++        benchmark_result = _run_nd_operator_performance_test(op, inputs, run_backward, warmup, runs, kwargs_list, profiler)
++        #else:
++            #raise ValueError("Unknown NDArray operator provided to benchmark. -  ", op.__name__)
+```
+
+#### Result
+This should allow profiling of any operator in MXNet, provided the user passes valid parameters (`inputs`, `run_backward`, etc.) to the `run_performance_test` function.
+
+#### Example
+Provided the source code change is made in `benchmark/opperf/utils/benchmark_utils.py`:
+```
+>>> import mxnet as mx
+>>> from mxnet import nd
+>>> from benchmark.opperf.utils.benchmark_utils import run_performance_test
+>>> run_performance_test(mx.nd._internal._copyto,inputs=[{"data":mx.nd.array([1,2]),"out":mx.nd.empty(shape=mx.nd.array([1,2]).shape,ctx=mx.cpu())}])
+INFO:root:Begin Benchmark - _copyto
+INFO:root:Complete Benchmark - _copyto
+[{'_copyto': [{'inputs': {'data': '<NDArray 2 @cpu(0)>', 'out': '<NDArray 2 @cpu(0)>'}, 'max_storage_mem_alloc_cpu/0': 0.004}]}]
+```
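+
+The same entry point accepts the usual knobs; a sketch for a quicker local check (parameter names `run_backward`, `warmup`, and `runs` as used elsewhere in this README):
+```
+>>> run_performance_test(mx.nd._internal._copyto,
+...                      run_backward=False, warmup=5, runs=10,
+...                      inputs=[{"data": mx.nd.array([1, 2]),
+...                               "out": mx.nd.empty(shape=(2,), ctx=mx.cpu())}])
+```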
+
 # How does it work under the hood?
 
 Under the hood, executes NDArray operator using randomly generated data. Use MXNet profiler to get summary of the operator execution:
@@ -197,6 +236,7 @@ add_res = run_performance_test([nd.add, nd.subtract], run_backward=True, dtype='
 ```
 By default, MXNet profiler is used as the profiler engine.
 
+
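+To use a different engine, `run_performance_test` takes a `profiler` argument (it is threaded through `_run_nd_operator_performance_test` in the snippet above). A hedged sketch, assuming `'python'` selects a pure-Python timer; check `benchmark_utils.py` for the values your checkout supports:
+```
+>>> import mxnet as mx
+>>> from mxnet import nd
+>>> from benchmark.opperf.utils.benchmark_utils import run_performance_test
+>>> # time nd.add with the Python-timer engine instead of the MXNet profiler
+>>> run_performance_test(nd.add, profiler='python',
+...                      inputs=[{"lhs": (1024, 1024), "rhs": (1024, 1024)}])
+```
+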
 # TODO
 
 All contributions are welcome. Below is the list of desired features:
diff --git a/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md b/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
index a02cc70..a32a911 100644
--- a/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
+++ b/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
@@ -1,6 +1,6 @@
 ---
 layout: page_api
-title: Symbol
+title: NDArray
 is_tutorial: true
 tag: r
 permalink: /api/r/docs/tutorials/symbol
