aaronmarkham commented on a change in pull request #17486: Update CustomOp doc with changes for GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17486#discussion_r373216396
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -28,26 +28,39 @@ Custom operators (CustomOp) enable users to write new operators without compilin
 
 ### Have MXNet Ready
 
-First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+Custom operator support was recently merged (#15921, #17270) and is not yet available in a released version of MXNet. It will be part of the forthcoming 1.7 release; until then, please install MXNet by compiling from source or downloading one of the nightly builds. For running the example below, it doesn’t matter whether it is a CUDA, MKLDNN, or vanilla build; the custom operator doesn’t interact with the execution of other native MXNet operators. Note that if you want to run the GPU examples or write custom operators that run on GPU, you still need an MXNet CUDA build.
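
For reference, here is a minimal sketch (an illustration, not part of this change) of how you might confirm which build you have installed; `num_gpus()` reports 0 on a CPU-only build:

```python
import mxnet as mx

# confirm the installed version (a nightly or source build of the upcoming 1.7)
print(mx.__version__)

# 0 means a CPU-only build; the GPU examples require a CUDA build
print(mx.context.num_gpus())
```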
 
-### Run An Example:
+### Run An Example
 
-You can start getting familiar with custom operators by running some examples provided in the **example/extensions/lib_custom_op** directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to `lib_custom_op` directory and follow these steps:
+You can start getting familiar with custom operators by running some examples provided in the `example/extensions/lib_custom_op` directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to the `lib_custom_op` directory and follow these steps:
 
 1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
-2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, print "Found x operators", then invoke the operator like a regular MXNet operator and output the result.
+2. Run `python test_gemm.py`. It’ll first load the library compiled in step 1, find the operators, register them in the MXNet backend, then invoke the operator like a regular MXNet operator and output the result (a rough Python sketch of this step follows the sample output below).
+```
+[19:22:02] ../src/c_api/c_api.cc:286: Found 2 operators in library
+[19:22:02] ../src/c_api/c_api.cc:350:  Op[0] my_gemm
+[19:22:02] ../src/c_api/c_api.cc:350:  Op[1] state_gemm
+[19:22:02] ../src/c_api/c_api.cc:785: Found 0 partitioners in library
+--------start ndarray compute---------
+[[ 50.]
+ [122.]]
+<NDArray 2x1 @cpu(0)>
+...
+```
+
+Note that you can safely ignore the `Found 0 partitioners` message, as it is not related to the custom operator.
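
For context, here is a minimal sketch of roughly what `test_gemm.py` does, assuming the library is loaded with `mx.library.load` and the registered operator is exposed as `mx.nd.my_gemm` (the input values below are illustrative and happen to reproduce the 2x1 result shown above):

```python
import os
import mxnet as mx

# load the compiled library (an absolute path is safest); this registers
# 'my_gemm' and 'state_gemm' with the MXNet backend, producing the
# "Found 2 operators" message shown above
mx.library.load(os.path.abspath('libgemm_lib.so'))

# once registered, the custom op is invoked like any other NDArray operator;
# 2x3 times 3x1 gives the 2x1 result printed above
a = mx.nd.array([[1, 2, 3], [4, 5, 6]])
b = mx.nd.array([[7], [8], [9]])
print(mx.nd.my_gemm(a, b))
```

After the library has been loaded once, the custom operator behaves like any regular MXNet operator for the rest of the process.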
 
-### Basic Files For Gemm Library:
+### Basic Files For Gemm Library
 
 Review comment:
   ```suggestion
   ### Basic Files For a Gemm Library
   ```
