aaronmarkham commented on a change in pull request #17241: Add CustomOp
tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110828
##########
File path: example/extensions/lib_custom_op/README.md
##########
@@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First, you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn't matter whether the build comes with CUDA or MKLDNN; the custom operator doesn't interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running the examples we provide in the **example/extensions/lib_custom_op** directory. Let's start with gemm (Generalized Matrix Multiplication), a common linear algebra operator. Go to that directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load, and it contains everything needed for the custom gemm operator.
+2. Run `python test_gemm.py`. It first loads the above .so library, finds the operators, registers them in the MXNet backend, and prints "Found x operators"; it then invokes the operator like a regular MXNet operator and outputs the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file contains the source code for all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compiles the source code into a dynamic shared library, using the header file **include/mxnet/lib_api.h** from the MXNet source code. Currently custom operators are compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load('libgemm_lib.so')` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom
operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies the number of input and output tensors for the custom operator; this is also where a custom operator can validate the attributes (i.e. options) specified by the user.
+
+        MXReturnValue parseAttrs(
+            std::map<std::string, std::string> attrs,
+            int* num_in,
+            int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how the custom operator infers output data types from the input data types.
+
+ MXReturnValue inferType(
+ std::map<std::string, std::string> attrs,
+ std::vector<int> &intypes,
+ std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how the custom operator infers output tensor shapes from the input shapes.
+
+ MXReturnValue inferShape(
+ std::map<std::string, std::string> attrs,
+ std::vector<std::vector<unsigned int>> &inshapes,
+ std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of the forward pass of the operator.
+
+ MXReturnValue forward(
+ std::map<std::string, std::string> attrs,
+ std::vector<MXTensor> inputs,
+ std::vector<MXTensor> outputs,
+ OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under its name, and you need to call the setters to bind the above functions to the registered operator. A minimal sketch that puts these pieces together follows this list.
+
+ REGISTER_OP(my_op_name)
+ .setForward(forward)
+ .setParseAttrs(parseAttrs)
+ .setInferType(inferType)
+ .setInferShape(inferShape);
+
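+To make the pieces above concrete, here is a minimal, hypothetical sketch of a single-input, single-output element-wise operator (it squares each element; it is not the gemm example itself). The function signatures follow the ones listed above; the `MXTensor::data<float>()` and `shape` accessors, the `MX_SUCCESS` return value, and the operator name `my_square` are assumptions made for illustration, so check **include/mxnet/lib_api.h** and **gemm_lib.cc** for the exact API:
+
+    #include <cstdint>
+    #include <map>
+    #include <string>
+    #include <vector>
+    #include "lib_api.h"
+
+    // one input tensor, one output tensor, no attributes to validate
+    MXReturnValue parseAttrs(std::map<std::string, std::string> attrs,
+                             int* num_in, int* num_out) {
+      *num_in = 1;
+      *num_out = 1;
+      return MX_SUCCESS;
+    }
+
+    // the output dtype is the same as the input dtype
+    // (outtypes is assumed to be pre-sized to the number of outputs)
+    MXReturnValue inferType(std::map<std::string, std::string> attrs,
+                            std::vector<int> &intypes,
+                            std::vector<int> &outtypes) {
+      outtypes[0] = intypes[0];
+      return MX_SUCCESS;
+    }
+
+    // the output shape is the same as the input shape
+    MXReturnValue inferShape(std::map<std::string, std::string> attrs,
+                             std::vector<std::vector<unsigned int>> &inshapes,
+                             std::vector<std::vector<unsigned int>> &outshapes) {
+      outshapes[0] = inshapes[0];
+      return MX_SUCCESS;
+    }
+
+    // element-wise square; for brevity this assumes float32 data and
+    // uses data<float>() and shape, which are assumed MXTensor accessors
+    MXReturnValue forward(std::map<std::string, std::string> attrs,
+                          std::vector<MXTensor> inputs,
+                          std::vector<MXTensor> outputs,
+                          OpResource res) {
+      float* in  = inputs[0].data<float>();
+      float* out = outputs[0].data<float>();
+      int64_t num_elem = 1;
+      for (auto dim : inputs[0].shape) num_elem *= dim;
+      for (int64_t i = 0; i < num_elem; i++)
+        out[i] = in[i] * in[i];
+      return MX_SUCCESS;
+    }
+
+    REGISTER_OP(my_square)
+    .setParseAttrs(parseAttrs)
+    .setInferType(inferType)
+    .setInferShape(inferShape)
+    .setForward(forward);
+
+Compiled into a shared library in the same way as **libgemm_lib.so**, such an operator could then be loaded with `mx.library.load(...)` and invoked like any other MXNet operator.
+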
+There are also some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+ * This function specifies the computation of backward pass of the operator.
Review comment:
```suggestion
    * This function specifies the computation of the backward pass of the operator.
```