rondogency commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411884892
##########
File path: CMakeLists.txt
##########
@@ -726,18 +726,39 @@ endif()
# extension libraries (custom operators, custom subgraphs) are built by default
add_library(customop_lib SHARED
${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/gemm_lib.cc)
+add_library(transposecsr_lib SHARED
+${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposecsr_lib.cc)
+add_library(transposerowsp_lib SHARED
+${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposerowsp_lib.cc)
add_library(subgraph_lib SHARED
${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_subgraph/subgraph_lib.cc)
+add_library(pass_lib SHARED
+${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_pass/pass_lib.cc)
target_include_directories(customop_lib PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposecsr_lib PUBLIC
+${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposerowsp_lib PUBLIC
+${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
target_include_directories(subgraph_lib PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(pass_lib PUBLIC
+${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
if(USE_CUDA)
add_library(customop_gpu_lib SHARED
${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/relu_lib.cu)
target_include_directories(customop_gpu_lib PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
endif()
-if(MSVC)
+if(UNIX)
Review comment:
I mean you don't need to add -shared even for customop_gpu_lib; just
change the lines inside the MSVC block.
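
A rough sketch of what this suggestion could look like. The body of the original if(MSVC) block is not shown in this diff, so the property set below is only an illustrative placeholder; the target names come from the hunk above:

```cmake
# Sketch of the reviewer's suggestion (assumption: the real contents of the
# original if(MSVC) block are not visible in this diff). Instead of switching
# the condition to if(UNIX), keep the MSVC block and extend whatever it does
# to the newly added extension targets as well:
if(MSVC)
  foreach(ext_target customop_lib transposecsr_lib transposerowsp_lib
                     subgraph_lib pass_lib)
    # illustrative placeholder for the block's existing per-target settings
    set_target_properties(${ext_target} PROPERTIES PREFIX "lib")
  endforeach()
endif()
```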
##########
File path: include/mxnet/lib_api.h
##########
@@ -22,7 +22,11 @@
* \file lib_api.h
* \brief APIs to interact with libraries
* This API specifies function prototypes to
- * register custom ops for library authors
+ * register custom ops, partitioner, and passes
+ * for library authors
+ * See example/extension/lib_custom_op/README.md
Review comment:
Maybe we can rephrase it to "APIs to write an extension library; see ...
for registering custom operators, ... for custom partitioners, and ... for
custom graph passes".
##########
File path: include/mxnet/lib_api.h
##########
@@ -45,7 +49,7 @@
#endif
/* Make sure to update the version number everytime you make changes */
-#define MX_LIBRARY_VERSION 6
+#define MX_LIBRARY_VERSION 7
Review comment:
I still feel it is better to make it 10.
Also, it would be better to add a message to the version check at c_api.cc
line 339, something like "please update lib_api.h to match the version
supported by the MXNet backend".
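
The suggested check could look something like the following. The function and parameter names here are hypothetical, since c_api.cc is not shown in this thread; only the wording of the error message follows the review comment:

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical sketch of the version check suggested for c_api.cc: if the
// MX_LIBRARY_VERSION compiled into the loaded library does not match the
// version this MXNet backend supports, fail with a message that tells the
// library author exactly what to update.
void CheckLibraryVersion(int lib_version, int backend_version) {
  if (lib_version != backend_version) {
    std::ostringstream msg;
    msg << "Library version (" << lib_version
        << ") does not match the version supported by the MXNet backend ("
        << backend_version << "). Please update lib_api.h to match the "
        << "version supported by the MXNet backend.";
    throw std::runtime_error(msg.str());
  }
}
```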
##########
File path: example/extensions/lib_custom_op/README.md
##########
@@ -22,15 +22,13 @@ C++ Custom Operator Example and Tutorial
Adding new operators in MXNet requires understanding of MXNet backend operator
registration and recompiling of MXNet with all its dependencies. Users can use
the old Python custom operator to add new operators, but it is slow,
complicated and has poor adoption rate. So our approach for adding custom
operators is to enable dynamic loading of C++ custom operators compiled in
external libraries at runtime.
-Custom operators (CustomOp) enable users to write new operators without
-compiling against all of MXNet header files and dependencies. When a library
-containing custom operators is loaded dynamically, the operators found in the
-library will be re-registered in MXNet so that users can call those operators
-natively just like other built-in operators.
+Custom operators (CustomOp) enable users to write new operators without
+compiling against all of MXNet header files and dependencies. When a library
+containing custom operators is loaded dynamically, the operators found in the
+library will be registered in MXNet so that users can call those operators
+natively just like other built-in operators.
## Getting Started
### Have MXNet Ready
-Custom Operator support was merged (#15921, #17270) and is not available in
-versions of MXNet prior to v1.7.0.
-To access the feature now, please install MXNet by compiling from source using
-master or using the previously mentioned commits, downloading one of the
-nightly builds, or from a release of MXNet 1.7.0+.
-For running the following example, it doesn’t matter if it is a CUDA, MKLDNN
-or plain MXNet build; the custom operator doesn’t interact with the execution
-of other native MXNet operators.
+To run the following example, the build type of MXNet doesn’t matter since the
+custom operator doesn’t interact with the execution of other native MXNet
+operators.
Review comment:
It would be better to add the prerequisites here, for example "This
requires GCC > 5 or CUDA > 9 to run the examples".
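
Alongside the prerequisite note, the README could show the build command for the example library. The flags below are an assumption based on the file paths in the diff above, not the example's actual Makefile:

```bash
# Hypothetical build line for the gemm_lib example. GCC 5+ is assumed for
# solid C++11 support; paths follow the repository layout shown above.
g++ -shared -fPIC -std=c++11 \
    example/extensions/lib_custom_op/gemm_lib.cc \
    -o libgemm_lib.so \
    -I include/mxnet
```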
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]