yzhliu commented on a change in pull request #18526:
URL: https://github.com/apache/incubator-mxnet/pull/18526#discussion_r439188172



##########
File path: src/c_api/c_api.cc
##########
@@ -1363,7 +1363,15 @@ int MXGetVersion(int *out) {
 #if MXNET_USE_TVM_OP
 int MXLoadTVMOp(const char *libpath) {
   API_BEGIN();
-  tvm::runtime::TVMOpModule::Get()->Load(libpath);
+  tvm::runtime::TVMOpModule *libpath_module = tvm::runtime::TVMOpModule::Get();

Review comment:
```suggestion
  tvm::runtime::TVMOpModule *global_module = tvm::runtime::TVMOpModule::Get();
```

##########
File path: contrib/tvmop/compile.py
##########
@@ -152,6 +152,12 @@ def get_cuda_arch(arch):
     # we create libtvmop.o first, which gives us chance to link tvm_runtime together with the libtvmop
     # to allow mxnet find external helper functions in libtvm_runtime
     func_binary.save(arguments.target_path + "/libtvmop.o")
+    try:
+        func_binary.imported_modules
+    except NameError:
+        func_binary.imported_modules = []

Review comment:
    From https://github.com/apache/incubator-tvm/blob/master/python/tvm/runtime/module.py#L136 we can see that `func_binary.imported_modules` always exists, so this `try`/`except` guard is unnecessary.
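    As a side note (my own observation, hedged, not taken from the linked TVM source): even if the attribute could be missing, `except NameError` would never catch it, because a missing attribute raises `AttributeError`. A minimal sketch:

    ```python
    # Stand-in for an object whose attribute might be absent;
    # `Empty` and `imported_modules` here are purely illustrative.
    class Empty:
        pass

    obj = Empty()

    caught = None
    try:
        obj.imported_modules  # attribute lookup on a missing attribute
    except NameError:
        caught = "NameError"       # never reached: missing attributes do not raise NameError
    except AttributeError:
        caught = "AttributeError"  # this is the exception that actually fires

    print(caught)  # -> AttributeError
    ```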

##########
File path: src/c_api/c_api.cc
##########
@@ -1363,7 +1363,15 @@ int MXGetVersion(int *out) {
 #if MXNET_USE_TVM_OP
 int MXLoadTVMOp(const char *libpath) {
   API_BEGIN();
-  tvm::runtime::TVMOpModule::Get()->Load(libpath);
+  tvm::runtime::TVMOpModule *libpath_module = tvm::runtime::TVMOpModule::Get();
+  libpath_module->Load(libpath);
+#if MXNET_USE_CUDA
+  std::string libpathstr(libpath);
+  std::string cubinpath = libpathstr.substr(0, libpathstr.size() - 11) + "libtvmop.cubin";

Review comment:
       would be better to pass `libpath` as dir, and do libpath + "libtvmop.so" 
as well to keep consistency.
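    The suggested approach could look like this sketch (in Python for brevity; the actual change would be in the C++ `MXLoadTVMOp`, and the function and directory names here are hypothetical): pass the directory and join the file names onto it, rather than slicing a fixed 11-character suffix off the full path.

    ```python
    import os

    # Hypothetical sketch of the reviewer's suggestion: treat the argument
    # as a directory and join file names onto it, instead of stripping the
    # "libtvmop.so" suffix from a full library path with substr().
    def tvmop_paths(lib_dir):
        so_path = os.path.join(lib_dir, "libtvmop.so")
        cubin_path = os.path.join(lib_dir, "libtvmop.cubin")
        return so_path, cubin_path

    so, cubin = tvmop_paths("/opt/mxnet/lib")
    print(so)     # -> /opt/mxnet/lib/libtvmop.so
    print(cubin)  # -> /opt/mxnet/lib/libtvmop.cubin
    ```

    Both paths are then derived the same way from the same directory, so they cannot drift apart.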




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
