ChaiBapchya commented on pull request #18785:
URL: https://github.com/apache/incubator-mxnet/pull/18785#issuecomment-674978013
@jinboci I saw one of your PRs for fixing TVM Op errors. Any idea why this
test fails when building with TVM=ON?
It's failing for 3 tests: Python3 GPU, Python3 MKLDNN GPU, and Python3
MKLDNN-NoCUDNN GPU.
Common stack trace:
```
test_operator_gpu.test_kernel_error_checking ... terminate called after throwing an instance of 'dmlc::Error'
[2020-08-17T05:59:15.843Z]   what():  [05:59:13] /work/mxnet/3rdparty/tvm/src/runtime/workspace_pool.cc:115: Check failed: allocated_.size() == 1 (3 vs. 1) :
```
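For context, my reading of the check (a guess, not a diagnosis): TVM's workspace pool hands out scratch buffers and expects every one to be returned before the pool is released; `allocated_` keeps a sentinel head entry, so a clean teardown sees size 1. The "3 vs. 1" would mean two workspaces were still live when the pool was torn down, which seems plausible if `test_kernel_error_checking` aborts a kernel mid-run. A toy Python sketch of that invariant (class and method names are mine, not TVM's):

```python
class ToyWorkspacePool:
    """Toy model (NOT TVM code) of the invariant that
    tvm/src/runtime/workspace_pool.cc:115 appears to enforce."""

    def __init__(self):
        # The real pool keeps a sentinel head entry, so an idle pool
        # has allocated_.size() == 1.
        self.allocated = [None]

    def alloc_workspace(self, nbytes):
        buf = bytearray(nbytes)
        self.allocated.append(buf)
        return buf

    def free_workspace(self, buf):
        # For simplicity the toy insists on reverse-allocation order.
        assert self.allocated and self.allocated[-1] is buf, \
            "workspaces must be freed in reverse allocation order"
        self.allocated.pop()

    def release(self):
        # The equivalent of the CHECK that fires in the CI logs:
        # "Check failed: allocated_.size() == 1 (3 vs. 1)"
        assert len(self.allocated) == 1, (
            "Check failed: allocated_.size() == 1 "
            f"({len(self.allocated)} vs. 1)")
```

Allocating two workspaces and calling `release()` without freeing them reproduces the `(3 vs. 1)` shape of the CI message, so a kernel error that skips the free path would explain the trace.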
In CI (`Jenkins_steps.groovy`), the **Python3 GPU** pipeline packs the build artifacts with
```
compile_unix_full_gpu()
utils.pack_lib('gpu', mx_lib_cpp_examples)
```
where
```
mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a,
lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf,
build/libcustomop_lib.so, build/libcustomop_gpu_lib.so,
build/libsubgraph_lib.so, 3rdparty/dmlc-core/libdmlc.a,
3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a,
deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*,
python/mxnet/_cy3/*.so, python/mxnet/_ffi/_cy3/*.so'
```
while the test stage unpacks them with
```
test_unix_python3_gpu()
utils.unpack_and_init('gpu', mx_lib_cython)
```
where `mx_lib_cython` is a subset of `mx_lib_cpp_examples`:
```
mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so,
lib/libtvmop.so, lib/tvmop.conf, build/libcustomop_lib.so,
build/libcustomop_gpu_lib.so, build/libsubgraph_lib.so,
3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a,
python/mxnet/_cy3/*.so, python/mxnet/_ffi/_cy3/*.so'
```
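The subset claim is easy to verify mechanically. A quick Python sanity check, with the two strings copied verbatim from `Jenkins_steps.groovy` above (just an illustration, not CI code):

```python
# Artifact lists copied from Jenkins_steps.groovy.
mx_lib_cpp_examples = (
    'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, '
    'lib/libtvmop.so, lib/tvmop.conf, build/libcustomop_lib.so, '
    'build/libcustomop_gpu_lib.so, build/libsubgraph_lib.so, '
    '3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, '
    '3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, '
    'deps/lib/libzmq.a, build/cpp-package/example/*, '
    'python/mxnet/_cy3/*.so, python/mxnet/_ffi/_cy3/*.so')

mx_lib_cython = (
    'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, '
    'lib/libtvmop.so, lib/tvmop.conf, build/libcustomop_lib.so, '
    'build/libcustomop_gpu_lib.so, build/libsubgraph_lib.so, '
    '3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, '
    'python/mxnet/_cy3/*.so, python/mxnet/_ffi/_cy3/*.so')

# Split the comma-separated lists into path sets and diff them.
pack = {p.strip() for p in mx_lib_cpp_examples.split(',')}
unpack = {p.strip() for p in mx_lib_cython.split(',')}

print(sorted(unpack - pack))  # prints [] => every unpacked path was packed
```

So the TVM libraries (`libtvm_runtime.so`, `libtvmop.so`, `tvmop.conf`) are present in both stages, and the pack/unpack mismatch shouldn't be the cause by itself.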
Based on the stack trace, the TVM runtime's check on the workspace
allocation count (`allocated_.size() == 1`) is failing.
@DickJC123 I see you submitted this test. Any idea why it's troubling
TVM?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]