juda opened a new pull request, #12232:
URL: https://github.com/apache/tvm/pull/12232
This PR solves two issues:
1. Incompatibility of the libstdc++ CXX11 ABI.
Currently, the official PyTorch distribution is built against the old
(pre-CXX11) libstdc++ ABI, whose symbols conflict with the CXX11-ABI
symbols used by TVM. The issue was discussed
[here](https://discuss.tvm.apache.org/t/can-someone-please-give-me-the-steps-to-use-pt-tvmdsoop/12525)
before.
We address this issue by compiling the TVM-facing and Torch-facing code
separately, each with its matching libstdc++ CXX11 ABI. The TVM-related code
([RuntimeModuleWrapperTVM.cc](https://github.com/juda/tvm/pull/4/files#diff-3cb0a7daf9d8032f468f8fda43a9f0a2a94f7c7cd8126c9d6f09b4a805c3c2d0))
is built under the new CXX11 ABI, while the Torch-related code
([RuntimeModuleWrapperTorch.cc](https://github.com/juda/tvm/pull/4/files#diff-7a2704d021b5f88143cedfbc3bb2b24d289c3616112c0c1e1dc61c29f1881e19))
is built under the same CXX11 ABI as the installed PyTorch. The two sides are
linked together through a pure C header
([runtime_bridge.h](https://github.com/juda/tvm/pull/4/files#diff-6b8efa6b4cc714cb50d042cccc4c268cc54cc827d832eda1395885a372cd9b12)).
2. Lack of support for boolean tensors.
Currently, if we call `optimize_torch` with a boolean tensor as input, it
fails because the boolean type is not supported by DLPack
(https://github.com/dmlc/dlpack/issues/75). We want to work around this
since some models use boolean tensors.
We address this issue by extending the DLTensor with an extra `is_bool`
field, which tells us how to convert between NDArray and DLTensor with the
correct type. If the DLTensor is not boolean, the data conversion behaves
exactly as before.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]