leezu edited a comment on issue #18855: URL: https://github.com/apache/incubator-mxnet/issues/18855#issuecomment-670250269
> There are ~50 places in a handful of different files that all need to be changed. This makes me think if this change is too heavy

Why not follow the approach in NumPy and define a macro in one central place? https://github.com/numpy/numpy/pull/15069/files#diff-4538717a0246e7d9363e76a2e3fc835e

> finding 64 openblas is not well supported

You can edit https://github.com/apache/incubator-mxnet/blob/master/cmake/Modules/FindOpenBLAS.cmake. You will need to edit that file in any case, even if you choose not to rely on symbol suffixes, because an ILP64 OpenBLAS would typically be named `libopenblas64` (or `libopenblas64_` when built with suffixes). The mid-term approach is to work with upstream so that we can eventually delete `FindOpenBLAS.cmake` and rely on the upstream CMake feature.

> For our purpose of supporting large tensors in 2.0, if we could link 64 openblas statically for our soon-to-come release, I would still think that’s the best solution.

There are two different cases: the staticbuild for pip, where static linkage is preferred, and the general CMakeLists.txt build. For the latter, why should we restrict our users to static linkage?

> Also 1. openblas is used in TVM, so they will need to make the same change to be consistent with us

How do you ensure this consistency without a symbol suffix? Does TVM support ILP64? If TVM expects a standard 32-bit BLAS but you link an ILP64 BLAS with the same symbol names, wouldn't there be issues?

> BTW int 32 blas will work for tensors with size > INT_MAX (2**31 - 1 ), it's when a dim is > INT_MAX we must use int 64 blas, because in the function declarations they use int 32 for stride

If a dim > INT_MAX is supported by MXNet, our BLAS operators need to either return the correct result or raise an error. @access2rohit told me that his PR making large tensors the default would just silently return a wrong result in this case.
