access2rohit edited a comment on issue #17331:
URL: 
https://github.com/apache/incubator-mxnet/issues/17331#issuecomment-656499453


   Currently, Large Tensor Support works on all operators implemented in MXNet, 
and MKLDNN also supports int64. CUDA kernels written inside MXNet, both 
generic (CPU/GPU) and GPU-specific, support large tensors, subject to 
available device memory.
   
   BLAS and LAPACK libraries were not considered when defining the scope of the 
project. The following BLAS and LAPACK implementations are currently supported 
inside MXNet:
   
   - openBLAS (default)
   - MKL
   - ATLAS
   - Apple Accelerate
   
   Upon investigation: openBLAS needs to be built with a specific flag 
(INTERFACE64=1) to support int64_t signatures, and MKL supports long long int 
signatures (in which case reinterpret_cast<>() is needed for casting pointers, 
because int64_t is treated as long int* as opposed to the long long int* that 
MKL expects). Additionally, the LAPACK and BLAS wrappers need to be updated 
from int to int64_t.
   
   Initially, openBLAS can be supported, since it is the default and is also 
used in the pypi wheels; this avoids breaking any default customer behaviour. 
Users of the other BLAS and LAPACK implementations won't face issues as long 
as they don't use large tensors. An error message will be added for the case 
where Large Tensor is used and the BLAS implementation is not openBLAS, until 
that BLAS library is made to work with MXNet's large tensor support.
   
   NOTE: currently, openBLAS works correctly with smaller inputs (within int32 
range) but will truncate parameters passed with higher values, resulting in 
either a SIGSEGV (mostly) or garbage values (which will eventually cause a 
SIGSEGV in a larger script).
   
   @sandeep-krishnamurthy @leezu @szha @zheng-da 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

