akarbown commented on pull request #20474:
URL: https://github.com/apache/incubator-mxnet/pull/20474#issuecomment-905286788


   > > > Hi @akarbown It looks really good. Thanks! I have a question.
   > > > Basically, there are two ways: we'll use either MKL_THREADING_GNU or 
MKL_THREADING_INTEL. Could you please tell me whether there are differences in 
the performance results, or whether there is no impact at all?
   > > 
   > > 
   > > @mozga-intel, this boils down to a comparison between the ICX and GCC 
compilers. The whole point of this PR is to enable dynamic linking of the MKL 
libraries without hangs or OpenMP symbol conflicts, so the relative performance 
of the two compilers is not really an issue here. Unit tests for both compilers 
run in more or less the same time (on my local machine). I've observed 10 more 
test failures for icx than for gcc (unittest_ubuntu_python3_cpu), but that 
seems to be because this configuration wasn't tested before; previously there 
were problems compiling MXNet with icc at all.
   > 
   > Thanks, Anna! I have one more question; it might not be aimed exactly at 
an issue like this. Please consider the following scenario: if I have huge 
matrices, I can use one of two modes, either sparse or dense BLAS. What would I 
get if I tried to call the ILP64 interface for huge matrices on a 32-bit 
architecture? Will I get an error message, or will everything be okay? 
(...) A 32-bit architecture doesn't support the ILP64 interface, and LP64 works 
only for small tensors.
   
   @mozga-intel Thanks for the questions!
   The MXNet MKL ILP64 option (USE_INT64_TENSOR_SIZE) is a CMake option whose 
default depends on the architecture. Thus, unless you force it to 'ON', it is 
guaranteed to be 'OFF' on a 32-bit architecture (please see CMakeLists.txt 
l.313). For a 32-bit architecture, FindBLAS.cmake sets BLA_VENDOR to 
'Intel10_32' and, depending on the SDL option, links the following libraries:
   1. SDL=1: ```Found BLAS: 
/opt/intel/oneapi/lib/ia32/libmkl_rt.so;-lpthread;-lm;-ldl```
   2. SDL=0: ```Found BLAS: 
-Wl,--start-group;/opt/intel/oneapi/lib/ia32/libmkl_intel.so;/opt/intel/oneapi/lib/ia32/libmkl_intel_thread.so;/opt/intel/oneapi/lib/ia32/libmkl_core.so;-Wl,--end-group;/opt/intel/oneapi/lib/ia32/libiomp5.so;-lpthread;-lm;-ldl```
   
   Those libraries are linked regardless of the BLA_VENDOR and 
USE_INT64_TENSOR_SIZE options. No error should appear, since the libraries are 
chosen to match the architecture (as far as I've checked). 
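   As a side note on why LP64 works only for small tensors: under LP64 the 
BLAS index type is a 32-bit integer, while ILP64 uses 64-bit indices. A tiny 
standalone C sketch (plain int32/int64 standing in for the MKL index types; 
nothing here is MKL-specific) shows the limit:
   ```c
   #include <stdint.h>
   #include <stdio.h>

   int main(void) {
       /* A "huge" matrix: 50000 x 50000 = 2.5e9 elements. */
       int64_t rows = 50000, cols = 50000;
       int64_t elems = rows * cols;

       /* An ILP64-style 64-bit index holds the element count comfortably. */
       printf("64-bit index: %lld elements\n", (long long)elems);

       /* An LP64-style 32-bit index cannot: the count exceeds INT32_MAX
          (2147483647), so a 32-bit BLAS index would overflow. */
       printf("fits in 32-bit index: %s\n",
              elems <= INT32_MAX ? "yes" : "no");   /* prints "no" */
       return 0;
   }
   ```
   So with LP64 any single array dimension or element count must stay below 
roughly 2.1e9; beyond that you need ILP64, which in turn needs a 64-bit target.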
   I've asked you over IM for the command line you'd like me to run, and it 
would be really nice to get it. However, I don't see a 32-bit build-and-test 
scenario in MXNet (though I might have overlooked it) - could you also tell me 
which configuration (OS/Docker image) you had in mind?
   
   I've found that MXNet has some build issues on 32-bit architectures, so I do 
not think that spending more time on that issue is within the scope of this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

