mozga-intel edited a comment on pull request #20474:
URL: https://github.com/apache/incubator-mxnet/pull/20474#issuecomment-904390250


   > > Hi @akarbown, it looks really good, thanks! I have a question.
   > > Basically, there are two ways: we'll use either MKL_THREADING_GNU or
   > > MKL_THREADING_INTEL. Could you please tell me whether there is any
   > > difference in performance between them, or whether it has no impact?
   > 
   > @mozga-intel, this boils down to a comparison between the ICX and GCC
   > compilers. The whole point of this PR is mainly to enable dynamic linking
   > of the MKL libraries without hangs or OpenMP symbol conflicts, so the
   > performance of these two compilers is not really the issue here. Unit
   > tests for both compilers take roughly the same time (on my local machine).
   > I've observed 10 more test failures for icx than for gcc
   > (unittest_ubuntu_python3_cpu), but that seems to be because this
   > configuration wasn't tested before and there were problems compiling
   > MXNet with icc at all.
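   
   For context, here is a minimal sketch of what the two threading layers mean
   at link time (the link lines follow Intel's standard MKL link-line
   convention; the file name is hypothetical):
   
   ```c
   /* check_omp.c -- sanity check that the application and MKL share one
      OpenMP runtime.
   
      GNU threading layer (matches MKL_THREADING_GNU):
        gcc check_omp.c -fopenmp -lmkl_intel_lp64 -lmkl_gnu_thread \
            -lmkl_core -lgomp -lpthread -lm -ldl
   
      Intel threading layer (matches MKL_THREADING_INTEL):
        icx check_omp.c -qopenmp -lmkl_intel_lp64 -lmkl_intel_thread \
            -lmkl_core -liomp5 -lpthread -lm -ldl
   
      Mixing the two (e.g. a libgomp-based app linked against
      mkl_intel_thread) loads two OpenMP runtimes into one process; that
      double runtime is the source of the hangs this PR avoids. */
   #include <stdio.h>
   #include <omp.h>
   #include <mkl.h>
   
   int main(void) {
       printf("application OpenMP threads: %d\n", omp_get_max_threads());
       printf("MKL threads:                %d\n", mkl_get_max_threads());
       return 0;
   }
   ```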
   
   Thanks, Anna! I have one more question; it might not be directly related to
   this issue. Please consider the following scenario: if I have huge matrices,
   I can use one of two modes: either sparse or dense BLAS. What would I get if
   I tried to call the ILP64 interface for huge matrices on a 32-bit
   architecture? Will I get an error message, or will everything be okay?
   (...) A 32-bit architecture doesn't support the ILP64 interface, and LP64
   works only for small tensors.
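   
   To make the question concrete, here is a minimal sketch (assuming an LP64
   build, where MKL_INT is a 32-bit int; the matrix size is hypothetical):
   
   ```c
   #include <stdio.h>
   #include <limits.h>
   #include <mkl.h>  /* MKL_INT: 32-bit under LP64, 64-bit under ILP64 */
   
   int main(void) {
       long long n = 50000;        /* hypothetical "huge" square matrix */
       long long elements = n * n; /* 2.5e9 doubles, ~18.6 GiB */
   
       printf("sizeof(MKL_INT) = %zu bytes\n", sizeof(MKL_INT));
   
       if (sizeof(MKL_INT) == 4 && elements > (long long)INT_MAX) {
           /* Under LP64 this element count cannot even be represented as
              an MKL_INT, so such a GEMM cannot be requested correctly. */
           printf("LP64: %lld elements exceed MKL_INT range; ILP64 "
                  "(64-bit MKL_INT) would be required.\n", elements);
       }
       return 0;
   }
   ```
   
   As far as I know, Intel doesn't ship ILP64 libraries for IA-32 at all, and
   on a 32-bit architecture the ~4 GB address space couldn't hold such a
   matrix anyway, so LP64 is the only option there.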
   

