>
> But I saw a discussion about using Intel's MKL for greater performance, and
> the Make.user options to use Intel compilers are meant to be supported by
> Julia. Why, if there is no advantage in using them?


Intel MKL only provides faster linear algebra than the default OpenBLAS
(and only in some cases).  It does not speed up the runtime of pure-Julia
code, which is compiled by LLVM regardless of which BLAS library Julia is
linked against.
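To make the distinction concrete, here is a minimal sketch (array sizes and function names are illustrative): the matrix product dispatches to the BLAS library Julia was built against (OpenBLAS by default, MKL if you build with it), while the hand-written loop is compiled by LLVM and is unaffected by the BLAS choice.

```julia
# Matrix multiply: dispatches to the BLAS gemm routine, so linking
# MKL instead of OpenBLAS can make this call faster.
A = rand(1000, 1000)
B = rand(1000, 1000)
C = A * B

# Pure-Julia loop: compiled by LLVM; swapping the BLAS library
# has no effect on its runtime.
function mysum(x)
    s = 0.0
    for v in x
        s += v
    end
    return s
end

mysum(C)
```

So MKL is worth trying if your hot spots are large dense linear-algebra operations, but it will not change the timings of loops like the ones being discussed in this thread.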

On Tue, Apr 28, 2015 at 2:31 PM, Ángel de Vicente <
[email protected]> wrote:

> Hi,
>
> On Tuesday, April 28, 2015 at 3:36:48 PM UTC+1, Tim Holy wrote:
>>
>> Intel compilers won't help, because your julia code is being compiled by
>> LLVM.
>>
>
> But I saw a discussion about using Intel's MKL for greater performance, and
> the Make.user options to use Intel compilers are meant to be supported by
> Julia. Why, if there is no advantage in using them?
>
>
>> It's still hard to tell what's up from what you've shown us. When you run
>> @time, does it allocate any memory? (You still have global variables in
>> there,
>> but maybe you made them const.)
>>
>
> I'm posting some numbers again in reply to Yuuki's mail.
>
>
>> But you can save yourself two iterations through the arrays (i.e., more
>> cache
>> misses) by putting
>>     T[i-1,j-1,k-1] += RHS[i-1,j-1,k-1]
>> inside the first loop and discarding the second loop (except for cleaning
>> up
>> the edges). Fortran may be doing this automatically for you?
>> http://en.wikipedia.org/wiki/Polytope_model
>>
>>
> I'm not sure whether Fortran is doing that, but I certainly would not like
> to implement that sort of low-level detail in the code itself, since it
> makes the code considerably more cumbersome to understand...
>
> (But Yuuki's mail gave me the trick. I reply to his mail below)
>
> Thanks a lot (starting to get the feel for Julia...),
> Ángel de Vicente
>
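For anyone following along, the loop fusion Tim suggests can be sketched roughly as below. The array names T and RHS come from the thread; the stencil body is a placeholder (the original code is not shown in this thread), and the edge cleanup Tim mentions is elided to a comment. The idea is that instead of one full pass filling RHS and a second full pass applying it, the update is folded into the first pass at shifted indices, saving a trip through memory.

```julia
# Hypothetical sketch of the fused loop, assuming a first loop that
# fills RHS over the interior and a second loop that did T += RHS.
function step!(T, RHS)
    n1, n2, n3 = size(T)
    for k in 2:n3-1, j in 2:n2-1, i in 2:n1-1
        # Placeholder stencil standing in for the real RHS computation:
        RHS[i,j,k] = T[i+1,j,k] + T[i-1,j,k] - 2T[i,j,k]
        # Apply the update for a point computed on an earlier iteration
        # (shifted indices, as in Tim's suggestion), avoiding a second
        # full pass over the arrays:
        if i > 2 && j > 2 && k > 2
            T[i-1,j-1,k-1] += RHS[i-1,j-1,k-1]
        end
    end
    # Edge cleanup: points the fused update skipped still need T += RHS here.
    return T
end
```

Note the guard: at (i,j,k) the fused update can only touch entries of RHS that were already computed earlier in the same pass, which is what makes the fusion valid for a stencil that reads only the current k-plane. Whether this is worth the loss of readability is exactly the trade-off Ángel raises above.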
