[Overlapping archive previews of the later replies in this thread — Francesc Alted (11/08/2012, 06:06 PM and 06:59 PM) and Dag Sverre Seljebotn (11/08/2012, 06:38 PM and 07:55 PM); their message bodies are truncated in the archive. Dag Sverre Seljebotn's 1:41 PM message and Neal Becker's question appear in full below.]
On Thu, Nov 8, 2012 at 2:22 AM, Francesc Alted wrote:
>> -- It can remove a lot of unnecessary temporary creation.
> Well, the temporaries are still created, but the thing is that, by
> working with small blocks at a time, these temporaries fit in CPU cache,
> preventing copies into main memory. [...]
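Francesc's point about block-wise evaluation can be sketched in plain NumPy (a minimal illustration of the technique only, not numexpr's actual implementation; the expression and block size here are arbitrary choices):

```python
import numpy as np

def blocked_eval(a, b, block=4096):
    """Evaluate 2*a + 3*b block by block, numexpr-style.

    The per-block temporaries (2*a[i:j] and 3*b[i:j]) are small enough
    to stay in CPU cache, unlike the full-size temporary arrays a plain
    NumPy expression would allocate."""
    out = np.empty_like(a)
    for i in range(0, len(a), block):
        j = min(i + block, len(a))
        out[i:j] = 2 * a[i:j] + 3 * b[i:j]
    return out

a = np.random.rand(100_000)
b = np.random.rand(100_000)
result = blocked_eval(a, b)
print(np.allclose(result, 2 * a + 3 * b))  # True
```

numexpr additionally compiles the expression once and runs the blocks through its multi-threaded virtual machine, but the cache-friendly blocking above is the core of the temporaries argument.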
On 11/07/2012 08:41 PM, Neal Becker wrote:
> Would you expect numexpr without MKL to give a significant boost?
If you need higher performance than what numexpr can give without using
MKL, you could look at code such as this:
https://github.com/herumi/fmath/blob/master/fmath.hpp#L480
But that me[...]
On 11/8/12 12:35 AM, Chris Barker wrote:
> On Wed, Nov 7, 2012 at 11:41 AM, Neal Becker wrote:
>> Would you expect numexpr without MKL to give a significant boost?
> It can, depending on the use case:
> -- It can remove a lot of unnecessary temporary creation.
> -- IIUC, it works on blocks of [...]
On 11/7/12 8:41 PM, Neal Becker wrote:
> Would you expect numexpr without MKL to give a significant boost?
Yes. Have a look at how numexpr's own multi-threaded virtual machine
compares with numexpr using VML:
http://code.google.com/p/numexpr/wiki/NumexprVML
As can be seen there, the best results [...]
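For reference, a minimal numexpr call looks like the sketch below (an illustration with an arbitrary expression; it assumes numexpr is importable and falls back to plain NumPy otherwise, so the snippet runs either way):

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Plain NumPy: allocates full-size temporaries for 2*a and 3*b.
expected = 2 * a + 3 * b

try:
    import numexpr as ne                 # optional; uses its own multi-threaded VM
    result = ne.evaluate("2*a + 3*b")    # blocked, multi-threaded evaluation
except ImportError:
    result = expected                    # numexpr not installed; fall back

print(np.allclose(result, expected))  # True
```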
On Wed, Nov 7, 2012 at 11:41 AM, Neal Becker wrote:
> Would you expect numexpr without MKL to give a significant boost?
It can, depending on the use case:
-- It can remove a lot of unnecessary temporary creation.
-- IIUC, it works on blocks of data at a time, and thus can keep
things in cache m[...]
Would you expect numexpr without MKL to give a significant boost?
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Overlapping archive previews of the earlier "amd libm/acml" exchange — Neal Becker's original post (Wed, Nov 7, 2012, 12:35 PM), replies from David Cournapeau, Neal Becker's follow-up (1:56 PM), and Neal Becker's reply of 3:30 PM; the reply bodies are truncated in the archive. The original post appears below.]
I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.
I got an idea that instead of building all of numpy/scipy and all of my custom
modules against these libraries, I could simply use:
LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/l[...]
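The LD_PRELOAD trick works because exp, log, and friends are resolved through the dynamic linker at load time, so a preloaded library can interpose its own implementations without rebuilding numpy/scipy (with the caveat that calls inlined or statically linked into an extension module are not redirected). A small ctypes sketch of calling libm through the same dynamic-lookup machinery (a Linux system with glibc is assumed; the AMD library paths from the post are not needed here):

```python
import ctypes
import ctypes.util

# Resolve and load the system C math library the same way the dynamic
# linker would; with LD_PRELOAD pointing at libamdlibm.so, symbols such
# as exp() would resolve to AMD's implementation instead of glibc's.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.exp.restype = ctypes.c_double
libm.exp.argtypes = [ctypes.c_double]

e = libm.exp(1.0)
print(e)  # ≈ 2.718281828459045
```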