Hi everyone,
I noticed a funny behavior in numpy's array_equal. The two arrays
```
a1 = numpy.array(
[3.14159265358979320],
dtype=numpy.float64
)
a2 = numpy.array(
[3.14159265358979329],
dtype=numpy.float64
)
```
(differing in the 18th overall digit) are reported as equal by array_equal.
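For context, a minimal check with plain Python floats (the same IEEE 754 doubles as numpy.float64) shows the two literals already collapse to one value before NumPy ever compares them:

```python
# Python floats are IEEE 754 doubles, i.e. the same representation as
# numpy.float64. A double holds only ~15-17 significant decimal digits,
# so both 18-digit literals round to the same bit pattern.
a = 3.14159265358979320
b = 3.14159265358979329
print(a == b)   # True: identical doubles after parsing
print(a.hex())  # same hex representation for both
print(b.hex())
```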
On 17 December 2015 at 14:43, Nico Schlömer
wrote:
> I'm not sure where I'm going wrong here. Any hints?
You are dancing around the boundary between two neighbouring floating point
numbers, and when you are dealing with ULPs, the number of decimal places is
a bad measure. Working
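A quick way to see the ULP scale in question (math.ulp and math.nextafter are in the stdlib from Python 3.9):

```python
import math

x = 3.141592653589793
# The gap to the next representable double near pi is about 4.4e-16,
# so decimal differences smaller than half that gap cannot survive rounding.
gap = math.ulp(x)
print(gap)
print(math.nextafter(x, 4.0) - x == gap)  # the same gap, measured directly
```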
Would it make sense at all to bring that optimization to np.sum()? I
know that I use np.sum() all over the place instead of count_nonzero,
partly because it is a MATLAB-ism and partly because it is easier to write.
I had no clue that there was a performance difference.
Cheers!
Ben Root
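As a hedged sketch of the difference being discussed (np.count_nonzero walks the buffer with a dedicated counting loop, while a bool .sum() goes through the general reduction machinery with a dtype upcast):

```python
import numpy as np

mask = np.array([0.0, 1.5, 0.0, 2.5, 3.5]) > 1.0

# Same answer either way...
assert mask.sum() == np.count_nonzero(mask) == 3

# ...but on large arrays count_nonzero is typically faster, e.g.:
#   big = np.random.rand(10_000_000) > 0.5
#   %timeit big.sum()              # general add.reduce over bools
#   %timeit np.count_nonzero(big)  # specialized counting loop
```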
I believe this line is the reason:
https://github.com/numpy/numpy/blob/c0e48cfbbdef9cca954b0c4edd0052e1ec8a30aa/numpy/core/src/multiarray/item_selection.c#L2110
On Thu, Dec 17, 2015 at 11:52 AM, Raghav R V wrote:
> I was just playing with `count_nonzero` and found it to be
Thanks everyone for helping me glimpse the secret world of FORTRAN
compilers. I am running a Linux machine, so I will look into MKL and
OpenBLAS. It was easy for me to get an Intel Parallel Studio XE license
as a student, so I have options.
On Thu, Dec 17, 2015 at 7:37 PM, CJ Carey wrote:
> I believe this line is the reason:
>
> https://github.com/numpy/numpy/blob/c0e48cfbbdef9cca954b0c4edd0052e1ec8a30aa/numpy/core/src/multiarray/item_selection.c#L2110
>
The magic actually happens in
Thanks a lot everyone!
I am time and again amazed by how optimized numpy is! Hats off to you guys!
R
On 16 December 2015 at 18:59, Francesc Alted wrote:
> Probably MATLAB is shipping with Intel MKL enabled, which probably is the
> fastest LAPACK implementation out there. NumPy supports linking with MKL,
> and actually Anaconda does that by default, so switching to Anaconda
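To check which BLAS/LAPACK a given NumPy install was linked against, np.show_config() prints the build-time configuration (under Anaconda this typically reports MKL; PyPI wheels typically report OpenBLAS):

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration, e.g. MKL or OpenBLAS
# library names, paths, and include dirs.
np.show_config()
```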
On 17/12/15 12:06, Francesc Alted wrote:
> Pretty good. I did not know that OpenBLAS was so close in performance
> to MKL.
MKL, OpenBLAS and Accelerate are very close in performance, except for
level-1 BLAS, where Accelerate and MKL are better than OpenBLAS.
MKL requires the number of threads
On 16/12/15 20:47, Derek Homeier wrote:
> Getting around 30 s wall time here on a not so recent 4-core iMac, so that
> would seem to fit
> (iirc Accelerate should actually largely be using the same machine code as MKL).
Yes, the same kernels, but not the same threadpool. Accelerate uses the
GCD,
> If you have some spare cycles, maybe you can open a pull request to add
> np.isclose to the "See Also" section?
That would be great.
Remember that equality for floats is bit-for-bit equality (barring NaN
and inf...).
But you hardly ever actually want to do that with floats.
But probably
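The usual alternative to bit-for-bit comparison is a tolerance-based check; math.isclose (and its array cousin np.isclose) compares within a relative tolerance:

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)              # False: the two doubles differ in the last bit
print(math.isclose(a, b))  # True: within the default relative tolerance
```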