Re: [Numpy-discussion] Precision/value change moving from C to Python

2013-11-13 Thread Daπid
On 13 November 2013 02:40, Bart Baker  wrote:

> > That is the order of the machine epsilon for double, that looks like
> > roundoff errors to me.
>
> I'm trying to get my head around this. So does that mean that neither of
> them is "right", that it is just the result of doing the same
> calculation two different ways using different computational libraries?


Essentially, yes.

I am tempted to say that, depending on the compiler flags, the C version
*could* be more accurate, as the compiler can reorganise the operations and
reduce the number of steps. But also, if it is optimised for speed, it
could be using faster and less accurate functions and techniques.
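
A quick way to see why reordering alone moves the last bits: floating
point addition is not associative, so any change in evaluation order can
change the result. A minimal plain-Python illustration:

>>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False
>>> (0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3)
(0.6000000000000001, 0.6)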

In any case, if that 10^-16 matters to you, I'd say you are either doing
something wrong or using the wrong dtype; and without knowing the
specifics, I would bet on the first one. If you really need that
precision, you would have to use more bits, and make sure your library
supports that dtype. I believe the following shows that (my) numpy.cos
can work in 128 bits without first converting down to float64. (Note
that the float literal is parsed as a 64-bit Python float before being
widened, so its extra digits are lost; the nonzero difference comes from
cos itself being evaluated in extended precision.)

>>> import numpy as np
>>> a = np.array([1.2584568431575694895413875135786543], dtype=np.float128)
>>> np.cos(a) - np.cos(a.astype(np.float64))
array([ 7.2099444e-18], dtype=float128)
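
For reference, the machine epsilons involved can be queried directly
(these values are from my x86 box; np.float128 is really the 80-bit
extended type there, so the exact figure is platform dependent):

>>> np.finfo(np.float64).eps
2.2204460492503131e-16
>>> np.finfo(np.float128).eps
1.084202172485504434e-19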


The bottom line is: don't trust the least significant bits of your
floating point numbers.


/David.


Re: [Numpy-discussion] Precision/value change moving from C to Python

2013-11-12 Thread Bart Baker
> > The issue is that there are some minor (10^-16) differences in the
> > values when I do the calculation in C vs Python.
>
> That is the order of the machine epsilon for double, that looks like
> roundoff errors to me.

Hi Daπid,

Thanks for the reply. That does make sense.

I'm trying to get my head around this. So does that mean that neither of
them is "right", that it is just the result of doing the same
calculation two different ways using different computational libraries?

> I found similar results cythonising some code: everything was the same
> until I swapped some NumPy functions for libc functions (exp, sin,
> cos...). After some operations in float32, the results were about
> 10^-5 apart (the float32 epsilon is ~1.2 x 10^-7). I blame this on
> implementation differences between NumPy's functions and my system's
> libc.
>
> To check equality, use np.allclose; it lets you define the relative and
> absolute error.

OK, thanks. I'll use this depending on your answer to my above question.

-Bart


Re: [Numpy-discussion] Precision/value change moving from C to Python

2013-11-12 Thread Daπid
On 12 November 2013 12:01, Bart Baker  wrote:

> The issue is that there are some minor (10^-16) differences in the
> values when I do the calculation in C vs Python.
>

That is the order of the machine epsilon for double, that looks like
roundoff errors to me.
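
For example, just changing the summation order of a dot product already
moves the result by that amount. A sketch (the exact discrepancy depends
on your BLAS and on the data; it is typically around the 1e-13 level for
a result of ~250, i.e. a relative error of a few tens of ULPs):

>>> import numpy as np
>>> np.random.seed(0)
>>> x = np.random.rand(1000)
>>> y = np.random.rand(1000)
>>> np.dot(x, y) - sum(x[i] * y[i] for i in range(1000))  # small, but not 0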

I found similar results cythonising some code: everything was the same
until I swapped some NumPy functions for libc functions (exp, sin,
cos...). After some operations in float32, the results were about 10^-5
apart (the float32 epsilon is ~1.2 x 10^-7). I blame this on
implementation differences between NumPy's functions and my system's
libc.
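
(The epsilons don't need to be remembered; NumPy will tell you:)

>>> import numpy as np
>>> np.finfo(np.float32).eps
1.1920929e-07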

To check equality, use np.allclose; it lets you define the relative and
absolute error.
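
For instance (a minimal sketch; the tolerances here are only
illustrative):

>>> import numpy as np
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = a + 1e-15                        # perturb by a few ULPs
>>> (a == b).all()
False
>>> np.allclose(a, b, rtol=1e-12, atol=0.0)
True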


/David.


[Numpy-discussion] Precision/value change moving from C to Python

2013-11-12 Thread Bart Baker
Hi all,

First time posting.

I've been working on a class for solving a group of affine models and
decided to drop down to C for a portion of the code for improved
performance.

There are two questions that I've had come up from this.

1) I've written the section of code, which takes a series of fully
populated masked arrays as arguments. There are a few array operations,
such as dot products and differences, applied to the arrays to
generate the final value. I've written versions in both Python/NumPy and
C and now have a verification issue.

The issue is that there are some minor (10^-16) differences in the
values when I do the calculation in C vs Python. The way that I test
these differences is by passing the NumPy arrays to the C extension and
testing for equality in gdb. I'm not relying on any of the NumPy API for
the array operations such as dot product, addition, and subtraction.
I'm treating all of the arrays as two-dimensional in C right now. Once
I figure out this bug, I'm planning on rewriting the computations in
terms of one-dimensional arrays.

All of the values that I'm dealing with in C right now are double
precision.

2) Beyond this issue, I also noticed that values that are equal in
gdb are not equal once I return to Python! I pass the same arrays back
from C to Python and test equality with:

npy_array == c_array

and find that almost all of the entries are non-equal, even though the
equality tests in gdb had been true. Is there some obvious reason why
this would be? Is there some sort of implicit value conversion performed
when moving from C back to Python?
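
For concreteness, the check on the Python side looks roughly like this
(npy_array and c_array are just my names for the two result arrays):

>>> import numpy as np
>>> (npy_array == c_array).sum()         # count of exactly equal entries
>>> np.abs(npy_array - c_array).max()    # size of the mismatch, ~1e-16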

I can post some code if need be, but I wanted to see first whether
there is anything obvious that I'm not doing right.

I appreciate your help.

Thanks,
Bart