Re: [julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Stefan Karpinski
There are several different ways to compute floating-point powers:

   1. libm pow functions
   2. LLVM pow intrinsics
   3. power by squaring

Since these use potentially different algorithms, they can give slightly
different answers. When you do Float64^Float64, the libm pow function is
called. When you do Float64^Int, the LLVM intrinsic for integer
exponentiation of floating-point values is used. In your example, we have
the following results:

julia> 1.0525004^6.0
1.3593541801778923

julia> 1.0525004^6
1.3593541801778926

julia> Base.power_by_squaring(1.0525004, 6)
1.3593541801778926


So libm gives one answer while LLVM powi and power_by_squaring give
another. But there are other examples where the methods pair up
differently:

julia> 1.0525004^7.0
1.4307202746372374

julia> 1.0525004^7
1.4307202746372372

julia> Base.power_by_squaring(1.0525004, 7)
1.4307202746372374


julia> 1.0525004^33.0
5.411646860374188

julia> 1.0525004^33
5.411646860374188

julia> Base.power_by_squaring(1.0525004, 33)
5.411646860374187


You can even find cases where they all disagree:

julia> 1.0525004^89.0
95.0095587653903

julia> 1.0525004^89
95.00955876539031

julia> Base.power_by_squaring(1.0525004, 89)
95.00955876539028


Although that's fairly rare.
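For reference, the third method in the list, power by squaring, can be sketched in a few lines. This is a minimal illustration with a hypothetical `pow_sq`; the real `Base.power_by_squaring` is more general (other types, zero and negative exponents) and orders its multiplications slightly differently, so last-bit results may vary:

```julia
# Minimal sketch of binary exponentiation ("power by squaring") for a
# Float64 base and a non-negative Int exponent.
function pow_sq(x::Float64, n::Int)
    n >= 0 || throw(DomainError(n, "sketch handles n >= 0 only"))
    result = 1.0
    while n > 0
        isodd(n) && (result *= x)  # fold in the current square when the bit is set
        x *= x                     # square the base
        n >>= 1                    # move to the next bit of the exponent
    end
    return result
end
```

Each of the O(log n) intermediate multiplications rounds to Float64, which is where the last-bit disagreements with libm's pow come from.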

On Thu, Dec 17, 2015 at 6:37 AM, Paulo Jabardo  wrote:



Re: [julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Stephan Buchert
In Julia there is also BigFloat:

julia> x=parse(BigFloat, "1.0525004")
1.0525003994

julia> x^6.0-x^6
0.00
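A note of caution when using BigFloat as a reference (a sketch, assuming a recent Julia): `parse(BigFloat, "1.0525004")` re-parses the decimal string at full BigFloat precision, which is not quite the same number as the Float64 literal `1.0525004`. To judge which Float64 operator rounds better, convert the Float64 value exactly with `big`:

```julia
# big(x) converts the Float64 x exactly, so `ref` is the true value of
# x^6 for the *same* input the Float64 operators saw.
x    = 1.0525004
ref  = big(x)^6                        # 256-bit reference with the exact Float64 base
best = Float64(ref)                    # the correctly rounded Float64 answer
err_float_exp = abs(big(x^6.0) - ref)  # error of the libm path
err_int_exp   = abs(big(x^6)   - ref)  # error of the integer-exponent path
```

Whichever error is smaller identifies the path that rounded more accurately for this particular input.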


Re: [julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Christopher Alexander
I looked into using BigFloat, but the overhead slowed everything down 
considerably.  I saw there was a package, DoubleDouble, that was supposed 
to provide that level of precision without the overhead, but it doesn't 
seem to have been updated, and I couldn't get it to work with 0.4.2.  
Maybe I could try to get it up to speed.

Chris

On Thursday, December 17, 2015 at 5:05:23 PM UTC-5, Stephan Buchert wrote:


[julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Paulo Jabardo
Check out base/math.jl.

When the exponent is a float, the C library function is used. When the 
exponent is an integer, an LLVM function is used. The implementation for 
integer exponents used to use a clever algorithm that does what your 
second example does, but reduces the number of operations.

This kind of "error" is unavoidable when using floating-point arithmetic.
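The dispatch described above can be observed directly. A small check, assuming a recent Julia (the exact method signatures and their locations in base/math.jl vary across versions):

```julia
# Confirm that ^ with a Float64 exponent and ^ with an Int exponent
# dispatch to distinct methods, hence potentially distinct algorithms.
m_float = which(^, Tuple{Float64, Float64})  # libm-style pow path
m_int   = which(^, Tuple{Float64, Int})      # integer-exponent path
m_float === m_int                            # false: different methods

# Here the two paths agree to within one unit in the last place:
a = 1.0525004 ^ 6.0
b = 1.0525004 ^ 6
abs(a - b) <= eps(a)                         # true: they differ by one ulp
```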


On Tuesday, December 15, 2015 at 6:38:03 PM UTC-2, Christopher Alexander 
wrote:

Re: [julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Christopher Alexander
Thanks for explaining!  I can see where the differences are in math.jl.  I 
am trying to build an exponential-splines fitting model (based on 
QuantLib's implementation), and as I minimize the objective, my discount 
function generates massive numbers early on (on the order of 1e285) and 
then very small numbers at later iterations, which is why even these tiny 
differences end up causing a problem.

Chris

On Thursday, December 17, 2015 at 12:03:12 PM UTC-5, Stefan Karpinski wrote:

Re: [julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Jeffrey Sarnoff
Chris,

That package is no longer supported, and it is missing division, sqrt, etc.

Would a double-double type supporting (+), (-), (*), and sqrt suffice for 
your work?  I have written one as part of something else, and I am willing 
to separate it out for others to use.
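For anyone curious what such a type involves, here is a minimal sketch (a hypothetical `DD` struct, not the DoubleDouble package): a value is stored as an unevaluated sum `hi + lo` of two Float64s, giving roughly twice the precision, and error-free transformations (Knuth's two-sum, an fma-based two-product) carry the rounding error along explicitly:

```julia
# A value represented as the unevaluated sum hi + lo of two Float64s.
struct DD
    hi::Float64
    lo::Float64
end
DD(x::Float64) = DD(x, 0.0)

# Knuth's two-sum: returns the rounded sum and its exact rounding error.
function two_sum(a::Float64, b::Float64)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e
end

# Two-product via fused multiply-add: exact rounding error of a*b.
function two_prod(a::Float64, b::Float64)
    p = a * b
    e = fma(a, b, -p)
    return p, e
end

# Double-double multiplication, dropping the lowest-order error terms.
function Base.:*(x::DD, y::DD)
    p, e = two_prod(x.hi, y.hi)
    e += x.hi * y.lo + x.lo * y.hi
    s, t = two_sum(p, e)
    return DD(s, t)
end
```

With this, chaining six multiplications of `DD(1.0525004)` keeps the product accurate well past Float64's last bit, at a fraction of BigFloat's cost.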

Jeffrey


On Thursday, December 17, 2015 at 5:15:33 PM UTC-5, Christopher Alexander 
wrote:

Re: [julia-users] Re: Strange behavior using "^" (power function)

2015-12-17 Thread Christopher Alexander
Jeffrey,

Absolutely!  I think my C++ QuantLib code reaches a much smaller minimum 
(roughly 1e-6) than the Julia implementation does (1e-4) because of these 
precision issues.

Thanks!

Chris

On Thursday, December 17, 2015 at 7:34:54 PM UTC-5, Jeffrey Sarnoff wrote:

[julia-users] Re: Strange behavior using "^" (power function)

2015-12-15 Thread Christopher Alexander
Just as a follow-up: I just tested with the exponent being an Integer vs. 
a Float (i.e., 6 vs. 6.0), and I now see the same result as I do with 
Python and C++ in both cases.

Chris

On Tuesday, December 15, 2015 at 3:35:35 PM UTC-5, Christopher Alexander 
wrote:
>
> Hello all,
>
> I have noted an inaccuracy when using the "^" symbol.  Please see below:
>
> julia> 1.0525004 ^ 6.0
> 1.3593541801778923
>
> julia> 1.0525004 * 1.0525004 * 1.0525004 * 1.0525004 * 1.0525004 * 1.0525004
> 1.3593541801778926
>
> This yields a difference of:  
>
> *2.220446049250313e-16*
>
>
>
> This might not seem like much of a difference, but it is having an adverse 
> effect on a project I am working on.  To compare, in Python, I don't see 
> any difference:
>
> In[8]: 1.0525004 ** 6.0
> Out[8]: 1.3593541801778926
>
> In[9]: 1.0525004 * 1.0525004 * 1.0525004 * 
> 1.0525004 * 1.0525004 * 1.0525004
> Out[9]: 1.3593541801778926
>
> I also see no difference in C++
>
>
> Any idea why this might be?  Just an issue with floating point math?  Is 
> Julia actually being more accurate here?
>
> Thanks!
>
> Chris
>