Thanks for explaining! I can see where the differences are in math.jl. I 
am trying to build an exponential splines fitting model (based on 
QuantLib's implementation), and at the start of the minimization my 
discount function generates massive numbers, on the order of 1e285, 
followed by very small numbers at later iterations, which is why even 
these tiny differences end up creating an issue.
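
To make the scaling concrete, here is a small illustration (the exponent 
is made up, but the mechanism is the one biting me): a one-ulp difference 
in the base of a power grows roughly in proportion to the exponent, so at 
the magnitudes my discount function reaches it stops being negligible.

x = 1.052500000000004
y = nextfloat(x)                  # x plus one ulp
rel = abs(x^600 - y^600) / x^600  # relative gap after exponentiation
# rel is on the order of 600 * eps(x): the one-ulp difference in the
# base has been amplified roughly 600-fold, to about 1e-13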

Chris

On Thursday, December 17, 2015 at 12:03:12 PM UTC-5, Stefan Karpinski wrote:
>
> There are several different ways to compute floating-point powers:
>
>    1. libm pow functions
>    2. LLVM pow intrinsics
>    3. power by squaring
>
> Since these use potentially different algorithms, they can give slightly 
> different answers. When you do Float64^Float64, the libm pow function is 
> called. When you do Float64^Int, the LLVM intrinsic for integer 
> exponentiation of floating-point values is used. In your example, we have 
> the following results:
>
> julia> 1.052500000000004^6.0
> 1.3593541801778923
>
> julia> 1.052500000000004^6
> 1.3593541801778926
>
> julia> Base.power_by_squaring(1.052500000000004, 6)
> 1.3593541801778926
>
>
> So libm gives one answer while LLVM powi and power_by_squaring give 
> another. But there are other examples where there is disagreement in 
> different ways:
>
> julia> 1.052500000000004^7.0
> 1.4307202746372374
>
> julia> 1.052500000000004^7
> 1.4307202746372372
>
> julia> Base.power_by_squaring(1.052500000000004, 7)
> 1.4307202746372374
>
>
> julia> 1.052500000000004^33.0
> 5.411646860374188
>
> julia> 1.052500000000004^33
> 5.411646860374188
>
> julia> Base.power_by_squaring(1.052500000000004, 33)
> 5.411646860374187
>
>
> You can even find cases where they all disagree:
>
> julia> 1.052500000000004^89.0
> 95.0095587653903
>
> julia> 1.052500000000004^89
> 95.00955876539031
>
> julia> Base.power_by_squaring(1.052500000000004, 89)
> 95.00955876539028
>
>
> Although that's fairly rare.
>
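> For anyone who wants to poke at this further, a quick scan for small 
> exponents where the three methods disagree (just a sketch, not an 
> exhaustive search):
>
> x = 1.052500000000004
> for n in 1:100
>     a = x ^ Float64(n)                 # libm pow
>     b = x ^ n                          # LLVM powi intrinsic
>     c = Base.power_by_squaring(x, n)   # repeated squaring
>     a == b == c || println((n, a, b, c))
> end
>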
> On Thu, Dec 17, 2015 at 6:37 AM, Paulo Jabardo <[email protected]> wrote:
>
>> Check out base/math.jl.
>>
>> When the exponent is a float, the C library function is used. When the 
>> exponent is an integer, an LLVM function is used. The implementation for 
>> integer exponents used to use an algorithm that does what your second 
>> example does, but more cleverly, reducing the number of multiplications.
>>
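>> A minimal sketch of the idea (not the exact code that was in Base, just 
>> the algorithm): the exponent is consumed bit by bit, so x^n costs 
>> O(log n) multiplications instead of n - 1.
>>
>> function pow_by_squaring(x::Float64, n::Integer)
>>     n < 0 && return inv(pow_by_squaring(x, -n))
>>     result = 1.0
>>     while n > 0
>>         isodd(n) && (result *= x)   # fold in the current power of x
>>         x *= x                      # square the base
>>         n >>= 1                     # move to the next exponent bit
>>     end
>>     return result
>> end
>>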
>> This kind of "error" is unavoidable when using floating-point arithmetic.
>>
>>
>>
>> On Tuesday, December 15, 2015 at 6:38:03 PM UTC-2, Christopher Alexander 
>> wrote:
>>>
>>> Just as a follow-up, I tested with the exponent as an integer vs a 
>>> float (i.e. 6 vs 6.0), and I now see the same result as I do with both 
>>> Python and C++.
>>>
>>> Chris
>>>
>>> On Tuesday, December 15, 2015 at 3:35:35 PM UTC-5, Christopher Alexander 
>>> wrote:
>>>>
>>>> Hello all,
>>>>
>>>> I have noticed an inaccuracy when using the "^" operator. Please see below:
>>>>
>>>> julia> 1.052500000000004 ^ 6.0
>>>> 1.3593541801778923
>>>>
>>>> julia> 1.052500000000004 * 1.052500000000004 * 1.052500000000004 *
>>>>        1.052500000000004 * 1.052500000000004 * 1.052500000000004
>>>> 1.3593541801778926
>>>>
>>>> This yields a difference of:
>>>>
>>>> 2.220446049250313e-16
>>>>
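>>>> That is exactly one unit in the last place (ulp) at this magnitude; a 
>>>> quick check, just for illustration:
>>>>
>>>> julia> eps(1.3593541801778926)
>>>> 2.220446049250313e-16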
>>>>
>>>>
>>>> This might not seem like much of a difference, but it is having an 
>>>> adverse effect on a project I am working on.  To compare, in Python, I 
>>>> don't see any difference:
>>>>
>>>> In[8]: 1.052500000000004 ** 6.0
>>>> Out[8]: 1.3593541801778926
>>>>
>>>> In[9]: 1.052500000000004 * 1.052500000000004 * 1.052500000000004 * 
>>>> 1.052500000000004 * 1.052500000000004 * 1.052500000000004
>>>> Out[9]: 1.3593541801778926
>>>>
>>>> I also see no difference in C++.
>>>>
>>>> Any idea why this might be? Is it just an issue with floating-point 
>>>> math? Is Julia actually being more accurate here?
>>>>
>>>> Thanks!
>>>>
>>>> Chris
>>>>
>>>
>
