Have you found Unums to be more of a probable solution or more of an 
exploratory project?

On Saturday, October 17, 2015 at 1:27:44 PM UTC-4, Tom Breloff wrote:
>
> Jeffrey:  If you're building your own floating point library specifically 
> geared toward ensuring exact results when mathematically possible, then I 
> highly recommend you read up on Unums and come collaborate with me: 
> https://github.com/tbreloff/Unums.jl.  There's a bunch of info/links in 
> the wiki there, as well as interesting conversations in the issues.  See 
> issue #6 for the current design/plan.  And of course if you're interested 
> you should read John's book: 
> https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867, 
> which is fairly easy and clear reading.
>
> On Sat, Oct 17, 2015 at 11:26 AM, John Gibson <johnf...@gmail.com> wrote:
>
>> Ok, I see from github that you're working on a Float125 and Float127 
>> implementation. Why not Float128? And why not use Julia's BigFloat? Out of 
>> curiosity, I did a few tests on Julia's sin(BigFloat).
>>
>>
>> julia> p=Float64(pi)
>> 3.141592653589793
>>
>> julia> length(string(p))
>> 17
>>
>> julia> sin(p)
>> 1.2246467991473532e-16
>>
>> Here the error is O(eps_machine), as it should be: writing p = pi - d with 
>> d = pi - Float64(pi) ≈ 1.2246e-16, we get sin(p) = sin(d) ≈ d, so the value 
>> printed above is just the representation error of Float64(pi). But for 
>> BigFloat
>>
>> julia> p=BigFloat(pi)
>> 3.141592653589793238462643383279502884197169399375105820974944592307816406286198
>>
>> julia> length(string(p))
>> 80
>>
>> julia> sin(p)
>> 1.096917440979352076742130626395698021050758236508687951179005716992142688513354e-77
>>
>> That's two orders of magnitude larger than the machine epsilon estimated 
>> from the output above, 1e-79 (79 significant digits: the string length of 
>> 80 minus 1 for the decimal point). Is this what's troubling you?
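>>
>> As a cross-check (assuming a recent Julia version; the query spellings have 
>> changed over time), the BigFloat working precision and its epsilon can be 
>> read off directly instead of estimated from the printed digits:
>>
>> precision(BigFloat)  # 256 bits by default
>> eps(BigFloat)        # 2.0^(1 - 256), about 1.7e-77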
>>
>> And a side note: unless I'm missing something, the Julia docs on BigFloat 
>> seem misleading or ambiguous. BigInt and BigFloat are discussed together as 
>> arbitrary-precision arithmetic, though if BigFloat just wraps GNU MPFR, it's 
>> large but finite precision. Right?
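>>
>> A two-line illustration of the distinction (this relies on the default 
>> 256-bit BigFloat precision):
>>
>> big(10)^100 + 1 == big(10)^100            # false: BigInt keeps the +1 exactly
>> BigFloat(10)^100 + 1 == BigFloat(10)^100  # true: the +1 is rounded away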
>>
>> John
>>
>> On Saturday, October 17, 2015 at 9:39:54 AM UTC-4, Jeffrey Sarnoff wrote:
>>>
>>> thanks, good pointer. 
>>>
>>> I am working on routines for a double-double-like floating point type.
>>> Straight double-double implementations have very nice arithmetic 
>>> properties; in my experimentation, most double-double trig routines resolve 
>>> fewer bits than I want.
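>>>
>>> For context, the basic double-double building block is the error-free 
>>> two-sum transformation (a standard sketch, not code from the package):
>>>
>>> # Error-free transformation: returns (s, e) with s + e == a + b exactly,
>>> # where s = fl(a + b) and e is the rounding error.
>>> function two_sum(a::Float64, b::Float64)
>>>     s = a + b
>>>     v = s - a
>>>     e = (a - (s - v)) + (b - v)
>>>     return s, e
>>> end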
>>>
>>> I want to take as much advantage of the built-in elementary functions as 
>>> possible. The outer representation is a non-overlapping pair of Float64s 
>>> whose available value is their implicit extended-precision sum: 
>>> hipart + lopart. It would have been spectacular to use 
>>> sin(x + dx) = sin(x)*cos(dx) + cos(x)*sin(dx) 
>>> and find sin(hipart + lopart) using only built-in trig and the module's 
>>> extended-precision arithmetic: 
>>> sin(hipart)*cos(lopart) + cos(hipart)*sin(lopart). 
>>> Unfortunately, that requires much more than double precision to work well.
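>>>
>>> A minimal sketch of why that identity collapses in plain Float64 (the 
>>> function name is mine, not the module's). Since |lopart| is below one ulp 
>>> of hipart, cos(lopart) rounds to 1.0 and sin(lopart) rounds to lopart, so 
>>> the identity degenerates to a first-order correction:
>>>
>>> # sin(hi + lo) = sin(hi)*cos(lo) + cos(hi)*sin(lo) ≈ sin(hi) + cos(hi)*lo
>>> naive_dd_sin(hi::Float64, lo::Float64) = sin(hi) + cos(hi) * lo
>>>
>>> Every operation on the right-hand side is evaluated in double precision, 
>>> so the result resolves only ~53 bits, not the ~106 the pair can represent.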
>>>
>>> I have trig working at ~100 sigbits for angles in -2pi..2pi, but too 
>>> slowly.  
>>>
>>>
>>> On Saturday, October 17, 2015 at 9:12:38 AM UTC-4, John Gibson wrote:
>>>
>>>> Search for the comment that begins "OK kiddies, time for the pros...." 
>>>> in 
>>>> http://stackoverflow.com/questions/2284860/how-does-c-compute-sin-and-other-math-functions
>>>>
>>>> John
>>>>
>>>> On Saturday, October 17, 2015 at 9:00:35 AM UTC-4, John Gibson wrote:
>>>>>
>>>>> Why are you trying to roll your own sin(x) function? I think you will 
>>>>> be hard-pressed to improve on the library sin(x) in either speed or 
>>>>> accuracy.
>>>>>
>>>>> John
>>>>>
>>>>> On Saturday, October 17, 2015 at 3:38:17 AM UTC-4, Jeffrey Sarnoff 
>>>>> wrote:
>>>>>>
>>>>>> I had tried to find a clean way to jump into the Taylor series using 
>>>>>> the already well-approximated sin(x) or cos(x), and so limit the number 
>>>>>> of terms used -- there may be (probably is) a way to do this in concert 
>>>>>> with an additional tabulation (which would be fine in this case). 
>>>>>> Taylor's theorem is not numerically crisp: while I can identify the 
>>>>>> next term (using, perhaps, eps(sin(x))/3), I don't know how to back out 
>>>>>> the delta between the accumulation of the series through the prior term 
>>>>>> and the value sin(x::Float64).
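>>>>>>
>>>>>> A hypothetical sketch of the stopping rule being described (the 
>>>>>> function name and tolerance are illustrative only):
>>>>>>
>>>>>> # Sum the Taylor series of sin about 0, stopping once the next term
>>>>>> # falls below a tolerance tied to eps, e.g. eps(sin(x))/3 as above.
>>>>>> function taylor_sin(x::Float64; tol = eps(sin(x)) / 3)
>>>>>>     term = x       # current series term
>>>>>>     s = x          # running sum
>>>>>>     k = 1
>>>>>>     while abs(term) > tol
>>>>>>         term *= -x * x / ((2k) * (2k + 1))   # next odd-power term
>>>>>>         s += term
>>>>>>         k += 1
>>>>>>     end
>>>>>>     return s
>>>>>> end
>>>>>>
>>>>>> The open question is then how to recover the residual between this 
>>>>>> partial sum and the built-in sin(x::Float64).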
>>>>>>
>>>>>> On Saturday, October 17, 2015 at 3:29:28 AM UTC-4, Jeffrey Sarnoff 
>>>>>> wrote:
>>>>>>>
>>>>>>> That has promise, Kristoffer. I did port something of that nature, 
>>>>>>> expecting it to work well -- but there was some numerical mush in more 
>>>>>>> than a couple of trailing bits in some cases. Using more terms did not 
>>>>>>> help. Thinking about it just now, it might be more robust if I expand 
>>>>>>> in one direction only, up from or down from some pretabulated points.
>>>>>>>
>>>>>>> On Saturday, October 17, 2015 at 2:47:57 AM UTC-4, Kristoffer 
>>>>>>> Carlsson wrote:
>>>>>>>>
>>>>>>>> Use a truncated Taylor series around the point maybe? 
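>>>>>>>>
>>>>>>>> A minimal sketch of that suggestion, assuming sin(a) and cos(a) are 
>>>>>>>> pretabulated accurately at a nearby point a (all names here are 
>>>>>>>> hypothetical):
>>>>>>>>
>>>>>>>> # Truncated Taylor expansion of sin about a, evaluated at a + h.
>>>>>>>> # The k-th derivative of sin at a cycles through
>>>>>>>> # (sin(a), cos(a), -sin(a), -cos(a)).
>>>>>>>> function taylor_sin_at(sina, cosa, h; nterms = 8)
>>>>>>>>     derivs = (sina, cosa, -sina, -cosa)
>>>>>>>>     s, hk, fact = 0.0, 1.0, 1.0
>>>>>>>>     for k in 0:nterms-1
>>>>>>>>         s += derivs[k % 4 + 1] * hk / fact  # sin^(k)(a) * h^k / k!
>>>>>>>>         hk *= h
>>>>>>>>         fact *= k + 1
>>>>>>>>     end
>>>>>>>>     return s
>>>>>>>> end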
>>>>>>>
>>>>>>>
>
