That was an interesting exchange.

I agree that there's probably no immediate benefit from translating the 
libm into julia, but as an exercise it might have some indirect benefits:
* It could provide a sort of "gold standard" of julia numeric code for 
people who are interested in poking around and learning about floating 
point numbers.
* It might encourage us to develop some useful idioms for cases where you 
need just a few extra bits of precision: this is a common trick in the 
openlibm code, and full double-double arithmetic would be overkill (Jiahao 
mentioned he was also interested in this for some matrix operations, so I'm 
hoping to look at it next week while I'm in Boston).
* We could borrow parts to implement more functions: e.g. a pow1p function 
for computing (1+x)^y accurately.
* It might be possible to modify the code to take advantage of native fma 
instructions as they become more prevalent. I think a prime candidate here 
is the pow function, since it does a lot of Horner evaluations and 
extra-precision arithmetic.

I have a few bits and pieces that I've translated while playing around. In 
case anyone else is interested, I've put them together here:
https://github.com/simonbyrne/libm.jl

simon

On Thursday, 28 August 2014 22:28:57 UTC+1, Stefan Karpinski wrote:
>
> Interesting. I'm not sure how bad the function call overhead in JavaScript 
> is (or why it would be bad), but for us it's the same as one C function 
> calling another C function. The main advantages of reimplementing things in 
> pure Julia are: (a) generalization – same algorithm can apply to many 
> different types and/or (b) the ability to have the call inlined into 
> callers. I'm not sure if sin is the kind of function that benefits much 
> from inlining. The fdlibm algorithm is going to be pretty type-specific, so 
> generality is unlikely to benefit.
>
>
> On Thu, Aug 28, 2014 at 5:05 PM, Jason Merrill <[email protected]> wrote:
>
>> If you love the details of floating point math, may I humbly recommend 
>> reading
>>
>> https://code.google.com/p/v8/issues/detail?id=3006
>>
>> This isn't so Julia related, per se, but I find it fascinating, and can't 
>> think of a better place to find other people who might also.
>>
>> Here's my TLDR:
>>
>> 1. Someone working on v8 wants to get rid of the overhead of calling out 
>> to C, so they implement floating point trig functions in pure JS.
>> 2. Because of haphazard range reduction, the new code sometimes gives 
>> very different answers than the old glibc code. The worst, in my mind, is 
>> sin(-1e-18) now returns 0 instead of -1e-18. This shipped! Try 
>> Math.sin(-1e-18) in the version of Chrome you're probably using to read 
>> this right now.
>> 3. Someone else complains about this, and is nearly shouted down over 
>> claims of "the spec allows it."
>> 4. Complainer implements a prototype straight port of fdlibm trig 
>> functions in pure JS, which gives both speed and good answers.
>> 5. Original implementer sees the light, and works to shepherd an 
>> implementation of this idea into a pull request.
>>
>> I was on the edge of my seat the whole time!
>>
>> And to try to relate this to Julia... would it be a crazy idea to 
>> reimplement some of libm (e.g. trig functions) in Julia, as the v8 team has 
>> chosen to do for JS here? Is there a reason that v8 has much higher 
>> overhead to call out to C for this than Julia does?
>>
>
>