Hello Eike,

All these tricks are applicable under slightly different conditions. I am no expert and cannot tell when to apply the first one and when the second; that is the domain of numerical analysts.

Sometimes it is not even important to be accurate in the intermediate operations. For example: if we compute both x and y slightly inaccurately, but the inaccuracies are similar, then we may still compute (x - y) quite accurately [or (x/y) accurately]. However, for this trick to work, one needs to optimise the x-calculations and the y-calculations in such a way that their inaccuracies cancel out (so that the two partial results are *similarly* inaccurate in the *same direction*). This is actually what numerical analysts spend much of their time on to increase the accuracy of a calculation.

Another feature I noticed in the R code:
- it consistently avoids combining terms of the form (x) and (1 - x),
  because the inaccuracies would not cancel out
- rather, it regroups terms so as to always use either x or 1 - x
  [or equivalent variables]
- e.g.: a - (a + b) * x
  [which equals a * (1 - x) - b * x, but references only x]
- and it calculates all terms like this, so that the inaccuracies are
  equivalent [and cancel out later]

There are surely many other tricks that I do not understand.

Eike Rathke wrote:
Hi Leonard,

On Wednesday, 2008-08-06 00:47:00 +0300, Leonard Mada wrote:

It seems to be one of the most difficult functions. To get an idea of that function, have a look at the file toms708.c of the R-project.
There is some hidden beauty in that file. ;)

Without being of much help, I just want to draw attention to some features I spotted while taking a peek at the code:
- often, (1 - x) is decomposed into more stable terms,
- from the simple 0.5 - x + 0.5
  [and x - 0.5 - 0.5 (for x - 1)],
- to more complex terms like:
   return w * (0.5 - 1.0 / w + 0.5);
  [this is actually w - 1]

Indeed, nice tricks. Though the second one already has some
cancellation error for w = 0.99999, and the result (in C/C++) is not
bitwise identical to w - 1; but that's after the 20th decimal digit in
this case, so it is negligible.

The code was well written by numerical analysts and optimised for accuracy over a large domain of values. Is it feasible to use this library

Unfortunately not, R is licensed under GPL.

In the meantime, I think that this licensing issue is one of the biggest impediments in open source. I still think that finding a proper library is probably the better way.

Sincerely,

Leonard


or a similar library (if one is found)?

The license would have to be compatible with LGPLv3 and pass the Sun
legal department.

Rewriting this code isn't going to be easy.

I'd say that for someone who is not a total expert in that field, a true
rewrite is nearly impossible.

  Eike



---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
