Hello all,

This mail is about
 BETADIST(x, alpha, beta, lower bound, upper bound, cumulative)

I have attached the current state of my work to issue 91547. The patch is already in a state where it could be used, but there are still problems. I have attached it anyway, so that you can have a look at it, test it and report defects. Perhaps someone has a good idea for solving one of the problems. In any case, the algorithms are better than the current implementation in the normal cases.

Problems:
(1)
The definition in OpenDocument-formula-20080618.odt, section 6.17.7, has errors in the "density" case. Eike, in addition to the document I already sent to you: the definition does not state what result should be returned by
  BETADIST(1,1,beta,0,1,false()) for beta < 1
  BETADIST(0,alpha,1,0,1,false()) for alpha < 1
In both cases the density has a pole there. For now I return an "illegal argument" error.
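For illustration, a minimal Python sketch (my own, not the patch's C++ code) of the standard density x^(alpha-1) * (1-x)^(beta-1) / B(alpha,beta), with the two pole cases treated as errors and the finite boundary limits filled in:

```python
import math

def beta_pdf(x, a, b):
    """Beta density on [0,1]; raises ValueError at the poles named above."""
    if not 0.0 <= x <= 1.0:
        return 0.0
    # The spec leaves these cases open; the density diverges here.
    if (x == 1.0 and b < 1.0) or (x == 0.0 and a < 1.0):
        raise ValueError("illegal argument: density has a pole")
    if x == 0.0:
        return b if a == 1.0 else 0.0   # limit: a > 1 -> 0, a == 1 -> b
    if x == 1.0:
        return a if b == 1.0 else 0.0   # limit: b > 1 -> 0, b == 1 -> a
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1.0) * math.log(x)
                    + (b - 1.0) * math.log1p(-x) - log_beta)
```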

(2)
Nearly all of the internal terms contain parts like (1-x)^r. When the argument x is close to 1, roughly x > 0.9999, the term (1-x)^r suffers from large cancellation errors. I know of no way to avoid this. Switching to a power series 1 + x + x^2 + x^3 + ... is no solution either, because it converges extremely slowly for x near 1.

That leads to the question: what accuracy should the function have in which part of the domain? My suggestion would be, for x > 0.9999, not to try to achieve more accuracy, but to document the loss of accuracy in the application help.
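A rough way to quantify the loss (a back-of-envelope estimate, not code from the patch): the half-ulp rounding error already present in x, u = 2^-53, turns into a relative error of about u/(1-x) in the factor (1-x), and pow() multiplies that by |r|:

```python
import math

U = 2.0 ** -53  # half-ulp relative error carried by x near 1

def digits_left(x):
    """Rough count of correct decimal digits remaining in (1 - x)."""
    return -math.log10(U / (1.0 - x))

for x in (0.9, 0.9999, 0.999999999999):
    print(f"x = {x}: about {digits_left(x):.0f} correct digits in (1 - x)")
```

So at x = 0.9999 about four digits are already gone before (1-x)^r is even raised to the power, which matches the suggested cutoff.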

(3)
For x near alpha/(alpha+beta), which is the mean, the loops need a huge number of iterations. I counted more than 50000 in some cases. Currently the algorithm allows these 50000 iterations, and the accuracy reaches up to 12 digits. Limiting the number of iterations to a reasonable value loses accuracy in those cases. The normal number of iterations is below 50.

I tried to shift up and down - using relations like I_x(alpha+1,beta) -, but then the accuracy decreases. The problem gets worse when alpha is large and beta is small, which gives a mean near 1, so problem (2) hits in addition.

If someone knows a solution that gives more accuracy with fewer iterations, please let me know. I failed with the algorithm BASYM from Didonato, likely because of the erfc function it needs. In a test as a BASIC macro I got only 6 digits of accuracy.
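For anyone who wants to experiment with the iteration-limit question: below is a Python sketch of the standard continued-fraction evaluation of I_x(a,b) (modified Lentz method, as found in the numerics literature - this is NOT the patch's algorithm), with an iteration counter added:

```python
import math

MAX_ITER = 50000
EPS = 3e-14
FPMIN = 1e-300

def betacf(a, b, x):
    """Continued fraction for I_x(a,b); returns (value, iterations used)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < FPMIN:
        d = FPMIN
    d = 1.0 / d
    h = d
    for m in range(1, MAX_ITER + 1):
        m2 = 2 * m
        # even step of the continued fraction
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < FPMIN:
            d = FPMIN
        c = 1.0 + aa / c
        if abs(c) < FPMIN:
            c = FPMIN
        d = 1.0 / d
        h *= d * c
        # odd step
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < FPMIN:
            d = FPMIN
        c = 1.0 + aa / c
        if abs(c) < FPMIN:
            c = FPMIN
        d = 1.0 / d
        dele = d * c
        h *= dele
        if abs(dele - 1.0) < EPS:
            return h, m
    return h, MAX_ITER  # no convergence within the limit

def ibeta(a, b, x):
    """I_x(a,b), i.e. BETADIST(x,a,b,0,1,true()); returns (value, iterations)."""
    if x <= 0.0:
        return 0.0, 0
    if x >= 1.0:
        return 1.0, 0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log1p(-x))
    if x < (a + 1.0) / (a + b + 2.0):
        h, n = betacf(a, b, x)
        return math.exp(ln_front) * h / a, n
    h, n = betacf(b, a, 1.0 - x)  # symmetry: I_x(a,b) = 1 - I_{1-x}(b,a)
    return 1.0 - math.exp(ln_front) * h / b, n

# Iterations stay small away from the mean and grow near it:
for x in (0.1, 0.4999):
    v, n = ibeta(500.0, 500.0, x)
    print(f"I_{x}(500,500) = {v!r} after {n} iterations")
```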

There will be a new book [1] about the numeric of special functions end of July, and I hope to find some solutions there. But till I get the book via public library, and read it, and test algorithms, it will be to late to get a solution into OOo3.1.

What to do? Set a lower limit? Implement some shifting, which would decrease the iterations in many cases, but give less accuracy?

Return the values reached, although they are not as accurate as the others, or set a "no convergence" error?


(4)
Which values of alpha and beta should be supported? The larger they are, the smaller the range in which the result goes from near 0 to near 1, so a single machine number for x would cover a large range of "correct" results. I have no experience with using BETADIST in real life, but I doubt that something like alpha=20000 is really needed.
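To put a number on how fast that range shrinks (assuming, for simplicity, the symmetric case alpha = beta = n, which is my own illustration, not a case from the patch): the standard deviation of Beta(n,n) is 1/(2*sqrt(2n+1)), so the transition region narrows like 1/sqrt(n):

```python
import math

def beta_nn_sd(n):
    """Standard deviation of the symmetric Beta(n, n) distribution."""
    return 0.5 / math.sqrt(2.0 * n + 1.0)

for n in (10, 20000, 1e9):
    print(f"alpha = beta = {n:g}: std dev about {beta_nn_sd(n):.2e}")
```

Even at alpha = beta = 20000 the transition region is still about 0.0025 wide, so plenty of machine numbers fall inside it; the "one machine number covers many results" effect only becomes acute for far larger parameters.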


(5)
The spec says that the "Cumulative" parameter has the type "logical". As which type is it pushed onto the stack, and how should I fetch it from the stack?


No problems, but to-dos:
(6)
Adapt the algorithms to the solution concerning expm1 and log1p.
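For illustration, the kind of substitution meant here, sketched in Python (the patch itself is C++, where the same expm1/log1p functions exist): replace log(1-x) by log1p(-x) and exp(y)-1 by expm1(y) wherever the argument can be tiny, because the naive forms round the small argument away:

```python
import math

def log_one_minus(x):
    """log(1 - x), accurate even for tiny x."""
    return math.log1p(-x)

def exp_minus_one(y):
    """exp(y) - 1, accurate even for tiny y."""
    return math.expm1(y)

x = 1e-12
print(math.log(1.0 - x), log_one_minus(x))   # naive vs. accurate
print(math.exp(x) - 1.0, exp_minus_one(x))   # naive vs. accurate
```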

(7)
Remove the part with ScTTT, which I have included for testing.

(8)
Write a spec, and change UI and application help for the sixth parameter.

(9)
The patch contains algorithms for the Beta function in both normal and logarithmic versions. They are needed for BETADIST. If a spec is written, both functions can easily be made available in the UI. Currently they are only mentioned in the "huge" group in OpenDocument-formula-20080618.odt.


kind regards
Regina


[1] http://www.amazon.de/Numerical-Methods-for-Special-Functions/dp/0898716349/ref=sr_1_2?ie=UTF8&s=books-intl-de&qid=1216406061&sr=1-2
