[
https://issues.apache.org/jira/browse/NUMBERS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706947#comment-17706947
]
Alex Herbert commented on NUMBERS-167:
--------------------------------------
The Boost implementation of the gamma functions included the ability to compute
the gradient at the same time as the function value, i.e. the PDF and CDF of the
gamma (or chi-squared) distribution in a single call. This is used to create an
efficient inverse function using a gradient-based optimiser. I removed the
functionality to compute the gradient. Adding inverse functions that can be used
within Statistics to invert the CDF could be future work.
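For illustration only, a gradient-based inversion needs little more than the CDF
and its derivative with respect to x, which is the gamma PDF,
exp((a - 1) ln(x) - x - lnGamma(a)). A minimal Newton iteration against the
current public API might look like the sketch below (the class and method names
are invented for this comment, the initial guess is crude, and there are no
bracketing safeguards, so this is not a production-quality inverse):
{code:java}
import org.apache.commons.numbers.gamma.LogGamma;
import org.apache.commons.numbers.gamma.RegularizedGamma;

/** Illustrative sketch only; this class is not part of Commons Numbers. */
public class InverseRegularizedGammaP {
    /**
     * Solve P(a, x) = p for x using Newton's method.
     * The gradient dP/dx is the gamma(a, 1) density, which is the value a
     * combined function/gradient routine would return alongside the CDF.
     */
    static double inverseP(double a, double p) {
        final double logGammaA = LogGamma.value(a); // reused on every iteration
        double x = a;                               // crude starting point
        for (int i = 0; i < 100; i++) {
            final double f = RegularizedGamma.P.value(a, x) - p;
            // Gradient of P with respect to x: the gamma(a, 1) density at x.
            final double df = Math.exp((a - 1) * Math.log(x) - x - logGammaA);
            final double dx = f / df;
            x -= dx;
            if (Math.abs(dx) < 1e-12 * Math.abs(x)) {
                break;
            }
        }
        return x;
    }

    public static void main(String[] args) {
        final double a = 3.5;
        final double p = RegularizedGamma.P.value(a, 2.0);
        // Should recover x = 2.0 (approximately).
        System.out.println(inverseP(a, p));
    }
}
{code}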
As for your use case, I can see the benefit of precomputing the log gamma value
for repeated calls. However, I cannot say how large the performance gain would be
without re-familiarising myself with all the code. This change would also benefit
a few of the Statistics distributions, so the ticket should not be closed just
yet. It can remain open as a future challenge.
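To make the repeat-call pattern concrete, here is a small example against the
current API (the loop bounds and values are arbitrary, and the example class is
invented for this comment). The log gamma term depends only on {{a}} and could be
hoisted out of the loop, but the existing two-argument {{value}} method
re-evaluates it internally on every call; the commented line shows the overload
proposed in this ticket, which does not exist yet:
{code:java}
import org.apache.commons.numbers.gamma.LogGamma;
import org.apache.commons.numbers.gamma.RegularizedGamma;

public class RepeatedEvaluationExample {
    public static void main(String[] args) {
        final double a = 4.25;

        // Depends only on 'a'; could be computed once for the whole loop.
        final double logGammaA = LogGamma.value(a);
        System.out.println("lnGamma(a) = " + logGammaA);

        for (double x = 0.5; x <= 10; x += 0.5) {
            // Current API: LogGamma.value(a) is recomputed inside this call.
            final double p = RegularizedGamma.P.value(a, x);

            // Proposed overload (hypothetical, from this ticket; not yet in the library):
            // final double p = RegularizedGamma.P.value(a, x, logGammaA);

            System.out.printf("P(%.2f, %.2f) = %.10g%n", a, x, p);
        }
    }
}
{code}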
> RegularizedGamma.P with precomputed LogGamma value
> --------------------------------------------------
>
> Key: NUMBERS-167
> URL: https://issues.apache.org/jira/browse/NUMBERS-167
> Project: Commons Numbers
> Issue Type: Wish
> Components: gamma
> Reporter: Gilles Sadowski
> Priority: Minor
> Fix For: 1.2
>
> Attachments: pr_106.patch
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> We have
> {code:java}
> double v = RegularizedGamma.P.value(a, x);
> {code}
> where method {{value}} internally calls {{LogGamma.value(a)}}.
> There is a use-case for
> {code:java}
> double logGammaA = LogGamma.value(a);
> double v = RegularizedGamma.P.value(a, x, logGammaA);
> {code}
> for when the user varies {{x}} but not {{a}}.
> Method name TBD: Another overload of {{value}} may be confusing (?).