[ https://issues.apache.org/jira/browse/NUMBERS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17343107#comment-17343107 ]
Alex Herbert commented on NUMBERS-156:
--------------------------------------
Thanks for updating the test with more details.
{quote} * {{direct}} somehow has the worst performance for numbers in the low
range. I'm not sure how that's possible.{quote}
The low range can underflow to zero. You should detect these cases as well as
the overflow to non-finite and discard them.
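For example, a sketch of a filter for the test harness (the helper name and its use are placeholders, not part of the benchmark):
{code:java}
/**
 * Detect cases where the direct sum of squares degenerates: the result
 * underflows to zero or overflows to a non-finite value. Such samples
 * should be discarded from the ULP comparison.
 *
 * @param v Cartesian coordinates.
 * @return true if the direct computation is unusable
 */
private static boolean directIsDegenerate(double[] v) {
    double sum = 0;
    for (final double x : v) {
        sum += x * x;
    }
    final double norm = Math.sqrt(sum);
    return norm == 0 || !Double.isFinite(norm);
}
{code}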
{quote} * It is interesting to note that {{enormMod}} has a lower accuracy
score on the mid-high and mid-low ranges, which straddle the scale
thresholds.{quote}
This is the situation where the second term in the following code has some
magnitude:
{code:java}
if (s1 != 0) {
    return Math.sqrt(s1 + s2 * 0x1.0p-600 * 0x1.0p-600) * 0x1.0p600;
} else if (s2 != 0) {
    return Math.sqrt(s2 + s3 * 0x1.0p-600 * 0x1.0p-600);
}
{code}
In this case the summation of the original input terms has had its order
changed. Small squares are summed together and large squares are summed
together, then the partial sums are combined, rather than all terms being
accumulated into one single sum. In both cases (mid-high and mid-low) the mean
ULP is below that of the direct method. I would guess that the split sum is
missing a few low bits that would be carried in by round-to-nearest as each
term is added to a single sum.
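As a toy illustration that summation order alone can change the rounded result
(this is not the benchmark code, just a minimal demonstration):
{code:java}
public static void main(String[] args) {
    final double a = 1.0;
    final double b = 0x1.0p-53; // half an ULP of 1.0
    final double c = 0x1.0p-53;
    // Single accumulator: each half-ULP term is lost to round-to-nearest-even
    final double single = (a + b) + c;   // 1.0
    // Partial sum combined at the end: the two small terms survive as one ULP
    final double split = a + (b + c);    // 1.0 + 0x1.0p-52
    System.out.println(single == split); // false
}
{code}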
This ULP test does suffer from using random inputs. Ideally you would have one
set of random inputs whose exponents are in the range -10 to 10. These are then
scaled to straddle the scaling thresholds (e.g. up to exponents of 490 to 510
and down to -510 to -490). You will then see the effect that straddling the
scaling threshold has on the accuracy.
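For example (a sketch of the suggested input generation; the class and method
names are placeholders):
{code:java}
import java.util.SplittableRandom;

/** Placeholder scaffolding for inputs that straddle the scaling thresholds. */
public class NormTestData {
    /**
     * Create one base sample with exponents in [-10, 10] and two scaled
     * copies with exponents that straddle the 0x1.0p500 and 0x1.0p-500
     * scaling thresholds.
     *
     * @param rng source of randomness
     * @param n vector length
     * @return the {base, high, low} vectors
     */
    public static double[][] scaledCopies(SplittableRandom rng, int n) {
        final double[] base = new double[n];
        for (int i = 0; i < n; i++) {
            // Random sign, significand in [1, 2), exponent in [-10, 10]
            final double m = 1 + rng.nextDouble();
            final int e = rng.nextInt(21) - 10;
            base[i] = Math.scalb(rng.nextBoolean() ? m : -m, e);
        }
        final double[] high = base.clone();
        final double[] low = base.clone();
        for (int i = 0; i < n; i++) {
            high[i] *= 0x1.0p500;   // exponents approximately 490 to 510
            low[i] *= 0x1.0p-500;   // exponents approximately -510 to -490
        }
        return new double[][] {base, high, low};
    }
}
{code}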
For reference, could you try the following version using [Kahan
summation|https://en.wikipedia.org/wiki/Kahan_summation_algorithm]:
{code:java}
/**
 * @param v Cartesian coordinates.
 * @return the 2-norm of the vector.
 */
public static double value(double[] v) {
    // Sum of big, normal and small numbers with Kahan summation
    double s1 = 0;
    double s2 = 0;
    double s3 = 0;
    double c1 = 0;
    double c2 = 0;
    double c3 = 0;
    for (int i = 0; i < v.length; i++) {
        final double x = Math.abs(v[i]);
        if (x > 0x1.0p500) {
            // Scale down big numbers
            final double y = square(x * 0x1.0p-600) - c1;
            final double t = s1 + y;
            c1 = (t - s1) - y;
            s1 = t;
        } else if (x < 0x1.0p-500) {
            // Scale up small numbers
            final double y = square(x * 0x1.0p600) - c3;
            final double t = s3 + y;
            c3 = (t - s3) - y;
            s3 = t;
        } else {
            // Unscaled
            final double y = square(x) - c2;
            final double t = s2 + y;
            c2 = (t - s2) - y;
            s2 = t;
        }
    }
    // The highest sum is the significant component. Add the next significant.
    // Ignore computing the compensation.
    if (s1 != 0) {
        final double y = s2 * 0x1.0p-600 * 0x1.0p-600 - c1;
        return Math.sqrt(s1 + y) * 0x1.0p600;
    } else if (s2 != 0) {
        final double y = s3 * 0x1.0p-600 * 0x1.0p-600 - c2;
        return Math.sqrt(s2 + y);
    }
    return Math.sqrt(s3) * 0x1.0p-600;
}

/**
 * Square the value.
 *
 * @param x the value
 * @return x^2
 */
private static double square(double x) {
    return x * x;
}
{code}
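A quick check of the above, assuming a {{main}} in the same class, on inputs
that straddle the upper threshold where the direct sum of squares overflows:
{code:java}
public static void main(String[] args) {
    // The largest square is 0x1.0p1026, which overflows a plain sum of squares
    final double[] v = {0x1.0p513, 0x1.0p500, 0x1.0p499};
    final double direct = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    System.out.println(direct);   // Infinity
    System.out.println(value(v)); // finite, approximately 0x1.0p513
}
{code}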
> SafeNorm 3D overload
> --------------------
>
> Key: NUMBERS-156
> URL: https://issues.apache.org/jira/browse/NUMBERS-156
> Project: Commons Numbers
> Issue Type: Improvement
> Reporter: Matt Juntunen
> Priority: Major
>
> We should create an overload of {{SafeNorm.value}} that accepts 3 arguments
> to potentially improve performance for 3D vectors.