[ https://issues.apache.org/jira/browse/NUMBERS-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17340157#comment-17340157 ]

Alex Herbert commented on NUMBERS-156:
--------------------------------------

Thanks for the benchmark.

I assume the ulp error for the unsafe method includes the cases where the 
result was not finite. The concept of ULP is invalid when comparing an infinite 
value to a finite one. The effect is that the unsafe method is penalised for the 
cases it cannot compute, and this distorts the error scores for the cases it can 
compute. If you repeat the metric with those cases excluded, I expect the unsafe 
method, when the computation is possible, to have an error similar to that of 
the exact scaling method.
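
For illustration, a sketch of the metric with those cases excluded (the array 
arguments stand in for the paired results from the two implementations in the 
benchmark; this is not the benchmark's actual code):
{code:java}
// Sketch only: mean ULP error over the cases the unsafe method can actually compute.
// Non-finite results from the unsafe method are skipped rather than penalised.
static double meanUlpError(double[] unsafeResults, double[] exactResults) {
    long totalUlps = 0;
    int counted = 0;
    for (int i = 0; i < unsafeResults.length; i++) {
        final double unsafe = unsafeResults[i];
        if (!Double.isFinite(unsafe)) {
            continue; // case the unsafe method cannot compute
        }
        // ULP distance between two finite non-negative doubles via their bit patterns.
        totalUlps += Math.abs(Double.doubleToLongBits(exactResults[i]) - Double.doubleToLongBits(unsafe));
        counted++;
    }
    return (double) totalUlps / counted;
}
{code}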

I find it strange that the unsafe method is slower. I attribute this to slower 
computations once the doubles have overflowed to non-finite values, as the 
computation does work to detect and then preserve the infinite representation, 
e.g. in the final Math.sqrt.

Are you suggesting something like:
{code:java}
public final class Norms {
    public static double l2(double x);
    public static double l2(double x, double y);
    public static double l2(double x, double y, double z);
    public static double l2(double[] x);
}
{code}
 
 In this case the 1D and 2D methods are fulfilled by Math.abs and Math.hypot 
respectively. So I think they are not required.
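
That is, the existing JDK methods already cover those two cases, e.g.:
{code:java}
// 1D and 2D L2 norms are already covered by the JDK:
double n1 = Math.abs(-3.0);        // |x|
double n2 = Math.hypot(3.0, 4.0);  // sqrt(x*x + y*y) without intermediate over/underflow
{code}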

The l1 norm is a simple sum of the absolute values. This is trivial and would 
be safe (until the result itself overflows). Is there a use case for this?
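
Something like (sketch only, the method name is illustrative):
{code:java}
// Sketch: the L1 norm is a plain sum of absolute values; no rescaling is needed
// until the sum itself overflows.
public static double l1(double[] x) {
    double sum = 0;
    for (double v : x) {
        sum += Math.abs(v);
    }
    return sum;
}
{code}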

The LInf norm is the max of the absolute values, so it would require no special 
coding for under/overflow.
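
e.g. (sketch only):
{code:java}
// Sketch: the L-infinity norm is the maximum absolute value, so it cannot
// over- or underflow beyond what the inputs already are.
public static double linf(double[] x) {
    double max = 0;
    for (double v : x) {
        max = Math.max(max, Math.abs(v));
    }
    return max;
}
{code}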

The p-norm is the pth root of the sum of the absolute values of the terms 
raised to the power p. It is harder to create a safe version of this using the 
method I proposed, as the threshold limits are set based on over/underflow and 
depend on p. They are roughly:
||p||pow(Double.MAX_VALUE, 1/p)||pow(Double.MIN_NORMAL, 1/p)||
|2|1.3407807929942596E154|1.4916681462400413E-154|
|3|5.643803094122288E102|2.8126442852362986E-103|
|4|1.157920892373162E77|1.221338669755462E-77|
|5|4.4765466227572707E61|2.947602296969177E-62|
|6|2.3756689782295612E51|5.303436890579823E-52|
|7|1.0873965158377428E44|1.12103877145986E-44|
|8|3.4028236692093846E38|3.4947656141084224E-39|
|9|1.7804260956663496E34|6.55196552339639E-35|
|10|6.690699980388652E30|1.7168582635060987E-31|
|11|1.0547656064814833E28|1.0754166757288704E-28|
|12|4.874083481260411E25|2.3029192106063608E-26|
|13|5.151114421059686E23|2.15978793161348E-24|
|14|1.0427830626922086E22|1.0587911840678785E-22|
|15|3.550703192779518E20|3.089035946245968E-21|
|16|1.8446744073709552E19|5.91165426433957E-20|
|17|1.35715773748444416E18|7.994383288523771E-19|
|18|1.33432608295961504E17|8.094421241445487E-18|
|19|1.6746821753625154E16|6.423252137303782E-17|
|20|2.5866387417628795E15|4.1434988397562017E-16|

The upper threshold numbers should be divided by the number of terms so the sum 
of the terms does not overflow.

These can be precomputed for p, along with an appropriate power-of-2 scaling 
factor to scale up and down. But for any p above the precomputed table size, a 
method would be needed to generate the thresholds and scaling factors 
dynamically.
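
As a rough sketch of the dynamic case (names and the scaling policy are 
illustrative, not a proposal for the final API):
{code:java}
// Sketch: derive the over/underflow thresholds and a power-of-2 scaling factor
// for an arbitrary p and n terms.
static double[] pNormThresholds(int p, int n) {
    // Largest per-term magnitude such that the sum of n terms raised to p stays finite.
    final double upper = Math.pow(Double.MAX_VALUE, 1.0 / p) / n;
    // Smallest per-term magnitude whose pth power is still a normal number.
    final double lower = Math.pow(Double.MIN_NORMAL, 1.0 / p);
    // Power-of-2 exponent large enough that any finite double, once scaled down,
    // falls below the upper threshold; scaling by a power of 2 is exact.
    final int e = Math.getExponent(Double.MAX_VALUE) - Math.getExponent(upper) + 1;
    return new double[] {upper, lower, Math.scalb(1.0, -e), Math.scalb(1.0, e)};
}
{code}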

Is there a use case for a safe implementation of the p-norm?

 

> SafeNorm 3D overload
> --------------------
>
>                 Key: NUMBERS-156
>                 URL: https://issues.apache.org/jira/browse/NUMBERS-156
>             Project: Commons Numbers
>          Issue Type: Improvement
>            Reporter: Matt Juntunen
>            Priority: Major
>
> We should create an overload of {{SafeNorm.value}} that accepts 3 arguments 
> to potentially improve performance for 3D vectors.


