Hi Eike,

Eike Rathke schrieb:
Hi Regina,

I'll try to give some long outstanding answers to questions you asked
shortly before I went to OOoCon and then into vacation and then..

Hope you had a nice time on vacation. I'm pleased that you remembered that I had some questions.

On Thursday, 2010-08-26 22:33:20 +0200, Regina Henschel wrote:

next problem with matrices :(

(All with a German locale, with comma as decimal delimiter)

Fill A1:C3 with
1       2       3
3       6       9
9,1     18      27
Calculate =MINVERSE(A1:C3)

I get
0,00000000-1.#NANE+000  #VALUE! #VALUE!
#NUM!   #NUM!   #VALUE!
#NUM!   #NUM!   #VALUE!
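(A side note on the example: this matrix is exactly singular, because the second row is three times the first, so its determinant is zero and no inverse exists. A quick illustrative check, not Calc code:)

```cpp
#include <cassert>
#include <cmath>

// Cofactor expansion of the 3x3 determinant for the example matrix
// above. Since row 2 is exactly 3 * row 1, the determinant is
// mathematically zero; in doubles it comes out zero up to rounding.
double lcl_Det3( const double a[3][3] )
{
    return a[0][0] * (a[1][1]*a[2][2] - a[1][2]*a[2][1])
         - a[0][1] * (a[1][0]*a[2][2] - a[1][2]*a[2][0])
         + a[0][2] * (a[1][0]*a[2][1] - a[1][1]*a[2][0]);
}
```

So any finite numbers MINVERSE returns here are meaningless; the only question is which error to report.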

I got different results in OOO330m10 and DEV300m85 (no error at all), and they even differ in one value of the last column:

OOO330m10 Solaris/x86:

28.1318681319             -3.5164835165                     10
-819855292164869000  273285097388290000  7.79926253788309E-015
546570194776579000  -182190064925526000          -3.3333333333

DEV300m85 Linux/x86:

28.1318681319             -3.5164835165            10
-819855292164869000  273285097388290000             0
546570194776579000  -182190064925526000 -3.3333333333

Of course both obviously look wrong. The difference of 0 vs.
7.79926253788309E-015 might be due to different compiler
optimizations, though it looks suspicious. I assume you're working on
Windows. It would be good to know what exactly happens.

I guess that the wrong notation in the upper left cell is already
tracked in issue 114125.

That looks related, though at the moment I don't know how that could
occur in Calc. We usually convert all INF and NAN to errors. Which
milestone did you use?

I see this in DEV300m88:
#NUM!   #NUM!   #VALUE!
#NUM!   #NUM!   #VALUE!

But I think Calc should not return #NUM!
or #VALUE! at all, but Err:502 (illegal argument), because the
matrix is singular.
The LU decomposition has a zero on the diagonal, so it is possible
to detect this case. Excel and Gnumeric return #NUM! in the whole
result matrix.
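To make the detection idea concrete, here is a minimal sketch (not the actual lcl_LUP_decompose code; the function name and the fEps parameter are made up for illustration): an LU decomposition with partial pivoting can report singularity as soon as no usable pivot remains. As this thread shows, comparing against an exact 0.0 is not enough, because rounding can leave a tiny nonzero pivot such as 1.8e-18, so a tolerance is needed.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch only: in-place LU decomposition with partial pivoting.
// Returns false when the largest remaining pivot is not larger than
// fEps, i.e. the matrix is singular or numerically close to it.
bool lcl_LUDecompose( std::vector<std::vector<double> >& rA, double fEps )
{
    const std::size_t n = rA.size();
    for (std::size_t k = 0; k < n; ++k)
    {
        // Partial pivoting: pick the row with the largest absolute
        // value in column k.
        std::size_t nPivot = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::fabs( rA[i][k]) > std::fabs( rA[nPivot][k]))
                nPivot = i;
        if (std::fabs( rA[nPivot][k]) <= fEps)
            return false;       // no usable pivot => singular
        std::swap( rA[k], rA[nPivot]);
        // Eliminate below the pivot, storing multipliers in the L part.
        for (std::size_t i = k + 1; i < n; ++i)
        {
            rA[i][k] /= rA[k][k];
            for (std::size_t j = k + 1; j < n; ++j)
                rA[i][j] -= rA[i][k] * rA[k][j];
        }
    }
    return true;
}
```

With the matrix from this thread and fEps = 1e-12 the decomposition is rejected, while the exact-zero test (fEps = 0.0) may or may not catch it, depending on whether the last pivot rounds to 0 or to something like 1.8e-18. The open question remains what fEps should be relative to the matrix scale.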

I ran that in a non-product debug build where the LU decomposition is
written to stderr; there was no 0, which explains why singularity was
not detected. The code is in interpr5.cxx at line 767:

     fprintf( stderr, "\n%s\n", "lcl_LUP_decompose(): LU");

and displayed

      9.1        18        27
     0.33     0.066     0.099
     0.11      0.33   1.8e-18

Can you compare that with your values?

I work on WinXP with Cygwin. What exactly do I have to do? I have tried building with 'sc> build debug=true' or with 'sc> build dbglevel=2', but I see no effect.

Then I removed the 'OSL_DEBUG_LEVEL > 1' condition and called 'scalc.exe 2>&1' from within Cygwin. Sometimes I get output, sometimes not; I do not know how to force it.
The times I do get output, it is
        9.1     18      27
        0.33    0.066   0.099
        0.11    0.33    0

My own build is currently based on DEV300m86.
I have added a test at the end of 'static int lcl_LUP_decompose', before
 'return nSign;':

    bool bSingular = false;
    for (SCSIZE i = 0; i < n && !bSingular; i++)
        bSingular = (mA->GetDouble(i,i) == 0.0);
    if (bSingular)
        nSign = 0;

That catches the simple case of exact zero.

In ScInterpreter::ScMatInv() line 924 some possible checks are
documented, of which one is implemented but disabled because
a "reasonably sufficient error margin" would have to be found for
fInvEpsilon. That would then set errIllegalArgument. Maybe going into
detail there could solve the problem for MINVERSE.
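As an illustration of such a check (the helper name and the choice of norm are my own invention, not what interpr5.cxx actually does): multiply A by the computed inverse and measure the largest deviation of the product from the identity; ScMatInv() would then compare that residual against fInvEpsilon and set errIllegalArgument when it is too large.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

typedef std::vector<std::vector<double> > Matrix;

// Sketch only: largest element-wise deviation of A * Ainv from the
// identity matrix. A caller would compare this against fInvEpsilon.
double lcl_InverseResidual( const Matrix& rA, const Matrix& rInv )
{
    const std::size_t n = rA.size();
    double fMax = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
        {
            double fSum = 0.0;
            for (std::size_t k = 0; k < n; ++k)
                fSum += rA[i][k] * rInv[k][j];
            // The product should be 1 on the diagonal, 0 elsewhere.
            double fExpected = (i == j) ? 1.0 : 0.0;
            fMax = std::max( fMax, std::fabs( fSum - fExpected));
        }
    return fMax;
}
```

For the garbage "inverses" shown earlier in this thread the residual is huge, so even a generous fInvEpsilon would catch them; the hard part is, as said, choosing the margin for merely ill-conditioned inputs.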

If the user sees this result, he will be cautious. But it might be
hidden as an intermediate part of a larger formula, so the user does
not notice that the result is totally wrong. LINEST needs to
calculate an inverse matrix for the statistics, but of course it does
not show the matrix, only the statistics, so the user might not
detect that the values are wrong.
Gnumeric returns #ZAHL! (the German #NUM!) errors, and Excel returns
the same wrong values as Calc.

Should I test the intermediate results in LINEST to catch these cases
and return an error?

Do you have a recipe to detect such cases? An error would be far better
than wrong results.

Not really. It seems to be one of the hard problems.

In the case of a matrix and its inverse it is possible to calculate the condition number of the matrix as ||A||*||A^-1||, where ||.|| denotes the maximum absolute row sum. Matrices with a large condition number are likely singular or ill-conditioned. But there still is the problem of what counts as "large".
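Sketched in code (illustrative names, not Calc code), the estimate looks like this; a common rule of thumb is that cond(A) approaching 1/DBL_EPSILON (about 4.5e15) means the computed inverse has essentially no correct digits, but any fixed cutoff below that remains a judgment call.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

typedef std::vector<std::vector<double> > Matrix;

// Maximum absolute row sum, i.e. the infinity norm of the matrix.
double lcl_RowSumNorm( const Matrix& rA )
{
    double fNorm = 0.0;
    for (std::size_t i = 0; i < rA.size(); ++i)
    {
        double fRowSum = 0.0;
        for (std::size_t j = 0; j < rA[i].size(); ++j)
            fRowSum += std::fabs( rA[i][j]);
        fNorm = std::max( fNorm, fRowSum);
    }
    return fNorm;
}

// Condition number estimate from the mail: ||A|| * ||A^-1||.
double lcl_Condition( const Matrix& rA, const Matrix& rInv )
{
    return lcl_RowSumNorm( rA) * lcl_RowSumNorm( rInv);
}
```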

In the meantime I have worked further on LINEST. I have tried QR decomposition instead of LU decomposition. I know it is more time consuming, but the accuracy is far better. For example, something like =LINEST(B11:B16;{100|101|102|103|104|105}^{1;2;3;4}) to get a polynomial regression gives results with 3-digit accuracy, where our current version of LINEST fails totally.

Unfortunately the QR decomposition has problems with singular matrices too. Using column pivoting gives an R matrix where you can compare the diagonal elements to the first one to detect singularity, again with some magic epsilon. But with column pivoting the result can no longer be used for linear regression. I don't know whether such a comparison of the diagonal elements of R is also possible if the columns are not swapped.
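To make the diagonal-comparison idea concrete, here is a sketch (modified Gram-Schmidt with column pivoting for brevity; a production version would rather use Householder reflections, and the epsilon is again a magic number). The diagonal of R is the norm of each pivot column after orthogonalization, so a diagonal element that is tiny relative to R[0][0] signals a numerically dependent column:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Columns stored as vectors: aCols[j] is the j-th column of A.
typedef std::vector<std::vector<double> > Columns;

// Sketch only: numerical rank via QR with column pivoting, counting
// the diagonal elements of R that are not tiny relative to R[0][0].
std::size_t lcl_QRRank( Columns aCols, double fEps )
{
    const std::size_t nCols = aCols.size();
    const std::size_t nRows = aCols.empty() ? 0 : aCols[0].size();
    double fFirstDiag = 0.0;
    std::size_t nRank = 0;
    for (std::size_t k = 0; k < nCols; ++k)
    {
        // Column pivoting: bring the remaining column with the
        // largest norm to position k.
        std::size_t nPivot = k;
        double fBest = -1.0;
        for (std::size_t j = k; j < nCols; ++j)
        {
            double fNorm2 = 0.0;
            for (std::size_t i = 0; i < nRows; ++i)
                fNorm2 += aCols[j][i] * aCols[j][i];
            if (fNorm2 > fBest) { fBest = fNorm2; nPivot = j; }
        }
        std::swap( aCols[k], aCols[nPivot]);
        double fDiag = std::sqrt( fBest);   // this is R[k][k]
        if (k == 0)
            fFirstDiag = fDiag;
        if (fDiag <= fEps * fFirstDiag)
            break;      // remaining columns are numerically dependent
        ++nRank;
        // Normalize column k, then orthogonalize the remaining
        // columns against it (modified Gram-Schmidt step).
        for (std::size_t i = 0; i < nRows; ++i)
            aCols[k][i] /= fDiag;
        for (std::size_t j = k + 1; j < nCols; ++j)
        {
            double fDot = 0.0;
            for (std::size_t i = 0; i < nRows; ++i)
                fDot += aCols[k][i] * aCols[j][i];
            for (std::size_t i = 0; i < nRows; ++i)
                aCols[j][i] -= fDot * aCols[k][i];
        }
    }
    return nRank;
}
```

Because the pivoting reorders the columns, a full-rank result can only be mapped back to regression coefficients by tracking the permutation, which is exactly the complication mentioned above.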

On the other hand, the increase in accuracy is so large that I will implement it, if you do not have better ideas.

If the problem can be solved, it would be worthwhile to factor the
code of MINVERSE out into a general matrix inversion routine that can
be used in LINEST and maybe others.

That would be a problem for a term paper. There exist other methods (I think of singular value decomposition or iterative methods), but I currently do not know them. Have a look at the SVD algorithm in Numerical Recipes, end of chapter 2.6, and you might understand my problem.

Kind regards
