[ http://issues.apache.org/jira/browse/MATH-157?page=comments#action_12431127 ]
Tyler Ward commented on MATH-157:
---------------------------------


Forgive me if this seems like condescension; you can find this in any decent 
math book, and it is one of the most well understood algorithms out there. 
However, since you're so concerned with clean-room implementations, I'll 
describe one from first principles here, so you can be confident that it's 
clean if someone implements it.

Here's the idea. Let's assume the matrix A is n by m, with A` being 
A-transpose. We take n >= m, so A`A (m by m) is smaller than AA` (n by n).

Any matrix A can be broken down as A = USV`, where U and V are orthogonal and 
S is diagonal.

Let's look at eigenvectors for a moment. An eigenvector is a vector b 
satisfying Ab = ub for some scalar (eigenvalue) u. Given an eigenvector matrix 
D (which is orthogonal when A is symmetric), where the eigenvectors make up 
the columns of D, this eigenvector equation can be written as AD = DZ, where Z 
is the diagonal matrix of the eigenvalues. Rearranging, D`AD = Z: the 
eigenvector matrix diagonalizes A. Much discussion of the number of 
eigenvectors and their orthogonality can be found almost anywhere.
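For concreteness, here's a tiny self-contained Java check of that on a 2 by 2 
symmetric matrix (illustration only, nothing from any existing library):

    // Check Ab = ub on A = [[2,1],[1,2]], whose eigenvalues are 3 and 1
    // with eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2).
    public class EigenCheck {
        public static void main(String[] args) {
            double a11 = 2, a12 = 1, a22 = 2;
            double mean = (a11 + a22) / 2;
            double disc = Math.sqrt((a11 - a22) * (a11 - a22) / 4 + a12 * a12);
            double u1 = mean + disc, u2 = mean - disc;  // eigenvalues 3 and 1
            double b = Math.sqrt(0.5);                  // eigenvector (b, b) for u1
            System.out.println("u1=" + u1 + " u2=" + u2);
            // A times (b, b) should equal u1 times (b, b):
            System.out.println((a11 * b + a12 * b) + " vs " + (u1 * b));
        }
    }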

Now consider A`A (A-transpose multiplied by A, a symmetric matrix). Given that 
A = USV`, we have A`A = VSU`USV` = VSSV`. SS is really just a diagonal matrix 
of values; let's call it (suggestively) Z.

A`A = VZV`, so if we can find the eigenvectors of A`A (which is symmetric) and 
the eigenvalues Z, then we can compute the elements of S by taking the square 
root of each element of Z.
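As a rough Java sketch (eigenvaluesOf is a placeholder for the symmetric 
eigensolver that steps 1 and 2 below will provide, not anything that exists 
in commons-math today):

    // Build the symmetric m x m matrix A`A, then turn its eigenvalues
    // into singular values.
    static double[] singularValues(double[][] a) {
        int n = a.length, m = a[0].length;        // A is n x m, n >= m
        double[][] ata = new double[m][m];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++)
                for (int k = 0; k < n; k++)
                    ata[i][j] += a[k][i] * a[k][j];
        double[] z = eigenvaluesOf(ata);          // placeholder: diagonal of Z
        double[] s = new double[m];
        for (int i = 0; i < m; i++)
            s[i] = Math.sqrt(Math.max(z[i], 0));  // clamp rounding-induced negatives
        return s;
    }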

Once we have V and S, we can compute AV(S-inverse) = U and get the other part 
of the decomposition. Inverting a diagonal matrix is easy (just invert each 
element). SVD has one special case: if a value in S is very small, set its 
inverse to zero (rather than letting it blow up toward infinity). This means 
that the corresponding columns of U are undefined. Simply fill them with 
values that maintain the orthogonality condition. There are many ways to do 
this: zero all elements and then place a one in any position, provided the 
column does not become an exact copy of a previous column. It won't matter 
numerically, as the column will be killed off by the zero in S, but many users 
of SVD rely on both matrices being orthogonal, even if there are missing 
singular values. A similar process may be needed on V as well.
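A rough sketch of that step; the pseudo-inversion of S is the whole trick, 
and filling the undefined columns is the orthogonalization pass just 
described:

    // U = A V S^-1, column by column, with the small-value guard.
    // A is n x m, V is m x m with eigenvectors of A`A as columns, s holds
    // the m singular values, eps is the "very small" cutoff.
    static double[][] computeU(double[][] a, double[][] v, double[] s, double eps) {
        int n = a.length, m = s.length;
        double[][] u = new double[n][m];
        for (int j = 0; j < m; j++) {
            double inv = s[j] > eps ? 1.0 / s[j] : 0.0;  // inverse of ~zero is zero
            for (int i = 0; i < n; i++) {
                double av = 0.0;
                for (int k = 0; k < m; k++)
                    av += a[i][k] * v[k][j];             // (A V) element i,j
                u[i][j] = av * inv;
            }
        }
        // Columns with s[j] ~ 0 are now all zero and still need to be filled
        // with unit vectors orthogonal to the defined columns, as noted above.
        return u;
    }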

So, how do we find the eigenvectors? Easily enough.

1) Start by Householder-transforming the symmetric matrix into tridiagonal 
form, multiplying by Householder matrices H like so: A2 = H1A1H1`. This 
reduces to tridiagonal form after N-2 such multiplications, producing Q1` = 
H1`H2`H3`... (a sketch appears after this list).

2) Reduce to diagonal form using Jacobi reductions with Givens plane 
rotations. Because of the simple form, you can literally multiply it out by 
hand, and you'll see that only a few elements of A change with each reduction; 
the formulas for the new values are quite easy. Use this simplified code to 
reduce A. Each step here produces a Givens plane rotation matrix G, such that 
A2 = G1A1G1`. Again, Q2` = G1`G2`G3`..., and these multiplications are easy, 
as only two rows and two columns of Q2 change with each new plane rotation. 
This means that Z = Q2Q1(A`A)Q1`Q2`, or that V` = Q2Q1 (note the transpose: 
the eigenvectors end up as the rows of Q2Q1, so V = Q1`Q2`). The matrix has 
been diagonalized when this process is finished. (A rotation sketch appears 
after the Givens discussion below.)

3) Now that we have Z and V, the rest is just two matrix multiplications away.

4) Implicit shifts and matrix sorting (moving larger values towards element 
(0,0)) can be added to step two, but don't worry about that for the initial 
implementation.
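Here's a rough Java sketch of step 1, using the textbook form H = I - 2ww` 
(this shows the shape of the computation, not final library code):

    // Householder tridiagonalization: overwrite the symmetric matrix a with
    // Q1 a Q1` (tridiagonal) and return Q1. Uses the identity
    // H a H = a - 2(w g` + g w`), where p = a w and g = p - (w`p) w.
    static double[][] tridiagonalize(double[][] a) {
        int n = a.length;
        double[][] q1 = new double[n][n];
        for (int i = 0; i < n; i++) q1[i][i] = 1.0;        // Q1 starts as identity
        for (int k = 0; k < n - 2; k++) {
            double norm = 0.0;
            for (int i = k + 1; i < n; i++) norm += a[i][k] * a[i][k];
            norm = Math.sqrt(norm);
            if (norm == 0.0) continue;                     // column already reduced
            double[] w = new double[n];                    // Householder vector
            double alpha = a[k + 1][k] > 0 ? -norm : norm; // sign avoids cancellation
            w[k + 1] = a[k + 1][k] - alpha;
            for (int i = k + 2; i < n; i++) w[i] = a[i][k];
            double wn = 0.0;
            for (int i = k + 1; i < n; i++) wn += w[i] * w[i];
            wn = Math.sqrt(wn);
            for (int i = k + 1; i < n; i++) w[i] /= wn;    // normalize: H = I - 2ww`
            double[] p = new double[n];
            double wp = 0.0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) p[i] += a[i][j] * w[j];
            for (int i = 0; i < n; i++) wp += w[i] * p[i];
            for (int i = 0; i < n; i++) p[i] -= wp * w[i]; // p now holds g
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    a[i][j] -= 2.0 * (w[i] * p[j] + p[i] * w[j]);
            for (int j = 0; j < n; j++) {                  // Q1 <- H Q1
                double wq = 0.0;
                for (int i = 0; i < n; i++) wq += w[i] * q1[i][j];
                for (int i = 0; i < n; i++) q1[i][j] -= 2.0 * w[i] * wq;
            }
        }
        return q1;
    }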

Once again, the Givens reductions are really trivial. Remember, a Givens 
matrix looks like this:

1  0  0  0
0  r  s  0
0 -s  r  0
0  0  0  1

with r^2 + s^2 = 1.0
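For reference, multiplying G A G` out by hand for the 2 by 2 block the 
rotation touches (rows and columns p and q, with app, apq, aqq the symmetric 
entries there) gives updates like these; choosing r and s so that newApq is 
zero is the whole game (variable names are illustrative only):

    double newApp = r * r * app + 2.0 * r * s * apq + s * s * aqq;
    double newAqq = s * s * app - 2.0 * r * s * apq + r * r * aqq;
    double newApq = r * s * (aqq - app) + (r * r - s * s) * apq; // force to zero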

Make sure that the box of values ends up at the correct position to wipe out 
one of the off-diagonal elements (the one in the position of -s, here element 
(3,2)). It's easy to compute which r and s are needed to eliminate the 
element: do it by hand and put the equation into the code. Do the 
multiplication by hand as well, and put those equations into the code too (the 
snippet above shows the result). G1A1G1` can be multiplied out by hand, and 
only half a dozen elements of A change; A is symmetric, so you only have to 
consider half of those, something like 4 elements. Really easy. The 
off-diagonal elements will come back each time; compute a new rotation and 
wipe them out again. They will be smaller each time, and soon you'll reduce 
them down to the level where you can just throw them away, perhaps when the 
off-diagonal element divided by the diagonal is less than 10^-14 or so. Then 
make a new matrix and eliminate the next element. The next matrix after the 
one above would be this one:

1  0  0  0
0  1  0  0
0  0  r  s
0  0 -s  r

And so on; eliminate them one at a time. Remember to recompute a new r and s 
after each iteration (the forward-plus-inverse multiplication leaves the 
matrix symmetric).
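Putting the rotation and the sweep together, a rough sketch of step 2 (plain 
cyclic Jacobi with Givens rotations, accumulating Q2, which starts as the 
identity; again a sketch, not final code):

    // One Givens/Jacobi step: choose r and s so that G a G` wipes out a[p][q]
    // (and, by symmetry, a[q][p]), apply it, and accumulate G into q2.
    static void rotate(double[][] a, double[][] q2, int p, int q) {
        if (a[p][q] == 0.0) return;
        double theta = (a[q][q] - a[p][p]) / (2.0 * a[p][q]);
        double t = -Math.signum(theta)
                 / (Math.abs(theta) + Math.sqrt(theta * theta + 1.0));
        if (theta == 0.0) t = 1.0;                 // equal diagonal: 45-degree turn
        double r = 1.0 / Math.sqrt(t * t + 1.0);
        double s = t * r;                          // r^2 + s^2 = 1
        int n = a.length;
        for (int k = 0; k < n; k++) {              // a <- a G` (columns p, q change)
            double kp = a[k][p], kq = a[k][q];
            a[k][p] = r * kp + s * kq;
            a[k][q] = -s * kp + r * kq;
        }
        for (int k = 0; k < n; k++) {              // a <- G a (rows p, q change)
            double pk = a[p][k], qk = a[q][k];
            a[p][k] = r * pk + s * qk;
            a[q][k] = -s * pk + r * qk;
        }
        for (int k = 0; k < n; k++) {              // q2 <- G q2 (two rows change)
            double pk = q2[p][k], qk = q2[q][k];
            q2[p][k] = r * pk + s * qk;
            q2[q][k] = -s * pk + r * qk;
        }
    }

    // Sweep until every off-diagonal element is tiny relative to its
    // diagonal neighbours (the 10^-14 style test suggested above).
    static void diagonalize(double[][] a, double[][] q2) {
        int n = a.length;
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int p = 0; p < n - 1; p++)
                for (int q = p + 1; q < n; q++)
                    if (Math.abs(a[p][q])
                            > 1e-14 * (Math.abs(a[p][p]) + Math.abs(a[q][q]))) {
                        rotate(a, q2, p, q);
                        changed = true;
                    }
        }
    }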

OK, there you go. The complete algorithm. Make the Householder reductions 
efficient, and this algorithm will be efficient.

Recap...

1) A`A is symmetric. 
2) Produce a matrix Q1 that reduces A`A to tridiagonal form (use Householder 
reductions, just like QR, but multiply from the right by H` as well).
3) Use Givens reductions to reduce to diagonal form with Q2.
4) Make V = Q1`Q2` (so that V` = Q2Q1).
5) Multiply out V`(A`A)V to produce Z, and take the square root of each 
diagonal element to produce S.
6) Compute U = AV(S^-1) (zero the inverse of any near-zero elements of S for 
this).
7) Add some ones to the zero columns of U to make U orthogonal. 

That's it.
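Gluing the recap together as one hedged outline (reusing the sketch methods 
above; multiply, transpose, and identity are obvious hypothetical helpers, 
and none of this is meant as the final commons-math API):

    // Outline of the whole recap. a is n x m with n >= m.
    static void svdSketch(double[][] a) {
        int m = a[0].length;
        double[][] ata = multiply(transpose(a), a);  // step 1: A`A, symmetric
        double[][] q1 = tridiagonalize(ata);         // step 2: Householder
        double[][] q2 = identity(m);
        diagonalize(ata, q2);                        // step 3: Givens; ata ~ Z
        double[][] v = transpose(multiply(q2, q1));  // step 4: V = Q1`Q2`
        double[] s = new double[m];                  // step 5: S = sqrt(diag(Z))
        for (int i = 0; i < m; i++) s[i] = Math.sqrt(Math.max(ata[i][i], 0.0));
        double[][] u = computeU(a, v, s, 1e-14);     // step 6: U = A V S^-1
        // step 7: fill the zero columns of u (where s[i] ~ 0) with unit
        // vectors orthogonal to the rest, so u stays orthogonal.
    }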




> Add support for SVD.
> --------------------
>
>                 Key: MATH-157
>                 URL: http://issues.apache.org/jira/browse/MATH-157
>             Project: Commons Math
>          Issue Type: New Feature
>            Reporter: Tyler Ward
>
> SVD is probably the most important feature in any linear algebra package, 
> though also one of the more difficult. 
> In general, SVD is needed because very often real systems end up being 
> singular (which can be handled by QR), or nearly singular (which can't). A 
> good example is a nonlinear root finder. Often the Jacobian will be nearly 
> singular, but it is VERY rare for it to be exactly singular. Consequently, LU 
> or QR produces really bad results, because they are dominated by rounding 
> error. What is needed is a way to throw out the insignificant parts of the 
> solution, and take what improvements we can get. That is what SVD provides. 
> The Colt SVD algorithm has a serious infinite loop bug, caused primarily by 
> Double.NaN in the inputs, but also by underflow and overflow, which really 
> can't be prevented. 
> If worried about patents and such, SVD can be derived from first principles 
> very easily with the acceptance of two postulates.
> 1) That an SVD always exists.
> 2) That Jacobi reduction works. 
> Both are very basic results from linear algebra, available in nearly any text 
> book. Once that's accepted, the rest of the algorithm falls into place 
> in a very simple manner. 
