[ https://issues.apache.org/jira/browse/MAHOUT-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638662#action_12638662 ]
Ted Dunning commented on MAHOUT-24:
-----------------------------------
Alexander,
When you say invert, I hope that the inversion is not done explicitly. That is
almost never a good idea.
If you need to compute \theta = inv(A) * b, then you normally use a
decomposition such as A = Q R where the components of the decomposition have
special properties. In the case of LU decomposition, L is lower triangular, U
is upper triangular. For QR, Q is orthonormal and R is right triangular (aka
upper triangular). For QR, \theta = inv(R) * (Q' * b); because Q is orthonormal,
inv(Q) is just Q', and applying inv(R) is a back-substitution, so nothing is ever
inverted explicitly. This turns out to be more efficient and vastly more accurate
than simply inverting A, especially if you have repeated b's.
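As a concrete sketch of what that looks like in code (not part of the patch;
it borrows Apache Commons Math purely to keep the example short, where Mahout
would of course use its own matrix classes):
{code}
// Minimal sketch, not Mahout code: assumes Apache Commons Math for brevity.
// Solve theta = inv(A) * b via A = Q R without ever forming inv(A).
import org.apache.commons.math3.linear.*;

public class QrSolveSketch {
  public static void main(String[] args) {
    RealMatrix a = new Array2DRowRealMatrix(new double[][] {
        {4.0, 1.0},
        {1.0, 3.0}
    });
    RealVector b = new ArrayRealVector(new double[] {1.0, 2.0});

    // Factor A = Q R once.  The solver applies Q' (the inverse of an
    // orthonormal matrix) and then back-substitutes through R.
    DecompositionSolver qr = new QRDecomposition(a).getSolver();
    RealVector theta = qr.solve(b);

    // Repeated right-hand sides reuse the same factorization.
    RealVector theta2 = qr.solve(new ArrayRealVector(new double[] {0.0, 1.0}));

    System.out.println(theta);
    System.out.println(theta2);
  }
}
{code}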
For cases where A is constructed by special means, you may want to do
decompositions on the components of A. In particular, forming and decomposing
X' X is often a very inaccurate step. If instead we decompose using the SVD,
X = U s V' (for example), then X' X = V s^2 V' and so \theta = V s^-2 V' b.
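A hedged sketch of that second pattern, again leaning on Apache Commons Math
only for illustration; the input matrix and right-hand side are made up:
{code}
// Minimal sketch: given X = U s V', apply inv(X' X) = V s^-2 V' to b
// without ever forming X' X explicitly.
import org.apache.commons.math3.linear.*;

public class SvdNormalEquationSketch {
  public static void main(String[] args) {
    RealMatrix x = new Array2DRowRealMatrix(new double[][] {
        {1.0, 2.0},
        {1.0, 3.0},
        {1.0, 5.0}
    });
    // b stands in for whatever right-hand side the problem supplies
    // (e.g. X' y); its length matches the number of columns of X.
    RealVector b = new ArrayRealVector(new double[] {2.0, 3.0});

    SingularValueDecomposition svd = new SingularValueDecomposition(x);
    RealMatrix v = svd.getV();
    double[] s = svd.getSingularValues();

    // theta = V s^-2 V' b: rotate into the V basis, scale by 1/s^2, rotate back.
    RealVector tmp = v.transpose().operate(b);
    for (int i = 0; i < s.length; i++) {
      tmp.setEntry(i, tmp.getEntry(i) / (s[i] * s[i]));
    }
    RealVector theta = v.operate(tmp);
    System.out.println(theta);
  }
}
{code}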
For each problem, though, you have to analyze what kind of solution algorithm
is a good one.
> Skeletal LWLR implementation
> ----------------------------
>
> Key: MAHOUT-24
> URL: https://issues.apache.org/jira/browse/MAHOUT-24
> Project: Mahout
> Issue Type: New Feature
> Environment: n/a
> Reporter: Samee Zahur
> Attachments: LWLR.patch.tar.bz2
>
>
> This is a very skeletal but functional implementation for LWLR. It outputs n
> lines where n is the number of dimensions. The ith line = sum(x[i]*x[ind]) where
> ind is the index of the independent variable. So the actual gradient = 2nd
> line/1st line for the classical 2D case.
> Contains a single small test case for demonstration.