[ https://issues.apache.org/jira/browse/MAHOUT-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638587#action_12638587 ]
Alexander Hans commented on MAHOUT-24:
--------------------------------------
In the paper the x[i] are vectors. The matrix that needs to be inverted is A =
X^T * X. X contains the input values, one row per input pattern, resulting in an
M x N matrix, M being the number of input patterns and N the number of input
dimensions. Matrix A is an N x N matrix, which should usually be easily
invertible. The paper proposes to parallelize the computation of A and b, which
are then used to determine the coefficients \theta = inv(A) * b. In addition,
it would make sense not just to return \theta but to use it to make a prediction
for y, since that is what one is actually looking for. Reusing \theta for another
query would not be useful, because the weights w[i] depend on the point x in
input space for which the prediction is to be made.
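To make the computation concrete, here is a rough serial sketch in plain Java (not
using any Mahout classes; the Gaussian kernel for the weights w[i], the bandwidth
tau, and all names are just assumptions on my part, and I folded the weights into
the sums for A and b as LWLR does). The loop over the M input patterns that
accumulates A and b is the part the paper proposes to parallelize; solving for
\theta and making the prediction happen afterwards on the combined results.

public class LwlrSketch {

    // Predict y for a single query point xq from training data X (M x N) and y (length M).
    static double predict(double[][] X, double[] y, double[] xq, double tau) {
        int m = X.length, n = X[0].length;
        double[][] A = new double[n][n];   // A = sum_i w_i * x_i * x_i^T
        double[] b = new double[n];        // b = sum_i w_i * x_i * y_i

        for (int i = 0; i < m; i++) {
            // Weight of pattern i depends on its distance to the query point xq
            // (Gaussian kernel assumed here).
            double d2 = 0.0;
            for (int j = 0; j < n; j++) {
                double diff = X[i][j] - xq[j];
                d2 += diff * diff;
            }
            double w = Math.exp(-d2 / (2.0 * tau * tau));

            // Accumulate the weighted sums; this loop over the M patterns is
            // what would be split across workers.
            for (int j = 0; j < n; j++) {
                b[j] += w * X[i][j] * y[i];
                for (int k = 0; k < n; k++) {
                    A[j][k] += w * X[i][j] * X[i][k];
                }
            }
        }

        double[] theta = solve(A, b);      // theta = inv(A) * b

        double yHat = 0.0;                 // prediction for the query point
        for (int j = 0; j < n; j++) {
            yHat += theta[j] * xq[j];
        }
        return yHat;
    }

    // Naive Gaussian elimination with partial pivoting; fine for small N.
    static double[] solve(double[][] A, double[] b) {
        int n = b.length;
        double[][] aug = new double[n][n + 1];
        for (int i = 0; i < n; i++) {
            System.arraycopy(A[i], 0, aug[i], 0, n);
            aug[i][n] = b[i];
        }
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int row = col + 1; row < n; row++) {
                if (Math.abs(aug[row][col]) > Math.abs(aug[pivot][col])) pivot = row;
            }
            double[] tmp = aug[col]; aug[col] = aug[pivot]; aug[pivot] = tmp;
            for (int row = col + 1; row < n; row++) {
                double f = aug[row][col] / aug[col][col];
                for (int k = col; k <= n; k++) aug[row][k] -= f * aug[col][k];
            }
        }
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = aug[i][n];
            for (int k = i + 1; k < n; k++) s -= aug[i][k] * x[k];
            x[i] = s / aug[i][i];
        }
        return x;
    }
}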
Maybe I got something completely wrong, but so far it does make sense to
me. You can find some information on LWLR in `The Elements of Statistical
Learning' by Hastie, Tibshirani, and Friedman.
If no one else has already looked into this, I will take a look at Samee's code
and see what needs to be done to make it work.
> Skeletal LWLR implementation
> ----------------------------
>
> Key: MAHOUT-24
> URL: https://issues.apache.org/jira/browse/MAHOUT-24
> Project: Mahout
> Issue Type: New Feature
> Environment: n/a
> Reporter: Samee Zahur
> Attachments: LWLR.patch.tar.bz2
>
>
> This is a very skeletal but functional implementation for LWLR. It outputs n
> lines where n is the number of dimensions. ith line = sum(x[i]*x[ind]) where
> ind is the index of the independent variable. So the actual gradient = 2nd
> line/1st line for the classical 2D case.
> Contains a single small test case for demonstration.