[
https://issues.apache.org/jira/browse/MAHOUT-817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13158210#comment-13158210
]
Dmitriy Lyubimov edited comment on MAHOUT-817 at 11/28/11 6:32 AM:
-------------------------------------------------------------------
Yes, the expectation is zero, but the variance is going to be big regardless of the
input size, I think, unfortunately. So the m Omega term is still a problem. For my
problems its brute-force computation will actually take longer than, e.g., squaring
my input. So that was my first thought, but I don't think it is valid enough, so I
withdraw it for now.
But we may not have a choice for big data though. And then again, there's a
connection with power iterations. The basis doesn't have to be perfect, and in
practice it never is, but power iterations improve it a lot. The power-iterations
flow is here:
https://github.com/dlyubimov/mahout-commits/blob/ssvd-docs/Power%20Iterations.pdf?raw=true.
Now the question is whether this assumption is going to render the power-iteration
flow useless.
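For context, the power-iteration refinement mentioned above can be sketched in a few lines; this is a minimal dense NumPy sketch of the standard randomized-range-finder idea, not the distributed Mahout SSVD implementation, and all matrix names and sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, q = 200, 50, 5, 2          # rows, cols, true rank, number of power iterations

# Exactly rank-k input so the quality of the recovered basis is easy to check.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
Omega = rng.standard_normal((n, k + 10))   # random projection with oversampling

Y = A @ Omega                        # initial range sketch Y = A * Omega
for _ in range(q):                   # power iterations sharpen the basis:
    Y = A @ (A.T @ Y)                #   Y <- (A A^T)^q A Omega
Q, _ = np.linalg.qr(Y)               # orthonormal basis for (approximately) range(A)

# B = Q^T A is the small matrix whose SVD yields the approximate SVD of A.
B = Q.T @ A
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

Since A is exactly rank k and the sketch is oversampled, the relative error `err` is at machine-precision level here; on full-rank data the power iterations are what drive it down toward the best rank-k error.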
> Add PCA options to SSVD code
> ----------------------------
>
> Key: MAHOUT-817
> URL: https://issues.apache.org/jira/browse/MAHOUT-817
> Project: Mahout
> Issue Type: New Feature
> Affects Versions: 0.6
> Reporter: Dmitriy Lyubimov
> Assignee: Dmitriy Lyubimov
> Fix For: Backlog
>
>
> It seems that a simple solution should exist to integrate PCA mean
> subtraction into the SSVD algorithm without making it a prerequisite step and
> also without densifying the big input.
> Several approaches were suggested:
> 1) subtract mean off B
> 2) propagate mean vector deeper into algorithm algebraically where the data
> is already collapsed to smaller matrices
> 3) --?
> It needs some math done first. I'll take a stab at 1) and 2), but thoughts
> and math are welcome.
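The algebra behind approach 2) rests on the identity (A - 1*mu^T) Omega = A*Omega - 1*(mu^T Omega): the mean correction can be applied after the sparse projection, to the small sketch, so the big input is never densified. A small NumPy sketch of that identity, with Omega as the random projection and mu the column-mean vector (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 100, 40, 12
A = rng.standard_normal((m, n))
A[A < 0.5] = 0.0                      # sparse-ish input we would rather not densify
Omega = rng.standard_normal((n, p))   # random projection

mu = A.mean(axis=0)                   # column-mean vector (length n)

# Densifying route: center A explicitly, then project.
Y_dense = (A - mu) @ Omega

# Algebraic route: (A - 1*mu^T) Omega = A @ Omega - 1 * (mu^T Omega),
# so only the small p-vector mu @ Omega is ever subtracted.
Y_sparse = A @ Omega - np.outer(np.ones(m), mu @ Omega)

assert np.allclose(Y_dense, Y_sparse)
```

The same trick would have to be carried through the rest of the pipeline (e.g. into B, per approach 1), which is the math the issue asks for.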
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira