Re: Lanczos, yes, it operates by finding V as you describe.  The user is
required to do more work to recapture U.  The practical reason is the
assumption that numCols(A) = numFeatures, which is much less than
numRows(A) = numTrainingSamples, so the small numFeatures x numFeatures
matrix AT*A is the cheap one to eigendecompose.
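
To make that concrete, the extra work looks roughly like this (a NumPy
sketch with illustrative sizes and names, not Mahout code): given V and
the eigenvalues of AT*A, you recover U as A*V*S^-1.

    import numpy as np

    # Illustrative only: A is tall and skinny (many samples, few features).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 20))   # numRows >> numCols

    # Lanczos-style route: eigendecompose the small 20 x 20 matrix AT*A.
    evals, V = np.linalg.eigh(A.T @ A)    # eigh returns ascending order
    order = np.argsort(evals)[::-1]       # sort descending instead
    evals, V = evals[order], V[:, order]
    S = np.sqrt(evals)                    # singular values of A

    # The extra work to recapture U: U = A * V * S^-1.
    U = (A @ V) / S

    # Sanity checks: U has orthonormal columns and A = U * S * VT holds.
    print(np.allclose(U.T @ U, np.eye(20)))
    print(np.allclose((U * S) @ V.T, A))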

On Nov 6, 2011 9:52 AM, "Sean Owen" <[email protected]> wrote:

Following up on this very old thread.

I understood all of this now, thanks; that greatly clarified things.

You multiply a new user vector by V to project it into the new
"pseudo-item", reduced-dimension space.
And to get back to real items, you multiply by V's inverse, which
(since V's columns are orthonormal) is its transpose.
And so you are really multiplying the user vector by V*VT, which is
not a no-op, since those are truncated matrices and aren't actually
exact inverses (?)
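
For concreteness, here is how I picture that round trip (a NumPy sketch
of my reading, with made-up sizes, not anyone's actual code):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 40))        # users x items, illustrative
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 10                                    # truncation rank
    Vk = Vt[:k].T                             # items x k

    u = rng.standard_normal(40)               # a new user vector over items
    u_reduced = u @ Vk                        # into "pseudo-item" space
    u_back = u_reduced @ Vk.T                 # back to items: u * Vk * VkT

    # VkT * Vk is the k x k identity, but Vk * VkT is only a rank-k
    # projector, so the round trip is not a no-op:
    print(np.allclose(Vk.T @ Vk, np.eye(k)))  # True
    print(np.allclose(u_back, u))             # False: truncation lost info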

The original paper talks about cosine similarities between users or
items in the reduced-dimension space, but can anyone shed light on
the point of that? The paper also seems to say that the predictions
are just computed as vector products, as above.
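
To make the question concrete, this is the comparison I have in mind
(again just a NumPy sketch of my reading of the paper):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((500, 40))        # users x items, illustrative
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:10].T                            # keep rank 10

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Compare two users in the reduced space rather than over raw items;
    # the item-item version would project columns of A instead of rows.
    u1 = A[0] @ Vk
    u2 = A[1] @ Vk
    print(cosine(u1, u2))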


Finally, separately, I'm trying to understand the Lanczos method as
part of computing an SVD. Lanczos operates on a real symmetric matrix,
right? And am I right that it comes into play when you are computing
an SVD...

A = U * S * VT

... because the columns of U are the eigenvectors of the (symmetric)
A*AT and the columns of V are the eigenvectors of AT*A? And so Lanczos
is used to answer those eigenproblems to complete the SVD?
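
Here is the small NumPy check of the relationship I mean (illustrative
only):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((8, 5))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Eigendecompose the two symmetric matrices.
    w_u, Eu = np.linalg.eigh(A @ A.T)         # eigenvectors ~ columns of U
    w_v, Ev = np.linalg.eigh(A.T @ A)         # eigenvectors ~ columns of V

    # Nonzero eigenvalues of both are the squared singular values.
    print(np.allclose(np.sort(w_v), np.sort(s**2)))
    print(np.allclose(np.sort(w_u)[-5:], np.sort(s**2)))  # rest are ~0

    # Eigenvectors match the singular vectors up to sign (abs handles it).
    print(np.allclose(np.abs(Ev[:, ::-1]), np.abs(Vt.T)))
    print(np.allclose(np.abs(Eu[:, ::-1][:, :5]), np.abs(U)))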

On Fri, Jun 4, 2010 at 6:48 AM, Ted Dunning <[email protected]> wrote:
> You are correct.  The...
