It's embedded in both, as far as I can tell, though I don't know enough about the implementation to say what the split is. Ted once remarked that it's common to split sqrt(S) across both.
Dotting two feature vectors to get one rating is really just matrix multiplication recovering a value from the approximate user-item matrix. A = U S V', and we have truncated versions of those three matrices. Uk Sk V'k gives Ak, an approximation of the original input A, but with many new values filled in, from which to make recommendations. Here Uk and V'k actually already have Sk embedded, so getting one value in Ak is just the dot of a row of Uk with a column of V'k, per usual matrix multiplication.

On Sun, Nov 6, 2011 at 11:53 PM, Lance Norskog <[email protected]> wrote:
> This thread is about the class SVDRecommender, which uses an externally
> created factorization to do recommendation.
>
> A: The Factorization classes do not extract the scaling diagonal matrix. Is
> this embedded in the left or right matrix? Or spread across both?
> B: Is there an explanation of why the dot product of two feature vectors
> creates the preference? A paper or blog post? Or a paragraph in a reply?
>
> --
> Lance Norskog
> [email protected]
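To make this concrete, here is a minimal NumPy sketch (not the actual SVDRecommender code, just an illustration of the math): it truncates an SVD to k features, folds sqrt(S) into both factor matrices as described above, and recovers one predicted rating as the dot of a user's feature row with an item's feature column. The toy matrix and the choice k = 2 are my own assumptions for the example.

```python
import numpy as np

# Toy user-item rating matrix (0 = unobserved), purely illustrative.
A = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])

k = 2  # number of latent features to keep

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Split sqrt(S) across both sides:
# user features = Uk * sqrt(Sk), item features = sqrt(Sk) * V'k.
user_features = Uk * np.sqrt(sk)         # shape (n_users, k)
item_features = (Vtk.T * np.sqrt(sk)).T  # shape (k, n_items)

# One predicted rating = dot of a user row with an item column.
estimate = user_features[0] @ item_features[:, 2]

# That entry equals the corresponding cell of the full rank-k
# reconstruction Ak = Uk Sk V'k.
Ak = user_features @ item_features
assert np.isclose(estimate, Ak[0, 2])
```

Note the original zero entries of A come back as nonzero values in Ak; those filled-in cells are exactly the estimated preferences the recommender ranks.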
