OK, thanks. Just wanted to confirm that it's good practice. Sounds
like it is.

On Tue, May 24, 2011 at 6:21 PM, Ted Dunning <[email protected]> wrote:
> I couldn't possibly comment.  If you were actually achieving the minimum,
> the results should have been the same, up to order and sign changes.
>
> On Tue, May 24, 2011 at 6:15 PM, Dmitriy Lyubimov <[email protected]> wrote:
>
>> The reason I am asking is that I actually tried that very scheme some
>> time ago, and for whatever reason I got very mixed results. I assumed
>> that was because I should have been doing it incrementally.
>>
>>
>> On Tue, May 24, 2011 at 6:13 PM, Dmitriy Lyubimov <[email protected]>
>> wrote:
>> > Or does incremental SVD provide orthogonality of the singular vectors
>> > while LLL does not? (That's my best guess as to why they do it
>> > incrementally.)
>> >
>> > On Tue, May 24, 2011 at 6:11 PM, Dmitriy Lyubimov <[email protected]> wrote:
>> >> Thanks, Ted.
>> >>
>> >> On Tue, May 24, 2011 at 5:44 PM, Ted Dunning <[email protected]> wrote:
>> >>> Heh?
>> >>>
>> >>> Are you referring to the log-linear latent factor code that I have in
>> >>> my mahout-525 GitHub repo?
>> >>>
>> >>
>> >> I am referring to the LatentLogLinear class in your repo, under the lll branch.
>> >>
>> >>>
>> >>>
>> >>>> However, I never understood why this factorization must come up with
>> >>>> the r best factors. I understand the incremental SVD approach
>> >>>> (essentially the same thing, except that learning factors one at a
>> >>>> time guarantees we capture the best ones), but if we learn them all
>> >>>> in parallel, does that work as well in your trials?
>> >>>>
>> >>>
>> >>> I don't understand the question.
>> >>>
>> >>> Are you asking whether the random projection code finds the best
>> >>> (largest) singular values and corresponding vectors?  If so, the
>> >>> answer is yes, it does with high probability of low error.
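>> >>>
>> >>> Schematically, the projection step looks like this (a rough sketch
>> >>> with assumed matrix helpers, not the actual code):
>> >>>
>> >>>   int p = 10;                               // small oversampling amount
>> >>>   Matrix omega = gaussianRandom(n, k + p);  // n x (k+p) test matrix
>> >>>   Matrix y = a.times(omega);                // sample the range of A
>> >>>   Matrix q = orthonormalBasis(y);           // e.g. Q from a thin QR of Y
>> >>>   Matrix b = q.transpose().times(a);        // small (k+p) x n matrix
>> >>>   // an exact SVD of B then recovers the top-k singular values and
>> >>>   // vectors of A with high probability of low error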
>> >>>
>> >>
>> >> Well, you have an alternating scheme there, right? You learn the left
>> >> singular vectors, then you switch and find the right singular vectors,
>> >> but as far as I can tell you are not doing it the same way that
>> >> incremental SVD does.
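>> >>
>> >> Schematically, I mean something like this (made-up names, just to
>> >> illustrate the alternation):
>> >>
>> >>   for (int iter = 0; iter < maxIters; iter++) {
>> >>     u = leastSquaresFit(a, v);              // hold V fixed, refit left factors
>> >>     v = leastSquaresFit(a.transpose(), u);  // hold U fixed, refit right factors
>> >>   }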
>> >>
>> >> Incremental SVD goes through the entire dataset the same way, but only
>> >> for one factor at first. Then it freezes that factor once the test RMSE
>> >> curve is flat, and starts doing the same for the second one. Intuitively
>> >> it's clear that the first pass this way finds the largest factor, the
>> >> next one finds the next largest, etc. Hence there's a 'step' curve on
>> >> the RMSE chart for this process as it switches from factor to factor.
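>> >>
>> >> Roughly, a sketch with made-up names (not any particular implementation):
>> >>
>> >>   // learn one factor at a time; move on when the test RMSE flattens
>> >>   for (int f = 0; f < numFactors; f++) {
>> >>     while (testRmseStillImproving()) {
>> >>       for (Rating r : train) {
>> >>         // prediction uses frozen factors 0..f-1 plus the current f
>> >>         double err = r.value - predict(r.user, r.item, f);
>> >>         double uf = u[r.user][f];
>> >>         u[r.user][f] += rate * (err * v[r.item][f] - lambda * uf);
>> >>         v[r.item][f] += rate * (err * uf - lambda * v[r.item][f]);
>> >>       }
>> >>     }
>> >>   }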
>> >>
>> >> But in your case, it looks like you are learning all the factors at
>> >> once. Is that going to produce the same result as the incremental SVD
>> >> algorithm? If yes, why did they even do it incrementally, since it's
>> >> clear the incremental approach requires more iterations?
>> >>
>> >> (There's a Mahout issue for an incremental SVD implementation, btw.)
>> >>
>> >>>
>> >>> Regarding the side information, I thought it was there but may have
>> >>> made a mistake.
>> >>>
>> >>
>> >> Which branch should I look at for the latest code? I looked at the LLL
>> >> branch. Thanks.
>> >>
>> >
>>
>
