Yes, I was asking about the LLL branch in your repo.

-d

On Tue, May 24, 2011 at 5:44 PM, Ted Dunning <[email protected]> wrote:
> Heh?
>
> On Tue, May 24, 2011 at 5:12 PM, Dmitriy Lyubimov <[email protected]> wrote:
>
>> So you do in-memory latent factor computation? I think this is the
>> same technique Koren implied for learning latent factors.
>>
>
> Are you referring to the log-linear latent factor (LLL) code that I have in
> my mahout-525 github repo?
>
> Or SVD (L-2 latent factor computations)?
>
> Or LDA (multinomial latent factor computation)?
>
> Or Dirichlet process clustering?
>
> Or ...?
>
>
>> However, I never understood why this factorization must come up with the r
>> best factors. I understand the incremental SVD approach (essentially the
>> same thing, except that learning the factors iteratively guarantees we
>> capture the best ones), but if we do it all in parallel, does it produce
>> good results in your trials?
>>
>
> I don't understand the question.
>
> Are you asking whether the random projection code finds the best (largest)
> singular
> values and corresponding vectors?  If so, the answer is yes, it does with
> high probability
> of low error.
>
> But this doesn't sound like LLL stuff.
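
For concreteness, the random-projection idea referred to above can be
sketched in a few lines of NumPy. This is only an illustration of the
technique, not the actual Mahout / mahout-525 code; the function name and
parameters are made up.

    # Rough sketch of SVD via random projection (illustrative only).
    # Dense NumPy matrices are used for clarity.
    import numpy as np

    def randomized_svd(A, r, oversample=10, power_iters=2, seed=0):
        """Approximate the top-r singular values/vectors of A."""
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        k = min(n, r + oversample)

        # Project A onto a random subspace; with high probability the
        # range of Y captures the dominant singular directions.
        Y = A @ rng.standard_normal((n, k))

        # A few power iterations sharpen the subspace when the spectrum
        # decays slowly.
        for _ in range(power_iters):
            Y = A @ (A.T @ Y)

        # Orthonormalize the sample, then take a small dense SVD.
        Q, _ = np.linalg.qr(Y)
        B = Q.T @ A                    # k x n, small
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :r], s[:r], Vt[:r, :]

The oversampling and optional power iterations are what give the "high
probability of low error" behavior mentioned above.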
>
>
>> Also, I thought the cold-start problem is helped by the fact that we learn
>> the weights first, so they always give the best independent approximation,
>> and then the user-item interactions reveal specifics about the user and
>> item. However, if we learn them all at the same time, it does not seem
>> obvious to me that we'd be learning the best approximation when the latent
>> factors are unknown (new users). Also, in that implementation I can't see
>> side-info training at all -- is it there?
>>
>
> This sounds like LLL again.
>
> In LLL, the optimization of side information coefficients and the
> optimization of the user-item interactions are separately convex, but not
> jointly convex.  This means that you pretty much have to proceed by
> optimizing one, then the other, then the first and so on.
>
> So I don't see how we are learning them all at the same time.
>
> You are correct, I think, that having lost joint convexity we don't have
> strong convergence guarantees, but practically speaking we do get
> convergence.
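
To make the alternating scheme concrete, here is a toy sketch of the
optimize-one-block-then-the-other loop, using a squared-error model with a
linear side-information term in place of the actual log-linear model. Every
name, parameter and update rule here is made up for illustration; this is
not the mahout-525 implementation.

    # Toy block-coordinate loop: fit the side-information weights with the
    # latent factors frozen (a convex subproblem), then the factors with
    # the weights frozen, and repeat.
    import numpy as np

    def alternating_fit(ratings, side, n_users, n_items, r=5,
                        n_outer=20, lr=0.01, reg=0.1, seed=0):
        # ratings: list of (user, item, value) triples
        # side: dict mapping (user, item) -> NumPy feature vector
        rng = np.random.default_rng(seed)
        d = len(next(iter(side.values())))
        U = 0.1 * rng.standard_normal((n_users, r))
        V = 0.1 * rng.standard_normal((n_items, r))
        w = np.zeros(d)

        def err(i, j, y):
            return U[i] @ V[j] + w @ side[(i, j)] - y

        for _ in range(n_outer):
            # Block 1: side-information weights, factors held fixed.
            for i, j, y in ratings:
                e = err(i, j, y)
                w -= lr * (e * side[(i, j)] + reg * w)
            # Block 2: user/item factors, side weights held fixed.
            for i, j, y in ratings:
                e = err(i, j, y)
                U[i], V[j] = (U[i] - lr * (e * V[j] + reg * U[i]),
                              V[j] - lr * (e * U[i] + reg * V[j]))
        return U, V, w

Each outer pass is one "optimize one, then the other" round; with a
log-linear loss the subproblems change, but the control flow would be the
same.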
>
> Regarding the side information, I thought it was there but may have made a
> mistake.
>
> Sorry for being dense about your message.
>
