On Tue, Jan 8, 2013 at 7:17 PM, Koobas <[email protected]> wrote:

>
>
> On Tue, Jan 8, 2013 at 6:41 PM, Sean Owen <[email protected]> wrote:
>
>> There's definitely a QR decomposition in there for me, since solving
>> A = X Y' for X is X = A Y (Y' Y)^-1, and you need some means to
>> compute the inverse of that (small) matrix.
>>
>>
> Sean,
> I think I got it.
> 1) A Y is a handful of sparse matrix-vector products,
> 2) Y' Y is a dense matrix-matrix product of a "flat" matrix and a "tall" matrix,
> producing a small square matrix,
> 3) inverting that matrix is not a big deal, since it is small.
> Great!
> Thanks!
> It just was not immediately obvious to me at first look.
>
> Now, about the transition from ratings to 1s and 0s:
> is this simply to handle implicit feedback,
> or is it for some other reason?
>
Okay, I got a little bit further in my understanding.
The matrix of ratings R is replaced with the binary preference matrix P.
Then R is used again, as a confidence weight on each squared-error term.
I get it.
This takes care of situations where you have user-item interactions
but no actual rating.
So it can handle explicit feedback, implicit feedback, and mixed (partial
/ missing) feedback.
If all I have is binary implicit feedback, I just drop R altogether, right?
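
To make that concrete, here is a little numpy sketch of the per-user
solve with that confidence weighting, following the Hu/Koren/Volinsky
formulation as I understand it. This is my own toy illustration with
made-up alpha and lambda, not Mahout's actual code:

import numpy as np

def update_user(Y, r_u, alpha=40.0, lmbda=0.1):
    # Y: item factors, n_items x k; r_u: one user's raw counts/ratings.
    # p is the binarized preference vector and c = 1 + alpha * r_u the
    # per-item confidence weights (the reuse of R described above).
    k = Y.shape[1]
    c = 1.0 + alpha * r_u
    p = (r_u > 0).astype(float)
    # Normal equations: (Y' C Y + lambda I) x_u = Y' C p_u
    lhs = np.dot((Y * c[:, None]).T, Y) + lmbda * np.eye(k)
    rhs = np.dot(Y.T, c * p)
    return np.linalg.solve(lhs, rhs)

And to answer my own question: if the feedback is purely binary, r_u
already is p, and dropping R entirely just amounts to setting alpha = 0
above, so the weighting disappears.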

Now the only remaining "trick" is Tikhonov regularization,
which leads to a couple of questions:
1) How much of a problem is overfitting?
2) How do I pick lambda?
3) How do I pick the rank of the approximation in the first place?
    And how does overfitting depend on the rank of the approximation?
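
Lacking a better idea, here is the brute-force approach I had in mind
for 2) and 3): hold out a slice of the known entries, train at each
(rank, lambda) pair on the rest, and keep whichever pair gives the
lowest held-out RMSE. A toy numpy sketch with my own miniature ALS,
nothing Mahout-specific:

import numpy as np

np.random.seed(0)

def als(R, W, k, lmbda, iters=15):
    # Tiny explicit-feedback ALS; W is a 0/1 mask of observed entries.
    m, n = R.shape
    X = 0.1 * np.random.randn(m, k)
    Y = 0.1 * np.random.randn(n, k)
    I = lmbda * np.eye(k)
    for _ in range(iters):
        for u in range(m):  # fix Y, solve the ridge problem per user
            Yw = Y * W[u][:, None]
            X[u] = np.linalg.solve(np.dot(Yw.T, Y) + I, np.dot(Yw.T, R[u]))
        for i in range(n):  # fix X, solve the ridge problem per item
            Xw = X * W[:, i][:, None]
            Y[i] = np.linalg.solve(np.dot(Xw.T, X) + I, np.dot(Xw.T, R[:, i]))
    return X, Y

# Toy data: rank-3 ground truth, roughly 30% of entries observed.
m, n = 40, 30
R = np.dot(np.random.randn(m, 3), np.random.randn(3, n))
observed = np.random.rand(m, n) < 0.3
hold = observed & (np.random.rand(m, n) < 0.2)   # held-out subset
train = (observed & ~hold).astype(float)

best = None
for k in (2, 3, 5, 10):
    for lmbda in (0.01, 0.1, 1.0, 10.0):
        X, Y = als(R * train, train, k, lmbda)
        err = (np.dot(X, Y.T) - R)[hold]
        rmse = np.sqrt(np.mean(err ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, k, lmbda)
print("best held-out RMSE %.3f at rank %d, lambda %g" % best)

My expectation is that with a generous rank and a tiny lambda the
training fit becomes nearly perfect while the held-out RMSE degrades,
which would answer 1) as well: overfitting is real, and it gets worse
as the rank grows relative to the amount of data.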


>
>> On Tue, Jan 8, 2013 at 5:27 PM, Ted Dunning <[email protected]>
>> wrote:
>> > This particular part of the algorithm can be seen as similar to a least
>> > squares problem that might normally be solved by QR.  I don't think that
>> > the updates are quite the same, however.
>> >
>> > On Tue, Jan 8, 2013 at 3:10 PM, Sebastian Schelter <[email protected]>
>> wrote:
>> >
>> >> This factorization is iteratively refined. In each iteration, ALS first
>> >> fixes the item-feature vectors and solves a least-squares problem for
>> >> each user and then fixes the user-feature vectors and solves a
>> >> least-squares problem for each item.
>> >>
>>
>
>
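
PS: To convince myself that the alternation Sebastian describes actually
works, I tried the most naive, fully observed version in numpy. Each
half-step is Sean's X = A Y (Y' Y)^-1 solve (plus the lambda I from the
regularization), done here through an augmented least-squares call
rather than an explicit inverse. Toy code of my own, not how Mahout
does it:

import numpy as np

np.random.seed(1)

# Factor a fully observed toy matrix A (m x n) as A ~ X Y'.
m, n, k, lmbda = 20, 15, 4, 0.1
A = np.dot(np.random.randn(m, 3), np.random.randn(3, n))

X = np.random.randn(m, k)
Y = np.random.randn(n, k)
ridge = np.sqrt(lmbda) * np.eye(k)  # augmentation rows for the Tikhonov term

for it in range(20):
    # Fix Y, solve min ||Y x_u - a_u||^2 + lambda ||x_u||^2 for all users.
    X = np.linalg.lstsq(np.vstack([Y, ridge]),
                        np.vstack([A.T, np.zeros((k, m))]), rcond=-1)[0].T
    # Fix X, solve the symmetric problem for all items.
    Y = np.linalg.lstsq(np.vstack([X, ridge]),
                        np.vstack([A, np.zeros((k, n))]), rcond=-1)[0].T
    print(it, np.linalg.norm(A - np.dot(X, Y.T)))

As far as I know, numpy's lstsq happens to use an SVD-based LAPACK
routine under the hood; a QR or Cholesky solve of the small k x k
normal-equations system, as Sean and Ted describe, amounts to the same
computation.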
