Hi Ted,
Could you explain what you mean by a "dithering step" and an
"anti-flood step"?
By dithering, I guess you mean adding some sort of noise so that the
same results are not shown every time.
But I have no clue about the anti-flood step.
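If dithering is just noise added to the ranking, I imagine something
like the rough sketch below (Python/numpy; the log-rank keys and the
sigma value are my own guesses, not something you said):

import numpy as np

def dither(ranked_item_ids, sigma=0.3, rng=None):
    # Re-rank an already sorted recommendation list by adding Gaussian
    # noise to the log of each item's rank. A small sigma keeps the head
    # of the list fairly stable while shuffling the tail.
    rng = np.random.default_rng() if rng is None else rng
    ranks = np.arange(1, len(ranked_item_ids) + 1)
    noisy_keys = np.log(ranks) + rng.normal(0.0, sigma, size=len(ranks))
    order = np.argsort(noisy_keys)
    return [ranked_item_ids[i] for i in order]

# Each call gives a slightly different ordering of the same candidates.
print(dither(["a", "b", "c", "d", "e"]))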

Tevfik

On Sat, Jan 25, 2014 at 11:05 PM, Koobas <koo...@gmail.com> wrote:
> On Sat, Jan 25, 2014 at 3:51 PM, Tevfik Aytekin
> <tevfik.ayte...@gmail.com> wrote:
>
>> Case 1 is fine. In case 2, I don't think that a dot product (without
>> normalization) will yield a meaningful distance measure. Cosine
>> distance or a Pearson correlation would be better. The situation is
>> similar to Latent Semantic Indexing, in which documents are represented
>> by their low-rank approximations and similarities between them (that
>> is, between the approximations) are computed using cosine similarity.
>> There is no need to do any normalization in case 1, since the values
>> in the feature vectors are formed to approximate the rating values.
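(For case 2, the kind of computation I mean is roughly the sketch
below, in numpy; item_factors stands for the num_items x rank item
factor matrix from the ALS/SVD step, and all names are just
illustrative.)

import numpy as np

def most_similar_items(item_factors, item_id, top_n=10):
    # Normalize every item vector to unit length; a dot product between
    # unit vectors is exactly the cosine similarity.
    norms = np.linalg.norm(item_factors, axis=1, keepdims=True)
    unit = item_factors / np.clip(norms, 1e-12, None)
    sims = unit @ unit[item_id]
    sims[item_id] = -np.inf  # never return the item as similar to itself
    return np.argsort(-sims)[:top_n]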
>>
> That's exactly what I was thinking.
> Thanks for your reply.
>
>
>> On Sat, Jan 25, 2014 at 5:08 AM, Koobas <koo...@gmail.com> wrote:
>> > A generic latent variable recommender question.
>> > I passed the user-item matrix through a low rank approximation,
>> > with either something like ALS or SVD, and now I have the feature
>> > vectors for all users and all items.
>> >
>> > Case 1:
>> > I want to recommend items to a user.
>> > I compute the dot product of the user’s feature vector with the feature
>> > vectors of all the items.
>> > I eliminate the ones that the user already has, and find the largest value
>> > among the others, right?
>> >
>> > Case 2:
>> > I want to find similar items for an item.
>> > Should I compute the dot product of the item’s feature vector against the
>> > feature vectors of all the other items?
>> >    OR
>> > Should I compute the ANGLE between each pair of feature vectors?
>> > I.e., compute the cosine similarity?
>> > I.e., normalize the vectors before computing the dot products?
>> >
>> > If “yes” for case 2, is that something I should also do for case 1?
>>
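For case 1, the plain dot products you describe should be enough,
roughly something like this (again only a numpy sketch with
illustrative names; seen_items holds the indices of items the user
already has):

import numpy as np

def recommend(user_factors, item_factors, user_id, seen_items, top_n=10):
    # Score every item by the dot product with the user's factor vector.
    scores = item_factors @ user_factors[user_id]
    # Eliminate items the user already has, then take the largest scores.
    scores[list(seen_items)] = -np.inf
    return np.argsort(-scores)[:top_n]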
