Yes, the model has no room for literally negative input. I think that
conceptually people do want negative input, and in this model, negative
numbers really are the natural way to express it.

You could give negative input a small positive weight, or extend the
definition of c so that it is merely small, not negative, when r is
negative. But this was generally unsatisfactory. There is a logic to it
-- that even negative input signals a slightly positive association in
the scheme of things -- but the results were viewed as unintuitive.
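
For illustration only, a minimal sketch of that rejected variant (the
function name and the alpha/epsilon constants are mine, not from any
real implementation):

    # Sketch: negative input still gets a small positive confidence.
    # alpha and epsilon are illustrative constants, not tuned values.
    def confidence(r, alpha=1.0, epsilon=0.01):
        if r >= 0:
            return 1.0 + alpha * r  # usual implicit-feedback weighting
        return epsilon              # negative input: small, still positive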

I ended up extending it to handle negative input more directly, such
that negative input is read as evidence that p=0, instead of evidence
that p=1. This works fine, and is tidier than an ensemble (although
that's a sound idea too). The change is quite small.
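
Roughly, as a sketch, assuming a Hu/Koren/Volinsky-style weighting
c = 1 + alpha*|r| (the function name and alpha are illustrative):

    # Sketch: negative r becomes evidence that p = 0, with confidence
    # growing in |r|, mirroring how positive r supports p = 1.
    def preference_and_confidence(r, alpha=1.0):
        if r > 0:
            return 1.0, 1.0 + alpha * r       # evidence that p = 1
        if r < 0:
            return 0.0, 1.0 + alpha * abs(r)  # evidence that p = 0
        return 0.0, 1.0                       # no observation: low confidence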

Agreed on the second point that learning weights is manual and
difficult; I think that's unavoidable once you want to start adding
different data types anyway.

I also don't use M/R for searching the parameter space, since you may
try a thousand combinations and each one is a model build from scratch.
I use a sample of the data and run in-core.
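
Something like the following rough sketch; build_model and evaluate are
hypothetical stand-ins for an in-core factorizer and a held-out metric,
not any real API:

    import itertools

    # Sketch: exhaustively try confidence-weight combinations on a small
    # in-memory sample. grid maps action type -> candidate weights.
    def search_weights(sample, grid, build_model, evaluate):
        best_params, best_score = None, float("-inf")
        for combo in itertools.product(*grid.values()):
            params = dict(zip(grid, combo))
            model = build_model(sample, params)  # seconds in-core
            score = evaluate(model, sample)      # e.g. held-out precision
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score

Each build takes seconds in-core, so even a thousand combinations are
tractable without a cluster.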

On Tue, Jun 18, 2013 at 2:30 AM, Dmitriy Lyubimov <dlie...@gmail.com> wrote:
> (Kinda doing something very close.)
>
> The Koren-Volinsky paper on implicit feedback can be generalized to decompose
> all input into a preference matrix (0 or 1) and a confidence matrix (which is
> essentially an observation-weight matrix).
>
> If you did not get any observations, you encode that as (p=0, c=1), but if
> you know that the user did not like the item, you can encode that observation
> with a much higher confidence weight, something like (p=0, c=30) -- actually
> as high a confidence as a conversion in your case, it seems.
>
> The problem with this is that you end up with quite a few additional
> parameters in your model to figure out, i.e. confidence weights for each type
> of action in the system. You can establish those through an extensive
> cross-validation search, which is initially quite expensive (even with
> distributed cluster technology), but which can bail out much sooner on
> incremental runs, once a previous good guess is already known.
>
> MR doesn't work well for this, though, since it requires A LOT of iterations.
>
>
>
> On Mon, Jun 17, 2013 at 5:51 PM, Pat Ferrel <pat.fer...@gmail.com> wrote:
>
>> In the case where you know a user did not like an item, how should the
>> information be treated in a recommender? Normally for retail
>> recommendations you have an implicit 1 for a purchase and no value
>> otherwise. But what if you knew the user did not like an item? Maybe you
>> have records of "I want my money back for this junk" reactions.
>>
>> You could make a scale of 0, 1, where 0 means a bad rating and 1 a good one,
>> and no value, as usual, means no preference? Some of the math won't work
>> here, though, since no value usually implicitly = 0, so maybe -1 = bad, 1 =
>> good, no preference implicitly = 0?
>>
>> Would it be better to treat a bad rating as a 1 and a good one as a 2? This
>> would be more like the old star-rating method, only we would know where the
>> cutoff between a good review and a bad one should be (1.5).
>>
>> I suppose this could also be treated as another recommender in an ensemble,
>> where r = r_p - r_h, and r_h = predictions from "I hate this product"
>> preferences?
>>
>> Has anyone found a good method?
