I'm suggesting representing thumbs-down ratings as values like -1, and
then using them as positive weight towards 0, just as positive values
are used as positive weight towards 1.
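A minimal sketch of that weighting in plain Python (hypothetical names,
not Mahout's actual API): a thumbs-down contributes its weight to the
denominator and pulls the estimate towards 0, instead of being discarded.

```python
def estimate_preference(neighbor_ratings):
    """Estimate a preference in [0, 1] from weighted thumbs votes.

    neighbor_ratings: list of (weight, rating) pairs, where rating is
    +1 (thumbs-up) or -1 (thumbs-down) and weight is e.g. a user
    similarity. A thumbs-up pulls the estimate towards 1, a
    thumbs-down towards 0 -- both carry positive weight.
    """
    total = sum(w for w, _ in neighbor_ratings)
    if total == 0:
        return None  # no evidence either way
    positive = sum(w for w, r in neighbor_ratings if r > 0)
    return positive / total

# Two similar users voted up, one less-similar user voted down:
estimate_preference([(0.9, 1), (0.8, 1), (0.5, -1)])  # -> ~0.77
```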

Most people don't make many negative ratings, so for them, how you
handle these doesn't make much difference. It might for the few
expert users, and they may be exactly the ones who care. That was my
experience: user acceptance testers pointed out that thumbs-down
ratings didn't seem to have the desired effect, because they saw the
result straight away.

Here's an alternative structure that doesn't involve thumbs-down:
choose 4 items, sampled to prefer items that are distant from each
other in feature space. Ask the user to pick the 1 that is most
interesting. Repeat a few times.
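That sampling step could be sketched as greedy farthest-point sampling
(plain Python; Euclidean distance over made-up item features, standing
in for whatever feature space you actually have):

```python
import math
import random

def sample_distant(items, features, k=4, seed=None):
    """Pick k items by greedy farthest-point sampling.

    features maps item -> coordinate tuple in some feature space.
    Starting from a random item, each step adds the item whose
    distance to its nearest already-chosen item is largest, so the
    set spreads out rather than clustering.
    """
    rng = random.Random(seed)
    chosen = [rng.choice(list(items))]
    while len(chosen) < k:
        best = max(
            (i for i in items if i not in chosen),
            key=lambda i: min(math.dist(features[i], features[c])
                              for c in chosen),
        )
        chosen.append(best)
    return chosen
```

The user picks the most interesting of the k items, that choice is
folded into their profile, and the process repeats.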

On Tue, Jun 18, 2013 at 3:55 PM, Pat Ferrel <p...@occamsmachete.com> wrote:
> To your point Ted, I was surprised to find that remove-from-cart actions 
> predicted sales almost as well as purchases did but it also meant filtering 
> from recs. We got the best scores treating them as purchases and not 
> recommending them again. No one pried enough to get bothered.
>
> In this particular case I'm ingesting movie reviews, thumbs up or down. I'm 
> trying to prime the pump for a cold start case of a media guide app with 
> expert reviews but no users yet. Expert reviewers review everything so I 
> don't think there will be much goodness in treating a thumbs down like a 
> thumbs up in this particular case. Sean, are you suggesting that negative 
> reviews might be modeled as a "0" rather than no value? Using the Mahout 
> recommender this will only show up in filtering the negatives out of recs as 
> Ted suggests, right? Since a "0" preference would mean, don't recommend, just 
> as a preference of "1" would. This seems like a good approach but I may have 
> missed something in your suggestion.
>
> In this case I'm not concerned with recommending to experts, I'm trying to 
> make good recs to new users with few thumbs up or down by comparing them to 
> experts with lots of thumbs up and down. The similarity metric will have new
> users with only a few preferences and will compare them to reviewers with 
> many many more. I wonder if this implies a similarity metric that uses only 
> common values (cooccurrence) rather than the usual log-likelihood? I guess 
> it's easy to try both.
>
> Some papers I've read on this subject; the first has an interesting discussion
> of using experts in CF:
> http://www.slideshare.net/xamat/the-science-and-the-magic-of-user-feedback-for-recommender-systems
> http://www.sis.pitt.edu/~hlee/paper/umap2009_LeeBrusilovsky.pdf
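(On the cooccurrence-vs-log-likelihood question in the quote: the
log-likelihood ratio is itself computed from the 2x2 cooccurrence
counts, so the two aren't that far apart. A plain-Python sketch of the
standard Dunning LLR that Mahout's LogLikelihoodSimilarity is based
on, not Mahout's actual code:)

```python
import math

def entropy(*counts):
    """Unnormalized Shannon entropy of a list of counts."""
    total = sum(counts)
    return sum(-c * math.log(c / total) for c in counts if c > 0)

def llr(k11, k12, k21, k22):
    """Dunning log-likelihood ratio for a 2x2 cooccurrence table.

    k11: both events occur, k12/k21: one without the other,
    k22: neither. Near 0 when the two events are independent,
    large when cooccurrence is more (or less) frequent than chance.
    """
    row = entropy(k11 + k12, k21 + k22)
    col = entropy(k11 + k21, k12 + k22)
    mat = entropy(k11, k12, k21, k22)
    return max(0.0, 2.0 * (row + col - mat))

llr(5, 5, 5, 5)  # ~0: the counts are exactly what independence predicts
```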
