Ted, could you please explain this point:

> If you only have 20 categories, I would recommend that you consider using
> different technologies than recommendations. Simply building 20
> classifiers is likely to be as effective or more so.
Suppose we want to build a classifier that predicts interest in a category N as the label, and we train it on the whole user data set or on a representative sample. The classifier then learns that every combination of features it has seen among users who are not interested in N implies "not interested in N". So if the classifier fits that data well, it will give no positive answers for users from the same data who are not interested in N yet. Isn't that so? Positive answers would only become possible after some time, once the data has changed significantly. Right?

WBR
Oleg
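To make the scenario concrete, here is a minimal sketch of the one-classifier-per-category setup, assuming scikit-learn; the feature matrix, labels, and the choice of LogisticRegression are synthetic stand-ins, not anything from the thread:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_features = 1000, 50
X = rng.normal(size=(n_users, n_features))   # hypothetical user feature vectors
y = (X[:, 0] > 0.5).astype(int)              # 1 = "interested in category N"

clf = LogisticRegression().fit(X, y)         # one binary classifier for N

# If the classifier fits its training data closely, its hard predictions
# on those same users largely reproduce the training labels, so there are
# almost no new positives among users currently labeled "not interested":
new_positives = ((clf.predict(X) == 1) & (y == 0)).sum()
print(new_positives)

With a model that fits the training data well, new_positives comes out near zero, which is exactly the effect described above for hard 0/1 predictions.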
