I wasn't part of the discussion that settled this, but I still think it's the right behavior. First, the point of a recommender is to maximize accuracy, so it makes sense to return the true rating when it's known. Second, actualPref can only be non-null when you're testing on your training data; in any valid test the data points are unseen, so actualPref would always be null. So I think that code exists for real-world cases where you have to display predicted ratings for any available item, not just items from the test block, in which case you want to display the user's actual rating for items the user has already rated.

On 10/21/10 10:56 AM, Lance Norskog wrote:
Since this is Recommender day, here is another kvetch:

The algorithm-backed Recommender implementations all do this in
estimatePreference():
  public float estimatePreference(long userID, long itemID) throws TasteException {
    DataModel model = getDataModel();
    Float actualPref = model.getPreferenceValue(userID, itemID);
    if (actualPref != null) {
      return actualPref;
    }
    return doEstimatePreference(userID, itemID);
  }

Meaning: "if I told you something, just parrot it back to me."
Otherwise, make a guess.

I am doing head-to-head comparisons of the DataModel preferences vs.
the Recommender's predictions. This code makes it impossible to
directly compare what the recommender thinks against the actual
preference. If I wanted to know what I told it, I already know that;
I want to know what the recommender thinks.
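To make the complaint concrete, here is a toy sketch of the pattern. This is not real Mahout code: ToyRecommender, its string-keyed preference map, and its constant-mean "algorithm" are all invented stand-ins, but the short-circuit in estimatePreference() mirrors the quoted method, and the only way to see what the model itself thinks about a rated item is to call the estimator directly.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a Taste Recommender implementation.
public class ToyRecommender {
  private final Map<String, Float> prefs = new HashMap<>();

  public void setPreference(long userID, long itemID, float value) {
    prefs.put(userID + ":" + itemID, value);
  }

  // Mirrors the quoted estimatePreference(): parrot back known ratings.
  public float estimatePreference(long userID, long itemID) {
    Float actual = prefs.get(userID + ":" + itemID);
    if (actual != null) {
      return actual; // known rating short-circuits the algorithm
    }
    return doEstimatePreference(userID, itemID);
  }

  // Stand-in "algorithm": here, just a fixed global mean.
  public float doEstimatePreference(long userID, long itemID) {
    return 3.0f;
  }

  public static void main(String[] args) {
    ToyRecommender rec = new ToyRecommender();
    rec.setPreference(1L, 10L, 5.0f);

    // The wrapper parrots the training rating back...
    System.out.println(rec.estimatePreference(1L, 10L));
    // ...so to compare model vs. data you must bypass it.
    System.out.println(rec.doEstimatePreference(1L, 10L));
  }
}
```

In the real classes doEstimatePreference() is not part of the public Recommender contract, which is exactly why the short-circuit is hard to work around from the outside.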

If this design decision is something y'all have argued about and
settled on, never mind. If it is just something that seemed like a
good idea at the time, can we change the recommenders, and the
Recommender "contract", to always use their own algorithm?
