On Thu, Aug 20, 2009 at 6:07 PM, Ted Dunning <[email protected]> wrote:
> ordered. The idea that you can subtract ratings and get a sensible number
> is not actually correct. In particular, when displaying recommended items,
> you typically only display items with high estimated values or values that
Completely agree. Initially I had no notion of an 'estimatePreference()' method that would actually try to estimate the preference value on the right scale. The algorithms would come up with some number indicating how good a recommendation was, but it was not guaranteed to be an estimate of the actual preference -- just some value that was higher when recommendations were better.

Later I felt it was useful for applications to have access to this functionality. All the current algorithms really do operate by trying to estimate the preference value, so it was actually pretty simple to redesign a few things to produce and use actual preference estimates. It is a restricting assumption, but it does let the framework provide an 'estimatePreference()' method, which is useful in many use cases and also enables an evaluation framework. It also lets the framework return some meaningful notion of how 'good' a recommendation is to the caller, instead of just an ordered list.

Nevertheless, I do agree there are drawbacks to making this assumption, and the estimate isn't necessarily great.
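To make the distinction concrete, here is a minimal sketch (not the actual Taste/Mahout API; the class name, method signature, and item-item similarities are all made up for illustration) of an item-based estimate where the recommendation score *is* an estimate of the user's preference on the original rating scale, so the same number can both rank candidates and be shown to the caller or fed to an evaluator:

```java
import java.util.HashMap;
import java.util.Map;

public class EstimatePreferenceSketch {

    // userRatings: itemID -> this user's rating on the original 1-5 scale.
    // similarityToItem: itemID -> similarity of that rated item to the candidate.
    // Returns a similarity-weighted average of the user's own ratings, so the
    // result lands on the same 1-5 scale the ratings came from, rather than
    // being an arbitrary "higher is better" score.
    static double estimatePreference(Map<String, Double> userRatings,
                                     Map<String, Double> similarityToItem) {
        double num = 0.0;
        double den = 0.0;
        for (Map.Entry<String, Double> e : userRatings.entrySet()) {
            Double sim = similarityToItem.get(e.getKey());
            if (sim != null) {
                num += sim * e.getValue();
                den += sim;
            }
        }
        return den == 0.0 ? Double.NaN : num / den;
    }

    public static void main(String[] args) {
        Map<String, Double> aliceRatings = new HashMap<>();
        aliceRatings.put("A", 5.0);
        aliceRatings.put("B", 3.0);

        // Assumed (made-up) similarities of rated items A and B to candidate C.
        Map<String, Double> simToC = new HashMap<>();
        simToC.put("A", 0.9);
        simToC.put("B", 0.1);

        // (0.9*5 + 0.1*3) / (0.9 + 0.1) = 4.8 -- interpretable on the 1-5 scale
        System.out.printf("%.1f%n", estimatePreference(aliceRatings, simToC));
    }
}
```

The restricting assumption discussed above is exactly that every recommender can be bent into this shape: a scorer whose output is meaningful on the preference scale, not just ordinally.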
