Yes, I think it's a good idea, for the reason Gabriel gave: it's the best
answer to give. I'm reluctant to change this behavior at this point, as this
part of the code is more mature and more widely used than other parts.

In the use case you reference -- evaluation -- there's already support for
doing this kind of testing automatically. The eval process will hold out
data for you, and that approach is more accurate. Yes, I could leave the
data in but not use that info in estimatePreference() directly -- but it
would still be used indirectly in other places, like similarity
computations. The test becomes compromised anyway, if in a smaller way.

On Thu, Oct 21, 2010 at 3:56 AM, Lance Norskog <[email protected]> wrote:

> Since this is Recommender day, here is another kvetch:
>
> The recommender implementations with algorithms all do this in
> Recommender.estimatePreference():
>  public float estimatePreference(long userID, long itemID) throws TasteException {
>    DataModel model = getDataModel();
>    Float actualPref = model.getPreferenceValue(userID, itemID);
>    if (actualPref != null) {
>      return actualPref;
>    }
>    return doEstimatePreference(userID, itemID);
>  }
>
> Meaning: "if I told you something, just parrot it back to me."
> Otherwise, make a guess.
>
> I am doing head-to-head comparisons of the DataModel preferences vs.
> the Recommender. This code makes it impossible to directly compare
> what the recommender thinks vs. the actual preference. If I wanted to
> know what I told it, I already know that. I want to know what the
> recommender thinks.
>
> If this design decision is something y'all have argued about and
> settled on, never mind. If it is just something that seemed like a
> good idea at the time, can we change the recommenders, and the
> Recommender "contract", to always use their own algorithm?
>
> --
> Lance Norskog
> [email protected]
>
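The two contracts under discussion can be shown side by side in a small, self-contained sketch (the class and method names here are illustrative stand-ins, not Mahout's):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of the two estimatePreference() contracts; not Mahout code. */
public class EstimateContractSketch {

  static final Map<String, Float> KNOWN = new HashMap<>();
  static {
    KNOWN.put("u1:i1", 4.0f);  // a preference the user actually stated
  }

  /** Hypothetical stand-in for doEstimatePreference(): always 3.0 here. */
  static float doEstimate(long userID, long itemID) {
    return 3.0f;
  }

  /** Current contract: parrot back a known preference, else estimate. */
  static float estimateCurrent(long userID, long itemID) {
    Float actual = KNOWN.get("u" + userID + ":i" + itemID);
    return actual != null ? actual : doEstimate(userID, itemID);
  }

  /** Proposed contract: always use the algorithm's own estimate. */
  static float estimateProposed(long userID, long itemID) {
    return doEstimate(userID, itemID);
  }

  public static void main(String[] args) {
    // (u1, i1) is a known preference, so the two contracts diverge there.
    System.out.println(estimateCurrent(1, 1));   // 4.0 (parroted back)
    System.out.println(estimateProposed(1, 1));  // 3.0 (algorithm's estimate)
  }
}
```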
