So when comparing within a single technique, AAD or RMSE is fine, but
when comparing across recommenders that use a variety of similarity
measures, it's best to stick to IR measures such as precision and recall.
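
For anyone following along, here is roughly what the precision/recall
evaluation looks like in Mahout. This is just a minimal sketch: the data
file name (prefs.csv), the neighborhood size of 10, and the "at 10"
cutoff are placeholder choices, not recommendations.

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericBooleanPrefUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.TanimotoCoefficientSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class TanimotoIREval {

  public static void main(String[] args) throws Exception {
    // prefs.csv is a hypothetical userID,itemID file of boolean prefs.
    DataModel model = new FileDataModel(new File("prefs.csv"));

    // Build the Tanimoto-based, boolean-preference recommender under test.
    RecommenderBuilder builder = new RecommenderBuilder() {
      @Override
      public Recommender buildRecommender(DataModel model) throws TasteException {
        UserSimilarity similarity = new TanimotoCoefficientSimilarity(model);
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(10, similarity, model);
        return new GenericBooleanPrefUserBasedRecommender(model, neighborhood, similarity);
      }
    };

    // Precision/recall "at 10"; Mahout picks a relevance threshold per user.
    RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
    IRStatistics stats = evaluator.evaluate(
        builder, null, model, null, 10,
        GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD, 1.0);

    System.out.println("Precision@10: " + stats.getPrecision());
    System.out.println("Recall@10:    " + stats.getRecall());
  }
}

The evaluator holds out each user's most-preferred items and checks how
many of them come back among the top-10 recommendations, which is
meaningful even when there are no rating values to predict.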



On 25 October 2011 18:52, Sean Owen <[email protected]> wrote:
> It's fairly meaningless, as there are no prefs in this case, so
> there's no such thing as estimated prefs to compare against real ones.
> The recommender does rank on a metric, but it's not an estimated pref
> in this case. I imagine it will spit out a number, but it's not going
> to be of much use.
>
> All you can really do here is use precision/recall tests.
>
> On Tue, Oct 25, 2011 at 6:50 PM, lee carroll
> <[email protected]> wrote:
>> What does the metric returned by
>> AverageAbsoluteDifferenceRecommenderEvaluator mean for
>> non-rating-based recommenders?
>>
>> The Mahout in Action book describes the metric as the amount by which
>> a prediction would differ from the actual rating (lower is better).
>> But what does that mean for a recommender that uses a similarity
>> measure which does not use rating data, such as Jaccard, or, for that
>> matter, for measures which use rank?
>>
>> Example:
>> Say we get an AAD of 1.2 for a recommender using Euclidean distance.
>> Ratings range from 1 to 10, so I'm thinking this is pretty good: we
>> are out by a little over 1. We would make the mistake of predicting
>> around 6 or 8 when the actual preference is a 7.
>>
>> But
>>
>> What does an AAD of 1.3 for a Tanimoto-based recommender mean? And
>> can I compare it with other recommenders' AADs? (I'm sure you can, as
>> the excellent Mahout book does :-)
>>
>> What am I missing? Do I have too simplistic a view of the AAD metric?
>>
>> Thanks in advance Lee C
>>
>
