On 08/05/13 14:00, Vikas Kapur wrote:
Hi,
I calculated the precision using the approach below, but I am getting strange
results.

I tried to evaluate two algorithms with the RMSE and precision@5 metrics, and
I found that Algo1 has both a lower RMSE and a lower precision value than Algo2.
Isn't that strange?
If Algo1 has a lower RMSE, shouldn't it have a higher precision?


Not at all, RMSE & precision should have a certain degree of correlation, yes (after all, they are all metrics of fitness). But they may differ substantially, and they will in many cases.

RMSE measures the error in preference prediction for unknown items ('unknown' meaning held out of the training set), so it measures the distance between the rating the engine thinks the user will give and the rating the user actually gave (depending on how you define 'rating' this makes more or less sense; for implicit datasets all you have is 0 and 1).
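Concretely, over the N held-out (user, item) pairs, with p_i the predicted preference and r_i the actual rating:

    RMSE = sqrt( (1/N) * sum_i (p_i - r_i)^2 )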

Precision@5, on the other hand, is concerned only with the relative order of the results. It doesn't matter whether the error in the preference estimate for the 'good' items (the items in the test set) is large or small, as long as those items reach the top 5 positions. If you like, it's a more 'global' measure, checking whether the final list delivered to the user is right or not, while RMSE measures algorithm quality item by item.
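To see how the two can disagree, here is a toy sketch in Python (not Mahout code; the ratings and predicted scores are made up purely for illustration). Algo1 predicts values close to the true ratings but gets some of the ordering wrong; Algo2 is numerically way off but preserves the order, so it wins on precision@5 while losing on RMSE:

    import math

    actual = [5, 5, 5, 4, 4, 2, 2, 1]                  # held-out ratings for items 0..7
    algo1  = [4.6, 4.5, 4.4, 3.0, 2.9, 3.2, 3.1, 1.2]  # small errors, some order mistakes
    algo2  = [2.0, 1.9, 1.8, 1.7, 1.6, 0.5, 0.4, 0.3]  # large errors, order preserved

    def rmse(pred, truth):
        return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

    def precision_at(k, pred, truth, threshold=4):
        # items rated >= threshold count as 'relevant'
        top_k = sorted(range(len(pred)), key=lambda i: pred[i], reverse=True)[:k]
        return sum(1 for i in top_k if truth[i] >= threshold) / k

    for name, pred in [("Algo1", algo1), ("Algo2", algo2)]:
        print(name, "RMSE=%.2f" % rmse(pred, actual),
              "P@5=%.2f" % precision_at(5, pred, actual))

    # Algo1 RMSE=0.84 P@5=0.60   <- lower RMSE *and* lower precision
    # Algo2 RMSE=2.38 P@5=1.00

That is exactly the pattern you observed: the algorithm with the lower RMSE can still deliver a worse top-5 list.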

Neither is better or worse in a universal sense; it depends on the context. For top-N problems I tend to think that precision@N works better as a figure of merit, but you can find situations in which it does not.



