[
https://issues.apache.org/jira/browse/SPARK-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200470#comment-14200470
]
Sean Owen commented on SPARK-4231:
----------------------------------
Yes, I'm mostly questioning whether to implement this in examples. The
definition in RankingMetrics looks like the usual one to me -- the average runs
from rank 1 to min(# recs, # relevant items). You could say the version you
found above is 'extended' to look into the long tail (# recs = # items),
although the long tail doesn't affect MAP much. Same definition, different
limit.
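To make "same definition, different limit" concrete, here's a small sketch;
it's illustrative Scala rather than the actual RankingMetrics code, and the
data is invented:
{code}
// A minimal sketch of "same definition, different limit" -- this is
// illustrative Scala, not the RankingMetrics source, and the data is made up.
object ApCutoffDemo extends App {
  // Average precision for one user, up to the given rank cutoff.
  def avgPrecision(recs: Seq[Int], relevant: Set[Int], cutoff: Int): Double = {
    var hits = 0
    var sum = 0.0
    for ((item, idx) <- recs.take(cutoff).zipWithIndex) {
      if (relevant.contains(item)) {
        hits += 1
        sum += hits.toDouble / (idx + 1) // precision at this hit's rank
      }
    }
    sum / math.min(cutoff, relevant.size)
  }

  val recs     = (1 to 10).toSeq // ranked recommendations
  val relevant = Set(1, 2, 9)    // relevant items, one deep in the tail

  // Usual cutoff: min(# recs, # relevant items).
  println(avgPrecision(recs, relevant, math.min(recs.size, relevant.size)))
  // 'Extended' cutoff: # recs = # items. The deep hit at rank 9 only adds
  // a precision term of 3/9, so the score moves little.
  println(avgPrecision(recs, relevant, recs.size))
}
{code}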
precision@k doesn't raise the same question, since there is a single k value
rather than many.
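For comparison, a sketch of precision@k with MLlib's RankingMetrics, where the
single k is passed explicitly (local mode, made-up data):
{code}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.evaluation.RankingMetrics

val sc = new SparkContext("local", "prec-at-k-demo")
// One (recommended, relevant) pair per user; the data is made up.
val predictionAndLabels = sc.parallelize(Seq(
  (Array(3, 7, 1, 9, 5), Array(1, 5, 9))
))
val metrics = new RankingMetrics(predictionAndLabels)
println(metrics.precisionAt(5)) // a single, explicit k: 3 hits / 5 = 0.6
{code}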
AUC may not help you if you're comparing against other results for which you
don't have AUC; it was mostly a side comment.
(Anyway, there is already an AUC implementation here; I'm looking into whether
I can use it.)
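If "here" means MLlib, the existing implementation would presumably be
BinaryClassificationMetrics; a sketch of using it, with invented
(score, label) pairs:
{code}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

val sc = new SparkContext("local", "auc-demo")
// (score, 0/1 label) pairs; the data is made up.
val scoreAndLabels = sc.parallelize(Seq(
  (0.9, 1.0), (0.8, 0.0), (0.6, 1.0), (0.3, 0.0)
))
println(new BinaryClassificationMetrics(scoreAndLabels).areaUnderROC())
{code}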
> Add RankingMetrics to examples.MovieLensALS
> -------------------------------------------
>
> Key: SPARK-4231
> URL: https://issues.apache.org/jira/browse/SPARK-4231
> Project: Spark
> Issue Type: Improvement
> Components: Examples
> Affects Versions: 1.2.0
> Reporter: Debasish Das
> Fix For: 1.2.0
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> examples.MovieLensALS computes RMSE for the MovieLens dataset, but after the
> addition of RankingMetrics and the enhancements to ALS, it is critical to
> look not only at RMSE but also at measures like prec@k and MAP.
> In this JIRA we added RMSE and MAP computation to examples.MovieLensALS, and
> also added a flag indicating whether user or product recommendations are
> being validated.
>
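For reference, a sketch of the kind of MAP computation the description above
calls for -- this is not the actual patch, and the 4.0 relevance threshold and
pipeline shape are assumptions:
{code}
import org.apache.spark.SparkContext._ // pair-RDD implicits (groupByKey, join)
import org.apache.spark.mllib.evaluation.RankingMetrics
import org.apache.spark.mllib.recommendation.{MatrixFactorizationModel, Rating}
import org.apache.spark.rdd.RDD

// Hypothetical helper, not the actual patch: MAP over a held-out test set.
// Treating ratings >= 4.0 as "relevant" is an assumption for illustration.
def meanAveragePrecision(model: MatrixFactorizationModel,
                         test: RDD[Rating]): Double = {
  // Rank each user's test products by predicted score.
  val predicted = model
    .predict(test.map(r => (r.user, r.product)))
    .map(p => (p.user, (p.product, p.rating)))
    .groupByKey()
    .mapValues(_.toArray.sortBy(-_._2).map(_._1))
  // The relevant set: products the user actually rated highly.
  val actual = test
    .filter(_.rating >= 4.0)
    .map(r => (r.user, r.product))
    .groupByKey()
    .mapValues(_.toArray)
  new RankingMetrics(predicted.join(actual).values).meanAveragePrecision
}
{code}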