Exactly, ranking is the only task of a recommender. Precision is not
automatically a good measure of that, but something like MAP@k is.
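To make the distinction concrete, here is a minimal sketch of MAP@k (item IDs and relevance sets below are illustrative, not from any dataset discussed in this thread):

```python
def ap_at_k(recommended, relevant, k):
    """Average precision at k for one user: rewards relevant items ranked early."""
    if not relevant:
        return 0.0
    hits = 0
    score = 0.0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at this cut-off, counted only on hits
    return score / min(len(relevant), k)

def map_at_k(all_recommended, all_relevant, k):
    """MAP@k: mean of the per-user average precision at k."""
    return sum(ap_at_k(recs, rel, k)
               for recs, rel in zip(all_recommended, all_relevant)) / len(all_recommended)
```

For example, with one relevant item "a", the ranking ["a", "x", "y"] gets AP@3 = 1.0 while ["x", "y", "a"] gets AP@3 = 1/3, even though both contain the same hit: the metric is sensitive to where in the list the hit lands, which is exactly the ranking quality a recommender should be tuned for.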
From: Marco Goldin
Date: May 10, 2018 at 10:09:22 PM
To: Pat Ferrel
Very nice article. It makes the importance of treating recommendation as a
ranking task much clearer.
Thanks
On Thu, May 10, 2018 at 19:12, Pat Ferrel wrote:
Here is a discussion of how we used it for tuning with multiple input types:
https://developer.ibm.com/dwblog/2017/mahout-spark-correlated-cross-occurences/
We eventually used video likes, dislikes, and video metadata to increase our
MAP@k by 26%. So this was mainly an exercise in incorporating multiple input types.
You can if you want, but we have external tools for the UR that are much
more flexible. The UR has tuning that can't really be covered by the built-in
API: https://github.com/actionml/ur-analysis-tools They compute MAP@k as well
as a bunch of other metrics, and compare different types of input.
Hi all, I successfully trained a Universal Recommender but I don't know how
to evaluate the model.
Is there a recommended way to do that?
I saw that *predictionio-template-recommender* actually has
an Evaluation.scala file which uses the class *PrecisionAtK* for the
metric.
Should I use this?
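For comparison with MAP@k, Precision@k (what a PrecisionAtK metric measures) only counts how many of the top k recommendations are relevant; it ignores where within the top k the hits appear. A minimal sketch:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant.
    Order within the top k does not affect the score."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k
```

With one relevant item "a", both ["a", "x", "y"] and ["x", "y", "a"] score 1/3 at k=3, even though the first ranking is clearly better. That order-insensitivity is why the thread favors MAP@k for tuning a recommender.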
Hi all,
to elaborate on these cases, the purpose is to create a UR for the cases of:
1. “Users who Viewed this item also Viewed”
2. “Users who Bought this item also Bought”
3. “Users who Viewed this item also Bought”
given events for Buying and Viewing a product.
I would
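The cases above correspond to different choices of primary and secondary indicators in the UR's engine.json. A hypothetical sketch, assuming the events are named "buy" and "view" (the app name is a placeholder, and the first entry in eventNames is the event being predicted, with the rest used as secondary indicators):

```json
{
  "algorithms": [
    {
      "name": "ur",
      "params": {
        "appName": "my-app",
        "eventNames": ["buy", "view"]
      }
    }
  ]
}
```

This configuration would cover cases 2 and 3 ("also Bought", with views as a correlated secondary input); case 1 ("Viewed also Viewed") would need a model where "view" is the primary event, e.g. "eventNames": ["view"].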