Re: UR evaluation

2018-05-10 Thread Pat Ferrel
Exactly, ranking is the only task of a recommender. Precision by itself
doesn't measure ranking quality, but something like MAP@k does.
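
For anyone unfamiliar: MAP@k rewards putting held-out items near the top of
the ranked list, not just anywhere in the top k. A minimal sketch of the
metric in Scala (illustrative code, not taken from the UR or its tools):

    // MAP@k sketch: mean over users of average precision in the top k.
    // `recommended` is the ranked list returned for a user; `relevant`
    // is that user's held-out test items.
    object MapAtK {
      def apAtK(k: Int, recommended: Seq[String], relevant: Set[String]): Double = {
        if (relevant.isEmpty) return 0.0
        var hits = 0
        var sum = 0.0
        for ((item, i) <- recommended.take(k).zipWithIndex if relevant(item)) {
          hits += 1
          sum += hits.toDouble / (i + 1) // precision at each hit's rank
        }
        sum / math.min(relevant.size, k)
      }

      def mapAtK(k: Int, users: Seq[(Seq[String], Set[String])]): Double =
        if (users.isEmpty) 0.0
        else users.map { case (rec, rel) => apAtK(k, rec, rel) }.sum / users.size

      def main(args: Array[String]): Unit = {
        val users = Seq(
          (Seq("a", "b", "c"), Set("a", "c")), // hits at ranks 1 and 3 -> AP ~0.83
          (Seq("x", "y", "z"), Set("q"))       // no hits -> AP 0.0
        )
        println(mapAtK(3, users)) // ~0.4167
      }
    }

Swapping the ranks of the hits changes the score; plain precision@k would
not notice the difference.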


Re: UR evaluation

2018-05-10 Thread Marco Goldin
Very nice article. It makes the importance of treating recommendation as a
ranking task much clearer.
Thanks


Re: UR evaluation

2018-05-10 Thread Pat Ferrel
Here is a discussion of how we used it for tuning with multiple input types: 
https://developer.ibm.com/dwblog/2017/mahout-spark-correlated-cross-occurences/

We used video likes, dislikes, and video metadata to eventually increase our
MAP@k by 26%, so this was mainly an exercise in incorporating data. Since this
research was done we have learned how to better tune this kind of setup, but
that’s a long story fit for another blog post.
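
The inputs go in as named events. As a rough, hypothetical engine.json
fragment (event names are illustrative; by convention the first event in
eventNames is the primary one the model predicts):

    "algorithms": [{
      "name": "ur",
      "params": {
        "comment": "illustrative names; first event is the conversion event",
        "appName": "video-app",
        "indexName": "urindex",
        "typeName": "items",
        "eventNames": ["like", "dislike", "category-pref"]
      }
    }]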


Re: UR evaluation

2018-05-10 Thread Pat Ferrel
You can if you want, but we have external tools for the UR that are much
more flexible, since the UR has tuning that can’t really be covered by the
built-in API: https://github.com/actionml/ur-analysis-tools. They compute
MAP@k as well as a bunch of other metrics and compare different types of
input data, making their queries against a running UR.
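
If you would rather roll your own, the same approach works by hand: hold out
some test events, query the deployed engine, and score the returned ranking
with MAP@k. A minimal sketch of the query side in Scala (host, port, and
user id are illustrative; a deployed PredictionIO engine answers POSTs on
queries.json):

    import java.net.{HttpURLConnection, URL}
    import java.nio.charset.StandardCharsets

    // Sketch: POST a UR query for one user and print the raw JSON response.
    object QueryUR {
      def main(args: Array[String]): Unit = {
        val conn = new URL("http://localhost:8000/queries.json")
          .openConnection().asInstanceOf[HttpURLConnection]
        conn.setRequestMethod("POST")
        conn.setRequestProperty("Content-Type", "application/json")
        conn.setDoOutput(true)
        conn.getOutputStream.write(
          """{"user": "u-123", "num": 10}""".getBytes(StandardCharsets.UTF_8))
        val body = scala.io.Source.fromInputStream(conn.getInputStream).mkString
        println(body) // e.g. {"itemScores": [{"item": "...", "score": 0.7}, ...]}
      }
    }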


UR evaluation

2018-05-10 Thread Marco Goldin
Hi all, I successfully trained a Universal Recommender but I don't know how
to evaluate the model.

Is there a recommended way to do that?
I saw that predictionio-template-recommender has an Evaluation.scala file
which uses the PrecisionAtK class for its metric.
Should I use that template to implement a similar evaluation for the UR?

Thanks,
Marco Goldin
Horizons Unlimited s.r.l.