Re: UR evaluation

2018-05-10 Thread Pat Ferrel
Exactly: ranking is the only task of a recommender. Precision is not
automatically a good measure of that, but something like MAP@k is.
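
For illustration, a minimal AP@k/MAP@k sketch in Scala (hypothetical data
shapes, not the UR's or the analysis tools' implementation); unlike plain
precision@k, it rewards putting relevant items near the top of the ranking:

  // Average precision at k for one user: `recommended` is the ranked list of
  // item ids, `relevant` is that user's held-out items.
  def averagePrecisionAtK(recommended: Seq[String], relevant: Set[String], k: Int): Double =
    if (relevant.isEmpty) 0.0
    else {
      val (_, sum) = recommended.take(k).zipWithIndex.foldLeft((0, 0.0)) {
        case ((hits, acc), (item, rank)) =>
          if (relevant.contains(item)) (hits + 1, acc + (hits + 1).toDouble / (rank + 1))
          else (hits, acc)
      }
      sum / math.min(relevant.size, k)
    }

  // MAP@k is the mean of AP@k over all evaluated users.
  def mapAtK(perUser: Seq[(Seq[String], Set[String])], k: Int): Double =
    if (perUser.isEmpty) 0.0
    else perUser.map { case (recs, rel) => averagePrecisionAtK(recs, rel, k) }.sum / perUser.size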




Re: UR evaluation

2018-05-10 Thread Marco Goldin
Very nice article. It makes the importance of treating recommendation as a
ranking task much clearer.
Thanks



Re: UR evaluation

2018-05-10 Thread Pat Ferrel
Here is a discussion of how we used it for tuning with multiple input types: 
https://developer.ibm.com/dwblog/2017/mahout-spark-correlated-cross-occurences/

We used video likes, dislikes, and video metadata to eventually increase our
MAP@k by 26%, so this was mainly an exercise in incorporating data. Since this
research was done we have learned how to better tune this type of situation,
but that’s a long story fit for another blog post.


From: Marco Goldin 
Reply: user@predictionio.apache.org 
Date: May 10, 2018 at 9:54:23 AM
To: Pat Ferrel 
Cc: user@predictionio.apache.org 
Subject:  Re: UR evaluation  

Thank you very much, I didn't know about this tool, I'll definitely try it.
It's clearly better to have such a specific instrument.





Re: UR evaluation

2018-05-10 Thread Pat Ferrel
You can if you want, but we have external tools for the UR that are much
more flexible. The UR has tuning that can’t really be covered by the built-in
API; see https://github.com/actionml/ur-analysis-tools. They compute MAP@k as
well as a bunch of other metrics and compare different types of input data,
making their queries against a running UR.
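
For reference, a typical query against a running UR looks like this (default
PredictionIO serving port; the user id is hypothetical):

  curl -H "Content-Type: application/json" \
    -d '{ "user": "u-1", "num": 10 }' \
    http://localhost:8000/queries.json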




UR evaluation

2018-05-10 Thread Marco Goldin
Hi all, I successfully trained a Universal Recommender but I don't know how
to evaluate the model.

Is there a recommended way to do that?
I saw that *predictionio-template-recommender* has an Evaluation.scala
file which uses the class *PrecisionAtK* for the metrics.
Should I use this template to implement a similar evaluation for the UR?

Thanks,
Marco Goldin
Horizons Unlimited s.r.l.


RE: UR: build/train/deploy once & querying for 3 use cases

2018-05-10 Thread Nasos Papageorgiou
Hi all,
To elaborate on these cases: the purpose is to create a UR covering:

1. “Users who Viewed this item also Viewed”

2. “Users who Bought this item also Bought”

3. “Users who Viewed this item also Bought”

while having buy and view events for products.
I would like to ask some questions:

1. In the data source parameters (file events.json): the sequence in which
the events are defined does not matter, right?

2. If I specify only one event type (i.e. "view") in "eventNames" in the
algorithm section and no event in "blacklistEvents", can the second event
type (i.e. "buy") appear in the recommended list? (See the engine.json
sketch after this list.)

3. If I use only "user" in the query, the "item" case will not be used for
the recommendations. What happens with new users in that case? Should I use
both "user" and "item" instead?

4. Values of less than 1 in "userBias" and "itemBias" in the query do not
seem to have any effect on the result. Is this expected?

5. Is it feasible to build/train/deploy only once, and query for all 3 use
cases?

6. How can we make queries against the different apps, since there is no
obvious way to choose one in the query parameters or the URL?
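
For context on question 2, a sketch of the engine.json fragments involved
(the app and index names are hypothetical; the first entry in "eventNames"
is the primary indicator):

  {
    "datasource": {
      "params": {
        "appName": "my-shop",
        "eventNames": ["buy", "view"]
      }
    },
    "algorithm": {
      "params": {
        "appName": "my-shop",
        "indexName": "urindex",
        "typeName": "items",
        "eventNames": ["buy", "view"],
        "blacklistEvents": []
      }
    }
  }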

Thank you.



*From:* Pat Ferrel [mailto:p...@occamsmachete.com]
*Sent:* Wednesday, May 09, 2018 4:41 PM
*To:* user@predictionio.apache.org; gerasimos xydas
*Subject:* Re: UR: build/train/deploy once & querying for 3 use cases



Why do you want to throw away user behavior in making recommendations? The
lift you get in purchases will be less.



There is a use case for this when you are making recommendations inside a
session, where the user is browsing/viewing things on a hunt for something. In
that case you would want to make recs using the user’s session history of
views, but you still have to build the model with purchase as the primary
indicator or you won’t get purchase recommendations, and believe me,
recommending views is a road to bad results: people view many things they do
not buy, so only view behavior that leads to purchases belongs in the model.
So create a model with purchase as the primary indicator and view as the
secondary.
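
In engine.json terms that ordering would look roughly like this (a sketch;
the first entry in "eventNames" is the primary indicator the model is built
around):

  "eventNames": ["purchase", "view"]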



Once you have the model, use only the user’s session viewing history as the
Elasticsearch query.



This is a feature on our list.




From: gerasimos xydas
Reply: user@predictionio.apache.org
Date: May 9, 2018 at 6:20:46 AM
To: user@predictionio.apache.org
Subject: UR: build/train/deploy once & querying for 3 use cases



Hello everybody,

We are experimenting with the Universal Recommender to provide
recommendations for the 3 distinct use cases below:

- Get a product recommendation based on product views
- Get a product recommendation based on product purchases
- Get a product recommendation based on previous purchases and views (i.e.
users who viewed this bought that)

The event server is fed from a single app with two types of events: "view"
and "purchase".

1. How should we customize the query to fetch results for each separate
case?
2. Is it feasible to build/train/deploy only once, and query for all 3 use
cases?


Best Regards,

Gerasimos