If the output of the recommenders is estimated ratings, then they are
comparable. You can take the union of all top-N lists, then ask each
recommender for an estimated rating for each item it did not score already.
Average the ratings and rank on that, or perhaps on average minus standard
deviation.
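
Here is a rough sketch of that, assuming the Mahout Taste Recommender
interface (recommend() and estimatePreference()); the class and method names
are just made up for illustration, and NaN estimates (no basis to estimate)
are not handled:

import java.util.*;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;

public class BlendByEstimate {

  // Rank items by the mean estimated rating across all recommenders.
  // Candidates are the union of each recommender's top-N list; missing
  // estimates are filled in with estimatePreference().
  public static List<Long> blend(List<Recommender> recommenders, long userID, int n)
      throws TasteException {

    // Union of all top-N lists, keeping the ratings already computed.
    Map<Long,Map<Recommender,Float>> scores = new HashMap<>();
    for (Recommender rec : recommenders) {
      for (RecommendedItem item : rec.recommend(userID, n)) {
        scores.computeIfAbsent(item.getItemID(), k -> new HashMap<>())
              .put(rec, item.getValue());
      }
    }

    // Ask each recommender for an estimate of every item it did not score.
    Map<Long,Double> average = new HashMap<>();
    for (Map.Entry<Long,Map<Recommender,Float>> e : scores.entrySet()) {
      double sum = 0.0;
      for (Recommender rec : recommenders) {
        Float value = e.getValue().get(rec);
        sum += (value != null) ? value : rec.estimatePreference(userID, e.getKey());
      }
      average.put(e.getKey(), sum / recommenders.size());
    }

    // Rank on the average, highest first. Subtracting a per-item standard
    // deviation before sorting would be the "average minus std dev" variant.
    List<Long> ranked = new ArrayList<>(average.keySet());
    ranked.sort((a, b) -> Double.compare(average.get(b), average.get(a)));
    return ranked;
  }
}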

Most recommenders based on ratings work this way. Those not based on ratings
don't, and their values are not necessarily comparable in a meaningful way.
In that case you would have to make up a comparable score based on rank
alone. I would use 0.5 for the first item, 0.25 for the second, and so on,
then follow the process above.
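
A similar sketch for the rank-only case (plain Java, class name again
hypothetical), giving each list's first item 0.5, the second 0.25, etc., and
ranking on the average of those made-up scores:

import java.util.*;

public class BlendByRank {

  // Convert each ranked list to scores 0.5, 0.25, 0.125, ... and rank the
  // union of items on the average score. Items absent from a list contribute 0.
  public static List<Long> blend(List<List<Long>> rankedLists) {
    Map<Long,Double> total = new HashMap<>();
    for (List<Long> list : rankedLists) {
      double score = 0.5;
      for (long itemID : list) {
        total.merge(itemID, score, Double::sum);
        score /= 2.0;
      }
    }
    // Dividing by the number of lists gives the average; it does not change
    // the ordering, so sorting on the totals is equivalent.
    List<Long> ranked = new ArrayList<>(total.keySet());
    ranked.sort((a, b) -> Double.compare(total.get(b), total.get(a)));
    return ranked;
  }
}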

Sean
 On Sep 19, 2012 3:49 PM, "yamo93" <[email protected]> wrote:

> Hi,
>
> I'm trying to make hybrid recommendations. So I run different recommenders
> (with different similarity algorithms) and I want to mix the results (each
> algorithm returns a list of recommended items). This seems to be known as a
> Mixed Hybrid in the Burke taxonomy.
>
> I thought about simply mapping each list of recommended items into the
> range 0..1 and computing the average.
>
> But the results are not necessarily distributed linearly (e.g. Cosine).
>
> What are the best math functions to do this?
>
> Is there an existing implementation in Mahout or in another framework?
>
> Thanks for your help,
> Yann.
>