You could do this several ways:

1) You could see whether users respond to one style of recommendations (obtained through one type of similarity coefficient) better than the others, that is, did they click on a recommendation produced with Tanimoto versus log-likelihood?

2) You could also use something like DCG (http://en.wikipedia.org/wiki/Discounted_cumulative_gain) to measure how good each algorithm's ranking is compared to another. A small sketch of that calculation follows below.
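If you go the DCG route, the calculation itself is simple once you've held out some known interactions per user and lined them up against each algorithm's ranked output. Below is a minimal, Mahout-independent sketch; the class name, method names, and relevance values are all made up for illustration, and the relevance scores would come from your own held-out data:

// A minimal sketch of computing DCG/NDCG for a ranked recommendation list.
// Nothing here is Mahout API; relevance would come from your held-out data
// (e.g. 1.0 if the user actually interacted with the item, else 0.0).
public class DcgExample {

    // DCG = sum over positions i (1-based) of rel_i / log2(i + 1)
    static double dcg(double[] relevance) {
        double sum = 0.0;
        for (int i = 0; i < relevance.length; i++) {
            sum += relevance[i] / (Math.log(i + 2) / Math.log(2));
        }
        return sum;
    }

    // NDCG normalizes by the DCG of the ideal (descending) ordering,
    // so scores from different users or algorithms are comparable.
    static double ndcg(double[] relevance) {
        double[] ideal = relevance.clone();
        java.util.Arrays.sort(ideal);
        // reverse into descending order
        for (int i = 0; i < ideal.length / 2; i++) {
            double tmp = ideal[i];
            ideal[i] = ideal[ideal.length - 1 - i];
            ideal[ideal.length - 1 - i] = tmp;
        }
        double idealDcg = dcg(ideal);
        return idealDcg == 0.0 ? 0.0 : dcg(relevance) / idealDcg;
    }

    public static void main(String[] args) {
        // Hypothetical relevance of the top-5 items each algorithm recommended
        // for one user, judged against that user's held-out interactions.
        double[] tanimotoRun = {1.0, 0.0, 1.0, 0.0, 0.0};
        double[] loglikelihoodRun = {1.0, 1.0, 0.0, 0.0, 0.0};
        System.out.println("Tanimoto NDCG:       " + ndcg(tanimotoRun));
        System.out.println("Log-likelihood NDCG: " + ndcg(loglikelihoodRun));
    }
}

Averaging NDCG over many users gives you a single number per similarity measure that you can compare directly.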
> From: [email protected]
> Date: Fri, 7 Sep 2012 18:22:47 -0400
> Subject: evaluating distributed recommendation results
> To: [email protected]
>
> Hi,
>
> I'm generating item similarities and recommendations using the
> distributed jobs. Is there a way I can evaluate the results? The MIA
> book describes how to do this with the non-distributed recommenders,
> but I can't find anything on evaluating the distributed stuff. Any
> tips on doing this?
>
> Thanks,
> Matt
