In GenericUserBasedRecommender the concept of a neighborhood seems to be fundamental; that is, it is essentially a classic implementation of the kNN algorithm.
But that is not the case with GenericItemBasedRecommender. I understand that the two approaches are not meant to be completely symmetric, but still, wouldn't it make sense, from a performance perspective, to compute the items' neighborhoods first and then use those to compute recommendations? If kNN were run on the items first, every item-item similarity would be computed exactly once, whereas in GenericItemBasedRecommender each item-item similarity appears to be computed multiple times. (How many times depends on the data, but still.) I am wondering if anybody has thoughts on the validity of doing item-item kNN, in terms of: 1) performance, 2) quality of recommendations.
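To make the idea concrete, here is a rough sketch of what I have in mind, in Taste terms. It is untested and the data file, user ID, and neighborhood size are just placeholders; I am also assuming (if I understand it correctly) that GenericItemSimilarity's precomputing constructor computes each item-item similarity once up front and keeps only the top-k per item, which is more or less the "item neighborhood" I am describing:

    import java.io.File;
    import java.util.List;

    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.GenericItemSimilarity;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

    public class ItemNeighborhoodSketch {

      public static void main(String[] args) throws Exception {
        // Placeholder ratings file: userID,itemID,preference per line.
        DataModel model = new FileDataModel(new File("ratings.csv"));

        // The underlying similarity; every call recomputes from the raw data.
        ItemSimilarity rawSimilarity = new PearsonCorrelationSimilarity(model);

        // The idea: precompute all item-item similarities once, keeping only
        // the top-k per item (an item "neighborhood"), so the recommender
        // looks similarities up instead of recomputing them per request.
        int k = 50; // neighborhood size, chosen arbitrarily for the sketch
        ItemSimilarity precomputed = new GenericItemSimilarity(rawSimilarity, model, k);

        GenericItemBasedRecommender recommender =
            new GenericItemBasedRecommender(model, precomputed);

        // Example request: top 10 recommendations for user 1 (placeholder ID).
        List<RecommendedItem> top = recommender.recommend(1L, 10);
        for (RecommendedItem item : top) {
          System.out.println(item);
        }
      }
    }

My question is really about whether truncating to the top-k neighbors per item like this is a reasonable trade-off, not about the mechanics of wiring it up.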
