We recently helped a client do this and actually got higher relevance than
with the scores that had been "fought for". That doesn't mean your scores
will fare the same way, but I think the benefit of getting content signals
into the same framework outweighed any cleverness in the handling of the
traditional collaborative signals.
LLR scores should essentially never be used as weights, but rather should
only be used as filters.

I had several examples of this in my dissertation, the most notable being
the way that document routing using LLR as a filter worked better than the
learned scores for the same task. Ironically, the LLR filter ran on the
document retrieval version of the routing system that it beat. Whether
there would have been a better way to use LLR as a filter and compute
sophisticated weights for the surviving terms isn't a question I asked,
and lots of water has flowed under the bridge since then.

There have been lots of replications of the LLR-is-better-for-filtering
result over the years and, as far as I know, no refutations.

On Tue, Nov 6, 2012 at 12:42 PM, Johannes Schulte <
[email protected]> wrote:

> Maybe I'll try it out and throw away the scores we fought so hard for.
> You're right, mixing vector space model scores and LLR is questionable
> without more sophisticated methods.
> Thanks for the answers!
>
>
> On Tue, Nov 6, 2012 at 5:44 PM, Ted Dunning <[email protected]> wrote:
>
> > On Mon, Nov 5, 2012 at 9:16 PM, Johannes Schulte <
> > [email protected]> wrote:
> >
> > > Is it possible you are mixing up payloads and stored fields? The latter
> > > ones are not indexed and can only be used for the top n results. Maybe
> > > we're talking about different things...
> >
> > I think I did mix these up. I haven't been active with Lucene for some
> > time.
> >
> > > With the question of how to include the similarities I was actually
> > > asking for the way to include the scores of, say, an LLR value in an
> > > index. Do you just take the top x related items and throw the
> > > similarity score away?
> >
> > LLR is not a good score for weighting. It is an excellent score for
> > filtering. So yes, I just take the top few hundred related items and
> > throw away the similarity score.
> >
> > Sebastian has demonstrated that trimming the related objects this way
> > has no perceptible effects, but if you have content relations as well,
> > you get even more assurance that you will get some kind of reasonable
> > recommendations.
> >
> > > As for the performance: Yes, sorry, that was a little bragging and not
> > > really informative :) .
> >
> > Very informative actually. The performance is what made it clear that I
> > was confused.
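A minimal, self-contained sketch of the filter-then-drop-the-score approach
described above, assuming a 2x2 co-occurrence table (k11, k12, k21, k22) is
already available for each candidate item. It uses the entropy form of the
G^2 log-likelihood ratio (along the lines of Dunning 1993 and Mahout's
LogLikelihood class); the class and method names and the long[] count layout
are illustrative only, not Mahout's actual API:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class LlrFilterSketch {

    // Entropy form of the G^2 log-likelihood ratio for a 2x2 table:
    //   k11 = co-occurrences of A and B, k12 = A without B,
    //   k21 = B without A,               k22 = neither.
    static double llr(long k11, long k12, long k21, long k22) {
        double rowEntropy = entropy(k11 + k12, k21 + k22);
        double colEntropy = entropy(k11 + k21, k12 + k22);
        double matEntropy = entropy(k11, k12, k21, k22);
        double g2 = 2.0 * (rowEntropy + colEntropy - matEntropy);
        return Math.max(g2, 0.0); // clamp tiny negative rounding noise
    }

    static double entropy(long... counts) {
        long total = 0;
        double sumXLogX = 0.0;
        for (long c : counts) {
            total += c;
            sumXLogX += xLogX(c);
        }
        return xLogX(total) - sumXLogX;
    }

    static double xLogX(long x) {
        return x == 0 ? 0.0 : x * Math.log(x);
    }

    // LLR as a *filter*: rank candidate items by LLR, keep the top k,
    // and return only their ids. The score itself is thrown away, so the
    // indicator field that gets indexed is effectively binary.
    static List<String> topRelated(Map<String, long[]> candidateCounts, int k) {
        List<Map.Entry<String, long[]>> entries =
            new ArrayList<>(candidateCounts.entrySet());
        entries.sort(Comparator.comparingDouble(
            (Map.Entry<String, long[]> e) ->
                -llr(e.getValue()[0], e.getValue()[1],
                     e.getValue()[2], e.getValue()[3])));
        List<String> related = new ArrayList<>();
        for (Map.Entry<String, long[]> e :
                entries.subList(0, Math.min(k, entries.size()))) {
            related.add(e.getKey()); // id only; no weight carried into the index
        }
        return related;
    }
}

The point is that llr() is only consulted inside topRelated() for ranking
and cutting the candidate list; what gets written into the indicator field
of the index is just a list of item ids, so downstream scoring never sees
an LLR weight.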
