I've managed to build my own Similarity class, plug it in, and use explain() to convince myself that I am, indeed, getting the weightings I desire. My test-case documents are yielding precisely the intermediate values needed for the alternate scoring.
There's just one thing... When I do an .explain(), I'm getting values back for fieldNorm() that are improperly biasing my scores. According to http://www.mail-archive.com/[EMAIL PROTECTED]/msg06275.html, this value is actually computed at index time. Indeed, that is the case: if I generate a new index using my custom Similarity class, the bias disappears and all is right with the world.

However, I'm not exactly thrilled at the prospect of maintaining a second index. Recall from my initial message that users will be toggling between the standard scoring and the alternative. And while, yes, I know this value is precomputed and stored in the index, what I'd like to do is simply ignore it. Somewhere, the code that computes that big scoring equation has to pull that value and use it. I figure if I override -that-, I can ignore the stored value and treat fieldNorm() as 1 whenever the custom version is in use. The only problem is, I'm not sure whether this is a property set somewhere, a method override in a replacement class, or more brain surgery on my one-off copy of Lucene. (I've sketched roughly what I'm picturing below, after my sig.)

Gurus, may I approach?

-wls
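P.S. Here's roughly the override I'm picturing, in case it helps clarify the question. This is only a sketch: the class name is mine, and I'm assuming a Similarity API where decodeNormValue() is an instance method that the scorers actually consult at search time; on versions where the norm decoder is still a static table, this hook presumably wouldn't take, and I'd have to fall back to something like a FilterIndexReader that hands back fake norms instead.

    import org.apache.lucene.search.DefaultSimilarity;

    public class NormIgnoringSimilarity extends DefaultSimilarity {

        // Index time: documents written with this Similarity get a flat norm,
        // so any new segments come out unbiased as well.
        @Override
        public float lengthNorm(String fieldName, int numTerms) {
            return 1.0f;
        }

        // Search time: whatever byte is stored in the existing index, decode
        // it as 1.0f so fieldNorm() effectively drops out of the score.
        // (This only helps if the scorers go through this method rather than
        // the static norm decoder.)
        @Override
        public float decodeNormValue(byte b) {
            return 1.0f;
        }
    }

The idea would be to hand this to the searcher only when the alternate scoring is toggled on, e.g. searcher.setSimilarity(new NormIgnoringSimilarity()), and leave the default Similarity in place the rest of the time, so the single index serves both modes.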