[
https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330411#comment-14330411
]
Robert Muir commented on LUCENE-6276:
-------------------------------------
{quote}
I'm curious if you already have concrete ideas for the match costs of our
existing queries?
{quote}
See above in the description. We know the average number of positions per doc
(totalTermFreq/docFreq) and so on, so we can compute the amortized cost of
reading one position, and it's easy from there.
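As a minimal sketch of that idea (illustrative only, not Lucene's actual implementation; the class, method names, and the unit cost constant are assumptions):

```java
// Hypothetical sketch: estimating a phrase query's match cost from index
// statistics, as suggested above. Not Lucene's real API.
public class MatchCostSketch {

    // Average positions read per matching doc for one term:
    // totalTermFreq / docFreq.
    static float avgPositionsPerDoc(long totalTermFreq, int docFreq) {
        return (float) totalTermFreq / docFreq;
    }

    // A phrase's cost could be the sum over its terms, scaled by some
    // assumed per-position read cost (an arbitrary constant here).
    static float phraseMatchCost(long[] totalTermFreqs, int[] docFreqs) {
        final float COST_PER_POSITION = 1.0f; // assumed unit cost
        float cost = 0;
        for (int i = 0; i < docFreqs.length; i++) {
            cost += COST_PER_POSITION
                    * avgPositionsPerDoc(totalTermFreqs[i], docFreqs[i]);
        }
        return cost;
    }

    public static void main(String[] args) {
        // Term A: 1000 positions across 100 docs -> 10 positions/doc average.
        // Term B:  300 positions across 100 docs ->  3 positions/doc average.
        System.out.println(
            phraseMatchCost(new long[] {1000, 300}, new int[] {100, 100}));
        // prints 13.0
    }
}
```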
{quote}
Maybe it should not only measure the cost of the operation but also how likely
it is to match?
{quote}
I don't agree. You can already get this with
Scorer.getApproximation().cost()/Scorer.cost().
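To illustrate that ratio with assumed numbers (a sketch, not actual Lucene API usage): approximation().cost() estimates how many candidate docs the cheap first phase returns, and cost() estimates how many actually match, so their ratio gives candidates per confirmed match, and its inverse is an estimated match likelihood.

```java
// Hypothetical numbers illustrating the cost ratio mentioned above.
public class MatchLikelihoodSketch {

    // Candidates the cheap phase returns per doc the second phase confirms.
    static double candidatesPerMatch(long approximationCost, long scorerCost) {
        return (double) approximationCost / scorerCost;
    }

    public static void main(String[] args) {
        long approximationCost = 10000; // assumed candidate-doc estimate
        long scorerCost = 2500;         // assumed confirmed-match estimate
        double ratio = candidatesPerMatch(approximationCost, scorerCost);
        System.out.println(ratio);       // prints 4.0
        System.out.println(1.0 / ratio); // prints 0.25 (estimated likelihood)
    }
}
```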
> Add matchCost() api to TwoPhaseDocIdSetIterator
> -----------------------------------------------
>
> Key: LUCENE-6276
> URL: https://issues.apache.org/jira/browse/LUCENE-6276
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Robert Muir
>
> We could add a method like TwoPhaseDISI.matchCost(), defined as something
> like an estimate of nanoseconds or similar.
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array
> so that cheaper ones are called first. Today it has no idea if one scorer is
> a simple phrase scorer on a short field vs another that might do some geo
> calculation or more expensive stuff.
> PhraseScorers could implement this based on index statistics (e.g.
> totalTermFreq/maxDoc)
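The reordering the issue proposes could be sketched as follows; TwoPhase, matchCost(), and the helper methods here are illustrative stand-ins, not Lucene's actual classes:

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of the proposal: sort two-phase iterators so the
// cheapest matches() checks run first, letting an early non-match skip
// the expensive ones.
public class ConjunctionOrderSketch {

    interface TwoPhase {
        float matchCost();  // proposed API: estimated cost per matches() call
        boolean matches();  // the expensive second-phase check
    }

    static void sortByMatchCost(TwoPhase[] twoPhaseIterators) {
        Arrays.sort(twoPhaseIterators,
                    Comparator.comparingDouble(TwoPhase::matchCost));
    }

    // Test helper: a two-phase check with a fixed cost.
    static TwoPhase withCost(float cost) {
        return new TwoPhase() {
            public float matchCost() { return cost; }
            public boolean matches() { return true; }
        };
    }

    public static void main(String[] args) {
        TwoPhase[] iterators = { withCost(5.0f), withCost(1.0f) };
        sortByMatchCost(iterators);
        System.out.println(iterators[0].matchCost()); // prints 1.0
    }
}
```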