[
https://issues.apache.org/jira/browse/LUCENE-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16889970#comment-16889970
]
Atri Sharma commented on LUCENE-8727:
-------------------------------------
bq. we will have to skip all these docs with smaller doc Ids even if they have
the same scores as docs with higher doc Ids and should be selected instead.
That should be avoidable: since we would need a custom PQ implementation anyway
if we decided to share the queue, the PQ can tie-break the other way round on
doc IDs. One advantage of sharing the PQ is that we can skip the merge step
during the reduce call of the CollectorManager. A sketch of the reversed
tie-break is below.
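To make the tie-break concrete, here is a minimal sketch of such a queue built
on Lucene's PriorityQueue and ScoreDoc. The class name SharedHitQueue is made
up for illustration, and the exact tie-break direction would still need to be
validated against the shared min-competitive-score logic.
{code:java}
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.util.PriorityQueue;

// Sketch only: a hit queue whose tie-break on equal scores is reversed, so
// that skipping equal-scoring hits via a shared min competitive score never
// discards a hit that should have won the tie on doc ID.
final class SharedHitQueue extends PriorityQueue<ScoreDoc> {

  SharedHitQueue(int numHits) {
    super(numHits);
  }

  @Override
  protected boolean lessThan(ScoreDoc hitA, ScoreDoc hitB) {
    if (hitA.score == hitB.score) {
      // Reversed tie-break: on equal scores, evict the hit with the smaller
      // doc ID first, i.e. prefer higher doc IDs.
      return hitA.doc < hitB.doc;
    }
    return hitA.score < hitB.score;
  }
}
{code}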
I am hesitant to introduce a synchronized block into the collector-level
collection path -- it has the potential to blow up in our faces and become a
performance bottleneck. A lock-free way of sharing just the min competitive
score is sketched below.
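As a minimal sketch of what avoiding the lock could look like, each slice could
publish only its local minimum competitive score into a shared, monotonically
increasing accumulator. The class and method names below are illustrative, not
an existing Lucene API.
{code:java}
import java.util.concurrent.atomic.DoubleAccumulator;

// Sketch of a lock-free alternative to a synchronized block: each slice
// publishes the score of the bottom entry of its (full) local queue, and all
// slices read the maximum of those bottoms as a safe min competitive score.
final class GlobalMinScore {
  private final DoubleAccumulator maxOfMins = new DoubleAccumulator(Double::max, 0.0);

  // Called by a slice's collector whenever its local queue raises the bar.
  void publish(float localMinCompetitiveScore) {
    maxOfMins.accumulate(localMinCompetitiveScore);
  }

  // Read cheaply (no blocking) by other slices before they call
  // Scorer#setMinCompetitiveScore on their own scorers.
  float get() {
    return (float) maxOfMins.get();
  }
}
{code}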
I am curious whether we should simply have both versions -- sharing the PQ/min
score, and a CollectorManager that exposes callbacks which the dependent
Collectors invoke at regular intervals. The former should work well with a
small number of slices, while the latter should work well with a large number
of slices. A sketch of the callback idea follows.
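Here is a minimal sketch of the callback idea: a wrapper collector that runs a
user-supplied callback every N collected hits, e.g. to exchange min competitive
scores between slices. The CallbackCollector name and the Runnable callback are
hypothetical, not existing Lucene APIs.
{code:java}
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.FilterLeafCollector;
import org.apache.lucene.search.LeafCollector;
import org.apache.lucene.search.ScoreMode;

// Sketch only: wraps any Collector and fires a callback at a fixed interval of
// collected hits, so slices can periodically synchronize state (e.g. min
// competitive scores) without contending on every single hit.
final class CallbackCollector implements Collector {
  private final Collector in;
  private final int interval;
  private final Runnable callback;
  private int collected;

  CallbackCollector(Collector in, int interval, Runnable callback) {
    this.in = in;
    this.interval = interval;
    this.callback = callback;
  }

  @Override
  public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException {
    return new FilterLeafCollector(in.getLeafCollector(context)) {
      @Override
      public void collect(int doc) throws IOException {
        super.collect(doc);
        if (++collected % interval == 0) {
          callback.run(); // e.g. publish/read the shared min competitive score
        }
      }
    };
  }

  @Override
  public ScoreMode scoreMode() {
    return in.scoreMode();
  }
}
{code}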
> IndexSearcher#search(Query,int) should operate on a shared priority queue
> when configured with an executor
> ----------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-8727
> URL: https://issues.apache.org/jira/browse/LUCENE-8727
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Adrien Grand
> Priority: Minor
>
> If IndexSearcher is configured with an executor, then the top docs for each
> slice are computed separately before being merged once the top docs for all
> slices are computed. With block-max WAND this is a bit of a waste of
> resources: it would be better if an increase of the min competitive score
> could help skip non-competitive hits on every slice and not just the current
> one.