[ https://issues.apache.org/jira/browse/LUCENE-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340544#comment-14340544 ]

Shikhar Bhushan commented on LUCENE-6294:
-----------------------------------------

This is great. When testing LUCENE-5299, I saw some improvements from adding
a configurable parallelism throttle at the search-request level using a
semaphore; that might be useful to have here too, i.e. being able to cap how
many segments are searched concurrently. That can help preserve resources for
concurrent search requests, or reduce context switching when using an
unbounded pool.
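
A minimal sketch of what such a semaphore-based throttle could look like, as
an ExecutorService wrapper that could be handed to IndexSearcher. The class
name, permit count, and placement of the acquire are illustrative assumptions,
not taken from LUCENE-5299 or from the patch attached here:

    import java.util.List;
    import java.util.concurrent.AbstractExecutorService;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;

    // Sketch only: caps how many submitted tasks (e.g. per-segment search
    // tasks) may run concurrently on the delegate pool.
    final class ThrottledExecutorService extends AbstractExecutorService {
      private final ExecutorService delegate;
      private final Semaphore permits;

      ThrottledExecutorService(ExecutorService delegate, int maxConcurrent) {
        this.delegate = delegate;
        this.permits = new Semaphore(maxConcurrent);
      }

      @Override
      public void execute(Runnable task) {
        delegate.execute(() -> {
          try {
            permits.acquire();            // wait for a free slot
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          }
          try {
            task.run();                   // at most maxConcurrent tasks run at once
          } finally {
            permits.release();
          }
        });
      }

      // Lifecycle methods simply delegate.
      @Override public void shutdown() { delegate.shutdown(); }
      @Override public List<Runnable> shutdownNow() { return delegate.shutdownNow(); }
      @Override public boolean isShutdown() { return delegate.isShutdown(); }
      @Override public boolean isTerminated() { return delegate.isTerminated(); }
      @Override public boolean awaitTermination(long timeout, TimeUnit unit)
          throws InterruptedException {
        return delegate.awaitTermination(timeout, unit);
      }
    }

Usage would be something like new IndexSearcher(reader, new
ThrottledExecutorService(pool, 4)). Note that with a bounded delegate pool,
tasks blocked in acquire() still occupy pool threads; acquiring the permit
before submission is an alternative design.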

> Generalize how IndexSearcher parallelizes collection execution
> --------------------------------------------------------------
>
>                 Key: LUCENE-6294
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6294
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Trivial
>             Fix For: Trunk, 5.1
>
>         Attachments: LUCENE-6294.patch
>
>
> IndexSearcher takes an ExecutorService that can be used to parallelize 
> collection execution. This is useful if you want to trade throughput for 
> latency.
> However, this executor service is only used if you search for top docs. In 
> that case, we create one collector per slice and call TopDocs.merge at the 
> end. If you use search(Query, Collector), the executor service is never 
> used.
> But there are other collectors that could work the same way as top-docs 
> collectors, e.g. TotalHitCountCollector, and maybe also some of our users' 
> collectors. So maybe IndexSearcher could expose a generic way to take 
> advantage of the executor service?
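
The quoted description suggests the shape of such a generic hook: a factory
producing one collector per slice plus a merge step, analogous to how
TopDocs.merge combines per-slice top docs. A minimal sketch under that
assumption (the interface and class names below are illustrative, not taken
from the attached patch):

    import java.io.IOException;
    import java.util.Collection;

    import org.apache.lucene.search.Collector;
    import org.apache.lucene.search.TotalHitCountCollector;

    // Hypothetical shape of a generic parallel-collection hook: one
    // collector per slice, plus a reduce step to merge per-slice results.
    interface CollectorManager<C extends Collector, T> {
      C newCollector() throws IOException;
      T reduce(Collection<C> collectors) throws IOException;
    }

    // TotalHitCountCollector fits this model naturally, since per-slice
    // hit counts merge by simple addition.
    class HitCountManager implements CollectorManager<TotalHitCountCollector, Integer> {
      @Override
      public TotalHitCountCollector newCollector() {
        return new TotalHitCountCollector(); // fresh collector per slice/thread
      }

      @Override
      public Integer reduce(Collection<TotalHitCountCollector> collectors) {
        int total = 0;
        for (TotalHitCountCollector c : collectors) {
          total += c.getTotalHits();         // sum the per-slice counts
        }
        return total;
      }
    }

IndexSearcher would then create one collector per slice via newCollector(),
run them on the executor service, and hand the finished collectors to
reduce() to produce the final result.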


