Thanks for the clarifications. Another point I thought about is the disk efficiency of serving random IO. Many parallel threads could end up hitting just one or two disks in the cluster…
Think I can skip it safely for my workloads.

--
Ravi

On Fri, Feb 6, 2015 at 3:09 PM, Aaron McCurry <[email protected]> wrote:

> The ExecutorService (thread pool) put inside the IndexSearcher was an
> attempt at making the segments search in parallel when available. However,
> there is a limitation in Lucene that does not allow parallel segment
> searches when you are using Collectors:
>
> https://github.com/apache/lucene-solr/blob/lucene_solr_4_3_0/lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java#L595
>
> We override this method to allow for Tracing:
>
> https://github.com/apache/incubator-blur/blob/master/blur-core/src/main/java/org/apache/blur/server/IndexSearcherCloseableBase.java#L46
>
> and here:
>
> https://github.com/apache/incubator-blur/blob/master/blur-core/src/main/java/org/apache/blur/server/IndexSearcherCloseableSecureBase.java#L51
>
> I agree that if you are already running a lot of shards per server,
> enhancing Lucene to allow for parallel searching of segments could become
> counterproductive. I have seen underutilized systems that could take
> advantage of the parallel segment search, so as with any feature like
> this, it depends. :-)
>
> Aaron
>
> On Fri, Feb 6, 2015 at 2:39 AM, Ravikumar Govindarajan <
> [email protected]> wrote:
>
> > Blur by default uses a SearchExecutor for its IndexSearcher. I believe
> > Lucene uses it to search the segments of a single shard in parallel.
> >
> > Our previous index was built on an older version of Lucene where such a
> > feature was absent, so we ran a sequential search per shard only…
> >
> > What is the general recommendation for Blur? Is it advisable to use the
> > SearchExecutor? What will happen when there are many parallel queries
> > for different shards? Will the SearchExecutor become a bottleneck?
> >
> > Any help is much appreciated...
> >
> > --
> > Ravi
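For reference, a minimal sketch of the pattern under discussion, assuming the Lucene 4.3 API linked above; the index path, field name, and thread-pool size are placeholders, not anything Blur-specific:

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ParallelSegmentSearchSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder index location, for illustration only.
    Directory dir = FSDirectory.open(new File("/path/to/index"));
    IndexReader reader = DirectoryReader.open(dir);

    // Pool sizing is workload-dependent; 8 is an arbitrary example.
    ExecutorService pool = Executors.newFixedThreadPool(8);

    // Passing an ExecutorService lets the TopDocs-returning search
    // methods fan out across the index's segments.
    IndexSearcher searcher = new IndexSearcher(reader, pool);

    // This call can search segments in parallel...
    TopDocs hits = searcher.search(new TermQuery(new Term("body", "blur")), 10);
    System.out.println("total hits: " + hits.totalHits);

    // ...whereas the Collector-based entry point walks segments
    // sequentially even when an executor is set, which is the
    // limitation discussed above.
    // searcher.search(query, someCollector);

    reader.close();
    pool.shutdown();
  }
}

Note that the pool is typically shared rather than created per searcher; one pool per shard on a server running many shards is where the oversubscription and bottleneck concerns raised above come from.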
