Do you really need exactly one segment? Or would, say, 5 be good enough?
You can see where this is going: set maxSegments to 5 and you may be able
to get some parallelization...
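
A minimal sketch of what that could look like via IndexWriter.forceMerge
(the index path and analyzer below are placeholders, and the default
ConcurrentMergeScheduler is assumed), just to illustrate the idea:

    import java.nio.file.Paths;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class ForceMergeToFive {
      public static void main(String[] args) throws Exception {
        // Placeholder path -- point this at your existing index.
        try (Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
             IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
          // Instead of forceMerge(1), allow up to 5 segments so the merge
          // scheduler has several smaller merges it can run concurrently,
          // rather than one big single-threaded N-to-1 merge.
          writer.forceMerge(5);
          writer.commit();
        }
      }
    }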

On Fri, Nov 2, 2018, 14:17 Dawid Weiss <dawid.we...@gmail.com> wrote:

> Thanks for chipping in, Toke. A ~1TB index is impressive.
>
> Back of the envelope says reading & writing 900GB in 8 hours is
> 2*900GB/(8*60*60s) = 64MB/s. I don't remember the interface for our
> SSD machine, but even with SATA II this is only ~1/5th of the possible
> fairly sequential IO throughput. So for us at least, NVMe drives are
> not needed to have single-threaded CPU as bottleneck.
>
> The mileage will vary depending on the CPU -- if it can merge the data
> from multiple files at once fast enough then it may theoretically
> saturate the bandwidth... but I agree we also seem to be CPU bound on
> these N-to-1 merges; a regular SSD is enough.
>
> > And +1 to the issue BTW.
>
> I agree. Finer granularity here would be a win even in the
> regular "merge is a low-priority citizen" case. At least that's what I
> tend to think. And if there are spare CPUs, the gain would be
> terrific.
>
> Dawid
>
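
For reference, a quick sketch of the back-of-envelope estimate quoted above
(the 900 GB and 8-hour figures come from that message; the ~300 MB/s SATA II
ceiling is an assumption used only for comparison):

    public class MergeThroughput {
      public static void main(String[] args) {
        // Assumed figures from the quoted message: ~900 GB read plus
        // ~900 GB written during an 8-hour forced merge.
        double gbMoved = 2 * 900;              // read + write, in GB
        double seconds = 8 * 60 * 60;          // 8 hours
        double mbPerSec = gbMoved * 1024 / seconds;  // 1 GB = 1024 MB
        // Prints ~64 MB/s -- roughly a fifth of the ~300 MB/s a SATA II
        // link can sustain for sequential IO.
        System.out.printf("~%.0f MB/s%n", mbPerSec);
      }
    }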