On Fri, Jul 11, 2014 at 7:35 PM, Kireet Reddy <[email protected]> wrote:

> The problem reappeared. We did some tests today around copying a large
> file on the nodes to test I/O throughput. On the loaded node, the copy was
> really slow, maybe 30x slower. So it seems your suspicion that something
> external was interfering with I/O was right after all, even though nothing
> else is running on the machines. We will investigate our setup further, but
> this doesn't seem like a Lucene/Elasticsearch issue in the end.
>

Hmm, but ES was still running on the node?  So it could still be something
about ES/Lucene that's putting heavy I/O load on the box?


> For the index close, I didn't issue any command; Elasticsearch seemed to
> do that on its own. The code is in IndexingMemoryController. The triggering
> event seems to be the RAM buffer size change, which triggers a call to
> InternalEngine.updateIndexingBufferSize(); that in turn calls flush with type
> NEW_WRITER, which seems to close the Lucene IndexWriter.
>

Ahh, thanks for the clarification. Yes, ES sometimes closes & opens a new
writer to make "non-live" settings changes take effect. However, changing the
RAM buffer size for indexing is a live setting, so it should not require the
close/open, yet indeed (in InternalEngine.updateIndexingBufferSize) it does ...
I'll dig.
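
In case it helps anyone following along, here is a minimal sketch (against a
recent Lucene API, not the actual InternalEngine code path) of what "live"
means here: the indexing RAM buffer can be resized on an already-open
IndexWriter through its live config, with no close/open of the writer. The
class name and directory path below are just placeholders.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import java.nio.file.Paths;

    public class LiveRamBufferSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder directory; any Directory implementation works.
            Directory dir = FSDirectory.open(Paths.get("/tmp/live-ram-buffer-sketch"));
            IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

            // RAM buffer size is a "live" setting: changing it on the open
            // writer's config takes effect for subsequent indexing without
            // closing and re-opening the IndexWriter.
            writer.getConfig().setRAMBufferSizeMB(128.0);

            writer.close();
            dir.close();
        }
    }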

Mike McCandless

http://blog.mikemccandless.com

