Hi all,

I am using ES 1.0.1, and I am wondering how much free disk space 
Elasticsearch needs in order to run.

I ask because I ran into this error:

> [2014-03-26 03:30:52,713][WARN ][index.merge.scheduler    ] [Rick Jones] [qusion][1] failed to merge
> java.io.IOException: No space left on device
>     at java.io.RandomAccessFile.writeBytes0(Native Method)
>     at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
>     at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
>     at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:458)
>     at org.apache.lucene.store.RateLimitedFSDirectory$RateLimitedIndexOutput.flushBuffer(RateLimitedFSDirectory.java:102)
>     at org.apache.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:71)
>     at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
>     at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
>     at org.apache.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:86)
>     at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:126)
>     at org.apache.lucene.store.BufferedChecksumIndexOutput.close(BufferedChecksumIndexOutput.java:61)
>     at org.elasticsearch.index.store.Store$StoreIndexOutput.close(Store.java:602)
>     at org.apache.lucene.codecs.compressing.CompressingStoredFieldsIndexWriter.close(CompressingStoredFieldsIndexWriter.java:205)
>     at org.apache.lucene.util.IOUtils.close(IOUtils.java:140)
>     at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.close(CompressingStoredFieldsWriter.java:138)
>     at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:318)
>     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
>     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4071)
>     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3668)
>     at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>     at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
>     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> [2014-03-26 03:30:53,382][WARN ][index.engine.internal    ] [Rick Jones] [qusion][1] failed engine



Obviously, a merge needs some amount of free disk space while it runs, 
and I assume the larger the index, the more disk space the merge 
operation needs.
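
Related to this, I understand ES 1.x has disk-based shard allocation 
thresholds that stop it from filling a disk completely. A minimal 
elasticsearch.yml sketch, assuming your version ships the disk 
threshold decider (the percentages below are illustrative examples, 
not recommendations):

```yaml
# Assumption: the cluster.routing.allocation.disk.* settings are
# available in this 1.x release; example values only.
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 85%    # stop allocating new shards to this node
cluster.routing.allocation.disk.watermark.high: 90%   # start relocating shards off this node
```

This would not have prevented the merge itself from running out of 
space, but it keeps nodes from taking on more data when the disk is 
nearly full.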

Does anyone have an idea of how much that can be?
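
The way I understand it, a merge writes the merged segment alongside 
the old segments before deleting them, so in the worst case (e.g. an 
optimize down to a single segment) the transient requirement could be 
roughly the size of the whole index. A minimal sketch of that check — 
`merge_headroom_bytes`, `has_headroom`, and the path are my own 
illustrative names, not an Elasticsearch API:

```python
import shutil

# Illustrative rule of thumb (an assumption, not an Elasticsearch API):
# a merge rewrites its input segments into a new segment before the old
# ones are deleted, so the worst case (optimizing an index down to one
# segment) can transiently need free space about the size of the index.
def merge_headroom_bytes(index_size_bytes):
    return index_size_bytes  # worst case: whole index rewritten once

def has_headroom(data_path, index_size_bytes):
    """Check whether data_path has enough free space for a worst-case merge."""
    free = shutil.disk_usage(data_path).free
    return free >= merge_headroom_bytes(index_size_bytes)

# e.g. has_headroom("/var/lib/elasticsearch", 50 * 1024**3)
```

If that reasoning is right, keeping free space at least equal to the 
largest index on a node would be the safe budget — but I would like to 
hear what others actually provision.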

Ivan

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/272d3fc6-5dd9-4377-b847-bacbbc800fb1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
