Another data point: a replica shard that's trying to initialize keeps 
growing past the size of its primary counterpart. 

master:   1.3G    1
copy #1:  24G     1
copy #2:  23G     1

Total index size is 6.28G; something is not right here...
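For anyone comparing, this is how I'm checking reported vs. on-disk sizes (the index name and data path below are placeholders, not the actual ones from this cluster):

```shell
# Per-shard sizes as Elasticsearch reports them (index name is a placeholder):
curl -s 'localhost:9200/_cat/shards/my-index-2014.08.29?v'

# Per-shard recovery progress, to see whether the replica is still copying files:
curl -s 'localhost:9200/_cat/recovery/my-index-2014.08.29?v'

# On each data node, the on-disk footprint of shard 1 (path is a placeholder):
du -sh /var/lib/elasticsearch/elasticsearch/nodes/0/indices/my-index-2014.08.29/1
```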

On Wednesday, September 3, 2014 1:10:30 PM UTC-7, David Kleiner wrote:
>
> Thanks Mark, it's a SATA RAID5 volume with an ext4 fs, mounted with the 
> following options:
>
> /dev/sdb1 on /acc type ext4 
> (rw,noatime,data=writeback,barrier=0,nobh,errors=remount-ro)
>
> and journal enabled.   
>
> Perhaps I'm being too aggressive in squeezing performance out of this fs? 
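> For comparison, a more conservative fstab line I could fall back to (a 
> sketch only; same device and mountpoint, with the ext4 defaults restored 
> for the risky options):
>
> ```
> /dev/sdb1  /acc  ext4  rw,noatime,data=ordered,barrier=1,errors=remount-ro  0  2
> ```
>
> data=writeback and barrier=0 trade crash safety for write speed, and a 
> torn write on power loss could plausibly corrupt a Lucene segment file.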
>
> On Tuesday, September 2, 2014 10:03:21 PM UTC-7, Mark Walkom wrote:
>>
>> Have you checked your hardware status, as the error suggests? I'd also do 
>> an FS check to be safe.
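>> For example (a sketch; device names are assumed from your mount line, 
>> smartctl may need to be pointed at the RAID controller instead, and fsck 
>> must only run on an unmounted volume):
>>
>> ```shell
>> smartctl -H -a /dev/sdb    # SMART health status and error log for the disk
>> umount /acc
>> fsck.ext4 -f /dev/sdb1     # forced full check of the ext4 filesystem
>> ```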
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: [email protected]
>> web: www.campaignmonitor.com
>>  
>>
>> On 3 September 2014 14:58, David Kleiner <[email protected]> wrote:
>>
>>> Greetings,
>>>
>>> I tried to work around a slowly recovering replica set by setting the 
>>> number of replicas on the index to 0 and then back to 1, and I'm getting 
>>> this exception:
>>>
>>>
>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>>> [2014-09-02 23:51:59,738][WARN ][indices.recovery         ] [Salvador 
>>> Dali] [...-2014.08.29][1] File corruption on recovery name 
>>> [_40d_es090_0.pos], length [11345418], checksum [ekoi4m], writtenBy 
>>> [LUCENE_4_9] local checksum OK
>>> org.apache.lucene.index.CorruptIndexException: checksum failed (hardware 
>>> problem?) : expected=ekoi4m actual=1pdwf09 (resource=name 
>>> [_40d_es090_0.pos], length [11345418], checksum [ekoi4m], writtenBy 
>>> [LUCENE_4_9])
>>>         at 
>>> org.elasticsearch.index.store.Store$VerifyingIndexOutput.readAndCompareChecksum(Store.java:684)
>>>         at 
>>> org.elasticsearch.index.store.Store$VerifyingIndexOutput.writeBytes(Store.java:696)
>>>         at 
>>> org.elasticsearch.indices.recovery.RecoveryTarget$FileChunkTransportRequestHandler.messageReceived(RecoveryTarget.java:589)
>>>         at 
>>> org.elasticsearch.indices.recovery.RecoveryTarget$FileChunkTransportRequestHandler.messageReceived(RecoveryTarget.java:533)
>>>         at 
>>> org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
>>>         at 
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at 
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>>
>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>
>>> Any pointers? 
>>>
>>>
>>>  -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to [email protected].
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/cba135a4-7838-4ad5-b56c-439823f7653b%40googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>

