Ahh, thanks for pointing that out! 

Let's move this conversation to the GitHub issue, as I think we can be more 
productive there:

https://github.com/elasticsearch/elasticsearch/issues/7572#issuecomment-60350681

On Thursday, October 23, 2014 11:06:06 PM UTC-7, Shikhar Bhushan wrote:
>
> Yes, this is the 2nd issue I mentioned, where ES will pick basically any 
> replica as primary without regard for which one might be more 'up-to-date'.
>
> On Fri, Oct 24, 2014 at 3:57 AM, Evan Tahler <[email protected]> wrote:
>
>> Interesting!
>>
>> However, the write *may not* be the cause of the data loss here.  Even if 
>> there were no writes while A and B were down, would the recovery process 
>> have happened the same way?  In some further tests, it still looks like C 
>> would have overwritten all the data in A and B when they rebooted.
>>
>> This type of error is easily triggered by garbage collection on large data 
>> sets making a server unresponsive for too long (perhaps the cluster kicks 
>> out the unresponsive node, or a supervisor restarts the application).
>>
>> On Thursday, October 23, 2014 12:59:00 PM UTC-7, Shikhar Bhushan wrote:
>>>
>>> Very interesting. The default 'write consistency level' with Elasticsearch 
>>> is QUORUM, i.e. verify that a quorum of a shard's copies is available 
>>> before processing a write for it. In this case you were left with just 1 
>>> copy, C, and a write happened. So you would think it should not go through, 
>>> since 2 copies would be required for a quorum. However: 
>>> https://github.com/elasticsearch/elasticsearch/issues/6482. I think this 
>>> goes to show it is a real, not hypothetical, problem!
>>>
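>>> To make the mechanism concrete, here is a rough sketch of passing a 
>>> stricter consistency level per request (Python with the requests library 
>>> against the 1.x REST API; the index, type, and document names are made up, 
>>> and per the issue above even this check may not actually reject the write):
>>>
>>>     import requests
>>>
>>>     doc = {"user": "evan", "message": "hello"}  # made-up document
>>>     resp = requests.put(
>>>         "http://localhost:9200/myindex/mytype/1",
>>>         params={"consistency": "all"},  # default is "quorum"
>>>         json=doc,
>>>     )
>>>     print(resp.status_code, resp.json())
>>>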
>>> But guess what? *Even if this were fixed, and a write to C never happened*, 
>>> it is still possible that once A & B were back, C could be picked as 
>>> primary and clobber their data. See: 
>>> https://github.com/elasticsearch/elasticsearch/issues/7572#issuecomment-59983759
>>>
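>>> If you want to watch which copy ends up as the primary after nodes rejoin, 
>>> the cat shards API is handy (a diagnostic sketch only; it shows the current 
>>> primary, not which copy was the most up-to-date):
>>>
>>>     import requests
>>>
>>>     # One row per shard copy: the "prirep" column is "p" for primary and
>>>     # "r" for replica, alongside the doc count and the node holding it.
>>>     print(requests.get("http://localhost:9200/_cat/shards?v").text)
>>>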
>>> On Thu, Oct 23, 2014 at 7:48 PM, Evan Tahler <[email protected]> wrote:
>>>
>>>> Bump?  I would love to hear some thoughts on this flow, and any 
>>>> suggestions on how to mitigate it (other than replicating all data to all 
>>>> nodes).
>>>>
>>>> Thanks! 
>>>>
>>>>
>>>> On Tuesday, October 14, 2014 3:52:31 PM UTC-7, Evan Tahler wrote:
>>>>>
>>>>> Hi Mailing List!  I'm a first-time poster and a long-time reader.
>>>>>
>>>>> We recently had a crash in our ES (1.3.1 on Ubuntu) cluster which caused 
>>>>> us to lose a significant volume of data.  I have a "theory" about what 
>>>>> happened, and I would love to hear your opinions on it, and any 
>>>>> suggestions you have to mitigate it.
>>>>>
>>>>> Here is a simplified play-by-play:
>>>>>
>>>>>
>>>>>    1. The cluster has 3 data nodes: A, B, and C.  The index has 10 shards 
>>>>>    and a replica count of 1, so A holds the primary copies and B holds 
>>>>>    the replicas.  C is doing nothing.  Re-allocation of indexes/shards is 
>>>>>    enabled.  (A sketch of this setup follows the list.)
>>>>>    2. A crashes.  B's copies are promoted to primary, and B starts 
>>>>>    transferring data to C as a new replica.
>>>>>    3. B crashes.  C is now the primary, with an incomplete dataset.
>>>>>    4. There is a write to the index.
>>>>>    5. A and B finally reboot and are told that they are now stale (as C 
>>>>>    took a write while they were away).  Both A and B delete their local 
>>>>>    data.  A is chosen as the new replica and re-syncs from C.
>>>>>    6. ... all the data A and B had which C never got is lost forever.
>>>>>
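>>>>> For concreteness, the setup in step 1 amounts to something like the 
>>>>> following (a sketch with a made-up index name, using Python and the 
>>>>> requests library against the REST API):
>>>>>
>>>>>     import requests
>>>>>
>>>>>     # 10 shards, 1 replica, as described in step 1 above.
>>>>>     settings = {"settings": {"number_of_shards": 10,
>>>>>                              "number_of_replicas": 1}}
>>>>>     resp = requests.put("http://localhost:9200/myindex", json=settings)
>>>>>     print(resp.json())
>>>>>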
>>>>> Is the above scenario possible?  If it is, would a better default for ES 
>>>>> be to refuse to reallocate in this situation?  That would have caused the 
>>>>> write in step #4 to fail, but in our use case that is preferable to data 
>>>>> loss.
>>>>>
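>>>>> (For reference, the closest knob I'm aware of is disabling shard 
>>>>> allocation cluster-wide via the dynamic settings API, sketched below; 
>>>>> that helps around planned restarts but would not have saved us in the 
>>>>> crash sequence above.)
>>>>>
>>>>>     import requests
>>>>>
>>>>>     # Temporarily stop the cluster from allocating shard copies to nodes.
>>>>>     body = {"transient": {"cluster.routing.allocation.enable": "none"}}
>>>>>     requests.put("http://localhost:9200/_cluster/settings", json=body)
>>>>>
>>>>>     # ... perform maintenance / restarts ...
>>>>>
>>>>>     # Re-enable allocation afterwards.
>>>>>     body = {"transient": {"cluster.routing.allocation.enable": "all"}}
>>>>>     requests.put("http://localhost:9200/_cluster/settings", json=body)
>>>>>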
>>>
>
>
