Reading more carefully, it could actually go either way: QUORUM requires
that a majority of replicas complete and acknowledge the write, but the
coordinator still attempts to write to all RF replicas (with the last
replica written either immediately or eventually via hints or repair).
So, in the scenario outlined, the write may or may not have made its way
to the third node by the time the first two replicas are lost. If there
is a replica on the third node, it can be recovered to the other two
nodes by either rebuild (actually replace) or repair.
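
To make the timing concrete, here is a minimal sketch using the DataStax
Python driver (the contact point, keyspace, and table names are
hypothetical):

from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1"])           # hypothetical contact point
session = cluster.connect("my_keyspace")  # hypothetical keyspace

# With RF=3, a QUORUM write returns success as soon as 2 of the 3
# replicas acknowledge it. The coordinator still sends the mutation to
# the third replica, which may apply it immediately, later via a hint,
# or only after a repair.
stmt = SimpleStatement(
    "INSERT INTO my_table (id, value) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(stmt, (1, "example"))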

Cheers
Ben



On Fri, 3 May 2019 at 09:33, Avinash Mandava <avin...@vorstella.com> wrote:

> In scenario 2 it's lost: if both nodes die and are replaced entirely,
> there's no record anywhere that the write ever happened, as it wouldn't
> be in the commit log, a memtable, or an SSTable on node 3. Surviving a
> failure where two nodes holding the same data fail simultaneously
> requires upping CL or RF, or spreading replicas across 3 racks, if the
> situation you're trying to avoid is rack failure (which I'm guessing it
> is from the question setup).
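>
> To illustrate the two knobs: RF is set per keyspace and CL per request.
> A hypothetical sketch with the Python driver (the keyspace, table, and
> DC name 'dc1' are assumptions):
>
> from cassandra.cluster import Cluster
> from cassandra import ConsistencyLevel
> from cassandra.query import SimpleStatement
>
> session = Cluster(["10.0.0.1"]).connect()  # hypothetical contact point
>
> # With a rack-aware snitch, NetworkTopologyStrategy places the 3
> # replicas on distinct racks where possible, so losing one rack costs
> # at most one replica.
> session.execute(
>     "CREATE KEYSPACE IF NOT EXISTS my_keyspace WITH replication = "
>     "{'class': 'NetworkTopologyStrategy', 'dc1': 3}"
> )
>
> # CL=ALL only acknowledges once every replica has the write, trading
> # availability for durability.
> stmt = SimpleStatement(
>     "INSERT INTO my_keyspace.my_table (id, value) VALUES (%s, %s)",
>     consistency_level=ConsistencyLevel.ALL,
> )
> session.execute(stmt, (1, "example"))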
>
> On Thu, May 2, 2019 at 2:25 PM Ben Slater <ben.sla...@instaclustr.com>
> wrote:
>
>> In scenario 2, if the row has been written to node 3, it will be
>> restored on the other nodes via rebuild or repair.
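>>
>> As a hypothetical operational sketch (keyspace name assumed): after
>> the failed nodes are replaced, a repair streams the surviving replica
>> from node 3 back to them.
>>
>> # Shelling out to nodetool from Python; equivalent to running
>> # "nodetool repair my_keyspace" on the replaced nodes.
>> import subprocess
>> subprocess.run(["nodetool", "repair", "my_keyspace"], check=True)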
>>
>>
>>
>> On Fri, 3 May 2019 at 00:54, Fd Habash <fmhab...@gmail.com> wrote:
>>
>>> C*: 2.2.8
>>>
>>> Write CL = LQ (LOCAL_QUORUM)
>>>
>>> Keyspace RF = 3
>>>
>>> Three racks
>>>
>>>
>>>
>>> A write is received by node 1 in rack 1 under the above configuration.
>>> Node 1 (rack 1) & node 2 (rack 2) acknowledge it to the client.
>>>
>>>
>>>
>>> Within some short window, nodes 1 & 2 die. Either ….
>>>
>>>    - Scenario 1: C* process death: the row did not make it to an
>>>    SSTable (it is in the commit log & was in the memtable).
>>>    - Scenario 2: Node death: the row may have made it to an SSTable,
>>>    but the nodes are gone (they will have to be replaced via
>>>    bootstrap).
>>>
>>>
>>>
>>> Scenario 1: The row is not lost because once C* is restarted, commit
>>> log replay should restore the mutation.
>>>
>>>
>>>
>>> Scenario 2: Is the row gone forever? If these two nodes are replaced
>>> via bootstrapping, will they ever get the row back from node 3
>>> (rack 3) if the write ever made it there?
>>>
>>>
>>>
>>>
>>>
>>> ----------------
>>> Thank you
>>>
>>>
>>>
>>
>
