[ https://issues.apache.org/jira/browse/CASSANDRA-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12706555#action_12706555 ]

Sandeep Tata commented on CASSANDRA-140:
----------------------------------------

Yes, agreed.

Here are the options we have for writes and reads (assuming replication = 3):

non-blocking write: Send 3 messages, return immediately
blocking write: Send 3 messages, wait for a quorum (2) to respond *after* 
applying the writes
block-on-k write: Send 3 messages, wait for k to respond *after* applying the 
writes (see the sketch after this list)
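
To make the block-on-k family concrete, here's a minimal Java sketch (the 
Endpoint type and sendWrite() callback are hypothetical stand-ins, not 
Cassandra's actual messaging layer): the coordinator sends to every replica 
and blocks on a latch of k acks. k = 0 gives the non-blocking write, k = 2 
the quorum write, and k = 3 block-on-all.

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class BlockOnKWriter {
        // Hypothetical replica abstraction for this sketch only.
        interface Endpoint {
            // Sends the mutation; onAck runs only *after* the replica
            // has applied the write.
            void sendWrite(byte[] mutation, Runnable onAck);
        }

        // Send to all replicas, wait for k acks (or time out).
        public boolean write(List<Endpoint> endpoints, byte[] mutation,
                             int k, long timeoutMs) throws InterruptedException {
            CountDownLatch acks = new CountDownLatch(k);
            for (Endpoint e : endpoints)
                e.sendWrite(mutation, acks::countDown);
            // k = 0 returns immediately (the non-blocking write).
            return acks.await(timeoutMs, TimeUnit.MILLISECONDS);
        }
    }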

Other options:

block-on-1st-endpoint: Send 3 messages, wait for the *first* endpoint to 
respond *after* applying the writes
block-on-1st-if-local: Send 3 messages; if one of the endpoints is local, wait 
for the local endpoint to respond *after* applying the writes (can be faster 
than the previous option if the client connects appropriately => you get 
session consistency more cheaply because weak reads are served locally; 
sketched below)
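
And a sketch of block-on-1st-if-local in the same style (again with a 
hypothetical Endpoint type, here extended with an isLocal() check): wait only 
for the local replica's ack when one exists, and degrade to the non-blocking 
write otherwise.

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class LocalFirstWriter {
        // Hypothetical replica abstraction for this sketch only.
        interface Endpoint {
            boolean isLocal();
            void sendWrite(byte[] mutation, Runnable onAck);
        }

        public boolean write(List<Endpoint> endpoints, byte[] mutation,
                             long timeoutMs) throws InterruptedException {
            CountDownLatch localAck = new CountDownLatch(1);
            boolean haveLocal = false;
            for (Endpoint e : endpoints) {
                if (!haveLocal && e.isLocal()) {
                    haveLocal = true;
                    e.sendWrite(mutation, localAck::countDown); // block on this ack
                } else {
                    e.sendWrite(mutation, () -> {});            // fire and forget
                }
            }
            // With no local replica this is just the non-blocking write.
            return !haveLocal || localAck.await(timeoutMs, TimeUnit.MILLISECONDS);
        }
    }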

block-on-1st-endpoint won't give you session consistency because the 1st 
endpoint *may* change between a write and a read if there's been a failure: 
e.g., endpoint A acks the write, A fails, and the next read is served by an 
endpoint that hasn't applied that write yet. Since the failure is transparent 
to the client's session, the client may read a stale value.

This is, of course, completely uninteresting to an app that doesn't need 
session-level read-your-writes.

> allow user to specify how many nodes to block for on reads and writes
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-140
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-140
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Jonathan Ellis
>             Fix For: 0.4
>
>
> currently you only have block for zero (or one, on reads) or quorum.  block 
> for one (on writes), and all are also useful values.  allow user to specify 
> this as a number.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
