[ https://issues.apache.org/jira/browse/CASSANDRA-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881806#action_12881806 ]

Kelvin Kakugawa commented on CASSANDRA-1072:
--------------------------------------------

Handling CL > ONE:
My thought would be to send the write to multiple nodes (but still for a single 
node id), and to have a node that isn't responsible for that node id still 
aggregate the counts it holds for that node id.  i.e. it would still help other 
nodes "catch up" to that node id's total count.  The big caveat is that the 
_initial_ write path would need to be special-cased.
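
Roughly, something like this (a simplified sketch only, not the patch's actual 
CounterContext code; the names here are made up): each node id maps to an 
aggregated total, fresh deltas are only applied by the node responsible for 
that node id, and any node can fold in another context by taking the 
per-node-id max, since a node id's aggregated total only grows.  That's the 
"catch up" part; the delta / initial write path is exactly what would need the 
special-casing.

{code}
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the idea, NOT the patch's CounterContext.
public class NodeCounts
{
    // node id -> aggregated total for that node id
    private final Map<String, Long> totals = new HashMap<String, Long>();

    // Initial write path: only the node responsible for nodeId should apply
    // fresh deltas to its own total.  Under CL > ONE this is the part that
    // needs special-casing, so a raw delta isn't mistaken for a total.
    public void applyLocalDelta(String nodeId, long delta)
    {
        Long current = totals.get(nodeId);
        totals.put(nodeId, (current == null ? 0L : current) + delta);
    }

    // Any node, even one not responsible for nodeId, can fold in another
    // context.  A node id's aggregated total only grows, so taking the
    // per-node-id max lets lagging nodes catch up without double-counting.
    public void mergeFrom(NodeCounts other)
    {
        for (Map.Entry<String, Long> e : other.totals.entrySet())
        {
            Long current = totals.get(e.getKey());
            if (current == null || current < e.getValue())
                totals.put(e.getKey(), e.getValue());
        }
    }

    // Client-visible value: the sum across all node ids.
    public long total()
    {
        long sum = 0L;
        for (Long v : totals.values())
            sum += v;
        return sum;
    }
}
{code}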

thrift API:
Yes, you're right.  And, right now, the binary context is primarily useful for 
debugging.  The thrift interface isn't very explicit.  It's necessary for 
vector clocks (CASSANDRA-580), but not very useful for 1072 / 1210.

RRR.resolveSuperset():
As you suspected, it's not an efficiency-motivated modification.  If cloneMe() 
is used, there's the potential to aggregate a given node id's counts an extra 
time (from that initial cloneMe() call).
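
To make the hazard concrete (a contrived sketch with made-up names, not the 
actual ReadResponseResolver / resolveSuperset() code): if the superset is 
seeded with a cloneMe()-style copy of the first context and then every 
context, including the first one, is folded in with a sum-style merge, that 
first context's counts for a node id get aggregated an extra time.  Starting 
from an empty context avoids it.

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Contrived illustration only; hypothetical names, not the resolver code.
public class ResolveSketch
{
    // Sum-style aggregation of one context's per-node-id counts into another.
    static void aggregate(Map<String, Long> into, Map<String, Long> from)
    {
        for (Map.Entry<String, Long> e : from.entrySet())
        {
            Long current = into.get(e.getKey());
            into.put(e.getKey(), (current == null ? 0L : current) + e.getValue());
        }
    }

    // Hazardous pattern: seed with a copy (the "cloneMe()") of the first
    // context, then aggregate every context in -- the first context's counts
    // for each node id are applied an extra time.
    static Map<String, Long> resolveWithClone(List<Map<String, Long>> contexts)
    {
        Map<String, Long> superset = new HashMap<String, Long>(contexts.get(0));
        for (Map<String, Long> ctx : contexts)
            aggregate(superset, ctx);
        return superset;
    }

    // Start from an empty context instead, so each input is aggregated
    // exactly once.
    static Map<String, Long> resolveFromEmpty(List<Map<String, Long>> contexts)
    {
        Map<String, Long> superset = new HashMap<String, Long>();
        for (Map<String, Long> ctx : contexts)
            aggregate(superset, ctx);
        return superset;
    }
}
{code}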

1210 inclusion:
It's a relatively distinct extension, so we figured we could wait for 1072 to 
go through first.

coding style--avoiding else:
Yeah, it's personal preference.  I hate indents, so I consciously avoid else.
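
A toy example of what I mean (not from the patch): early returns instead of 
else keep the main path unindented.

{code}
// Toy example only: early returns instead of if/else nesting.
public class StyleSketch
{
    static String describe(long count)
    {
        if (count < 0)
            return "negative";
        if (count == 0)
            return "zero";
        return "positive";  // no else, no extra indent
    }
}
{code}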

> Increment counters
> ------------------
>
>                 Key: CASSANDRA-1072
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1072
>             Project: Cassandra
>          Issue Type: Sub-task
>          Components: Core
>            Reporter: Johan Oskarsson
>            Assignee: Kelvin Kakugawa
>         Attachments: CASSANDRA-1072-2.patch, CASSANDRA-1072.patch
>
>
> Break the increment counters out of CASSANDRA-580. Classes are shared 
> between the two features, but without the plain version vector code the 
> changeset becomes smaller and more manageable.
