[ https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558633#comment-16558633 ]

Alex Petrov edited comment on CASSANDRA-14405 at 7/26/18 5:23 PM:
------------------------------------------------------------------

First of all, awesome work Ariel and Blake. The patches look great, this whole 
project looks extremely impressive, and it was a big pleasure to review.

I've pushed a reviewed branch with some changes and fixes 
[here|https://github.com/aweisberg/cassandra/pull/3] and am waiting for a CI run. 
I've also written quite a few dtests; they can be found 
[here|https://github.com/ifesdjeen/cassandra-dtest/tree/transient-rw-v1].

I also have some more general remarks:
  * I'd propose to leave everything repair-related out of this patch for now, 
such as the changes to 
[PendingRepairManager|https://github.com/aweisberg/cassandra/pull/3/files#diff-93e6fa14f908d0ce3c24d56fbf484ba3R33].
 We'll need to do even more repair-related work, on both the streaming and 
cleanup sides, and we'll be able to test it better if we concentrate on one 
thing at a time.
  * with the way {{AbstractWriteResponseHandler}} is currently implemented, the 
indirection where {{getInitialRecipients}} initialises a speculation context that 
is later used in {{maybeTryAdditionalReplicas}} is difficult to see from the API. 
I understand that the response handler is currently an abstract class, which is 
most likely why it was not possible to implement this in a more direct way, e.g. 
in a wrapper class similar to how batches are done in storage proxy. I'd try to 
remove the hidden dependency between the two methods or abstract them into an 
independent entity (see the sketch after this list).
  * we need more dtests for speculative execution on the read path.
  * it seems that the {{BlockingReadRepairs}} class might be redundant. The 
read repair hierarchy is already quite complex, and having an extra class with a 
similar name might make it harder to find things.
  * we now have a mechanism for sending more data requests, 
{{maybeSendAdditionalDataRequests}}, and later we can also send additional 
repairs in {{maybeSendAdditionalRepairs}}. In both cases we take more candidate 
replicas from the ones we haven't yet contacted and send requests to them 
(sketched below). Since this doesn't seem to be transient-replication-related: 
why do we need this mechanism, and why can't we rely on the previously existing 
repair paths?

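To make the {{AbstractWriteResponseHandler}} point concrete, here is a very rough 
sketch of what I mean by an independent entity. The names ({{SpeculationContext}}, 
{{initialRecipients}}, {{additionalReplicas}}) are made up for illustration and 
are not the API in the patch:

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Illustrative only: names and shape are hypothetical, not the patch API.
final class SpeculationContext<R>
{
    private final List<R> initial;    // replicas contacted up front
    private final Deque<R> heldBack;  // candidates kept aside for speculation

    SpeculationContext(List<R> candidates, int initialCount)
    {
        int n = Math.min(initialCount, candidates.size());
        this.initial = Collections.unmodifiableList(new ArrayList<>(candidates.subList(0, n)));
        this.heldBack = new ArrayDeque<>(candidates.subList(n, candidates.size()));
    }

    /** Replicas the request is sent to immediately. */
    List<R> initialRecipients()
    {
        return initial;
    }

    /** Additional replicas to try when responses are slow, drained from the held-back pool. */
    List<R> additionalReplicas(int count)
    {
        List<R> extra = new ArrayList<>(count);
        while (extra.size() < count && !heldBack.isEmpty())
            extra.add(heldBack.poll());
        return extra;
    }
}
{code}

The handler (or a wrapper in storage proxy) would hold a single such object, so 
the link between the initial selection and later speculation is explicit rather 
than hidden across two methods of the abstract class.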

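For the last bullet, the pattern I'm describing is, as I read it, roughly the 
following (again, names and collections are hypothetical, just to illustrate the 
idea):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of "take candidates we haven't contacted yet".
final class AdditionalRequests
{
    static <R> List<R> nextUnseenCandidates(List<R> candidates, Set<R> contacted, int count)
    {
        List<R> next = new ArrayList<>(count);
        for (R replica : candidates)
        {
            if (next.size() == count)
                break;
            if (!contacted.contains(replica))
                next.add(replica);
        }
        return next;
    }
}
{code}

which is why I'm wondering whether the existing repair paths couldn't already 
cover this case.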

> Transient Replication: Metadata refactor
> ----------------------------------------
>
>                 Key: CASSANDRA-14405
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
>             Project: Cassandra
>          Issue Type: Sub-task
>          Components: Core, Distributed Metadata, Documentation and Website
>            Reporter: Ariel Weisberg
>            Assignee: Blake Eggleston
>            Priority: Major
>             Fix For: 4.0
>
>
> Add support to CQL and NTS for configuring keyspaces to have transient 
> replicas.
> Add syntax allowing a keyspace using NTS to declare some replicas in each DC 
> as transient.
> Implement metadata internal to the DB so that it's possible to identify what 
> replicas are transient for a given token or range.
> Introduce Replica, which is an InetAddressAndPort plus a boolean indicating 
> whether the replica is transient, and ReplicatedRange, which is a wrapper 
> around a Range that indicates whether the range is transient.
> Block altering of keyspaces to use transient replication if they already 
> contain MVs or 2i.
> Block the creation of MV or 2i in keyspaces using transient replication.
> Block the creation/alteration of keyspaces using transient replication if the 
> experimental flag is not set.
> Update web site, CQL spec, and any other documentation for the new syntax.


