[ 
https://issues.apache.org/jira/browse/KUDU-3023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Serbin updated KUDU-3023:
--------------------------------
    Labels: Availability consistency scalability  (was: Availability 
consistency maintainability scalability)

> Enforce consistency between maximum RPC size and maximum size of transaction
> ----------------------------------------------------------------------------
>
>                 Key: KUDU-3023
>                 URL: https://issues.apache.org/jira/browse/KUDU-3023
>             Project: Kudu
>          Issue Type: Improvement
>          Components: master, tserver
>            Reporter: Alexey Serbin
>            Assignee: Alexey Serbin
>            Priority: Major
>              Labels: Availability, consistency, scalability
>
> It seems that, in the absence of any constraint tying the 
> {{--rpc_max_message_size}} and {{--tablet_transaction_memory}} flags together, 
> there might be a transaction that can get into the WALs of tablet replicas, 
> but that no replica can apply.
> It would be nice to clarify this and, if necessary and possible, establish a 
> group flag validator to prevent such situations from happening.
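> As a minimal standalone sketch of the kind of cross-flag check such a group 
> validator could perform (flag names taken from this issue; the envelope 
> overhead constant and the registration mechanism in Kudu's codebase are 
> assumptions, not measured values):

```cpp
#include <cstdint>

// Hypothetical cross-flag consistency check: a transaction must fit into a
// single consensus update RPC, so the per-tablet transaction memory limit
// should not exceed the maximum RPC message size minus some envelope
// overhead. kAssumedEnvelopeOverhead is an illustrative placeholder.
constexpr int64_t kAssumedEnvelopeOverhead = 4096;

bool ValidateTxnMemoryVsMaxRpcSize(int64_t rpc_max_message_size,
                                   int64_t tablet_transaction_memory) {
  return tablet_transaction_memory + kAssumedEnvelopeOverhead <=
         rpc_max_message_size;
}
```

> In Kudu's actual flag infrastructure the check would run at startup and fail 
> fast on an inconsistent combination, rather than letting an unappliable 
> transaction reach the WAL.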
> A few other points to clarify:
> * What if the data in the RPC message that a client sends to the leader 
> replica is compressed?  Can we guarantee that the corresponding Raft update 
> sent to followers after persisting the data in the leader's WAL is compressed 
> as well?
> * How much cruft is added by converting the incoming data into a WAL entry 
> and then into a Raft update RPC message?  Is it possible for a leader replica 
> to accept an update on the tablet because it's under the max RPC size limit, 
> but be unable to push the corresponding Raft update message to follower 
> replicas because of extra cruft added when converting the accepted 
> {{WriteRequestPB}} into a {{ConsensusRequestPB}} containing a {{ReplicateMsg}} 
> with the embedded original {{WriteRequestPB}}? 
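> To make the second question concrete, here is a back-of-envelope sketch of 
> the length-prefix overhead when a {{WriteRequestPB}} of a given size is 
> embedded as a nested protobuf message through additional envelope layers. 
> The per-level cost model (one field tag byte plus a varint length prefix) is 
> an illustrative assumption about protobuf framing, not a measurement of 
> Kudu's actual wire format:

```cpp
#include <cstdint>

// Number of bytes needed to encode v as a protobuf varint.
int VarintLen(uint64_t v) {
  int n = 1;
  while (v >= 0x80) {
    v >>= 7;
    ++n;
  }
  return n;
}

// Estimated size after wrapping a payload of `payload` bytes in
// `nesting_levels` nested protobuf messages, counting only the field tag
// and length prefix added at each level.
int64_t EstimateWrappedSize(int64_t payload, int nesting_levels) {
  int64_t size = payload;
  for (int i = 0; i < nesting_levels; ++i) {
    size += 1 /* field tag */ + VarintLen(size) /* length prefix */;
  }
  return size;
}
```

> Under this model, wrapping a ~50 MiB payload twice adds only about ten bytes 
> of framing, which suggests that any significant inflation would come from the 
> other fields of the envelope messages (op ids, timestamps, batching of 
> multiple ops) rather than from the nesting itself; that is exactly what would 
> need to be verified against the real messages.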



--
This message was sent by Atlassian Jira
(v8.3.4#803005)