https://issues.apache.org/jira/browse/HBASE-17200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15707108#comment-15707108

Gary Helmling commented on HBASE-17200:
---------------------------------------

Yeah, we saw the same thing in HBASE-16752.  There [~ashu210890] fixed the 
error handling so that the problem would be clearer to the client (the 
replication source), but I agree that explicitly documenting this would be good.

> Document an interesting implication of HBASE-15212
> --------------------------------------------------
>
>                 Key: HBASE-17200
>                 URL: https://issues.apache.org/jira/browse/HBASE-17200
>             Project: HBase
>          Issue Type: Bug
>          Components: documentation, Operability, Replication
>            Reporter: Andrew Purtell
>            Priority: Minor
>
> We had a Phoenix client application that was unfortunately batching up 1000 
> rows at a time. Phoenix bundles mutations considering only the row count, not 
> the byte count (see PHOENIX-541), so this led to some *single WALEdits* in 
> excess of 600 MB. A cluster without max RPC size enforcement accepted them. 
> (That may be something we should fix - WALEdits that large are crazy.) Then 
> replication workers attempting to ship the monster edits from this cluster to 
> a remote cluster recently upgraded with RPC size enforcement active saw all 
> of their RPC attempts rejected, because the default limit is 256 MB. 
> This is an edge case, but I can see it happening in practice and taking users 
> by surprise, most likely when replicating between mixed versions. We should 
> document this in the troubleshooting section. 
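
As an operational stopgap while oversized edits drain from the replication 
queue, the cap on the receiving cluster could be raised. A sketch, assuming 
the RPC request size property introduced by HBASE-15212 
(hbase.ipc.max.request.size, default 256 MB, value in bytes):

```xml
<!-- hbase-site.xml on the receiving cluster's RegionServers.
     Assumes the hbase.ipc.max.request.size property from HBASE-15212;
     the value is in bytes. 1073741824 (1 GB) would admit the ~600 MB
     WALEdits described above. A RegionServer restart is required for
     the change to take effect. -->
<property>
  <name>hbase.ipc.max.request.size</name>
  <value>1073741824</value>
</property>
```

Raising the cap only masks the root cause; the durable fix is byte-aware 
batching on the client side (PHOENIX-541).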



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
