[ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049009#comment-15049009 ]

stack commented on HBASE-14004:
-------------------------------

Not sure I understand the question. Replication reads the WAL in order and 
sends everything it has read, in order, to the remote cluster. When it gets to 
the remote side, it should all be 'applied' in order. How do we get your 
scenarios #2, #3, etc., above?
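
For illustration, a minimal sketch of that ordering contract, assuming a 
hypothetical single-threaded replication source; WalEntry, readNextBatch and 
shipToPeer are illustrative stand-ins, not the actual HBase classes:

{code:java}
import java.util.List;

// Hypothetical sketch: a replication source tails the WAL and ships edits to
// the peer cluster strictly in the order they were read, so the peer applies
// them in the same order they were written locally.
public class OrderedWalShipper {

    // Illustrative stand-in for a single WAL edit.
    static final class WalEntry {
        final long sequenceId;
        final byte[] edit;
        WalEntry(long sequenceId, byte[] edit) {
            this.sequenceId = sequenceId;
            this.edit = edit;
        }
    }

    interface WalReader {
        // Returns the next batch of entries in WAL order, empty at end of file.
        List<WalEntry> readNextBatch();
    }

    interface PeerSink {
        // Applies the batch on the remote cluster, replaying entries in order.
        void shipToPeer(List<WalEntry> batch);
    }

    // Read in order, ship in order: nothing is reordered between the two calls.
    static void replicate(WalReader reader, PeerSink sink) {
        List<WalEntry> batch;
        while (!(batch = reader.readNextBatch()).isEmpty()) {
            sink.shipToPeer(batch);
        }
    }
}
{code}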

bq. In the write path, some entries are hflushed but not acked, so we close the 
old WAL at the acked length and try to write these entries into the new WAL, 
then the RS crashes. The slave may have already replicated these entries, but 
the RS in the master cluster will lose them after recovery, right?

Isn't this essentially the description that leads off this JIRA?
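
For concreteness, a hypothetical simulation of the crash window in the quoted 
question (none of these names come from the HBase code base): edits are 
hflushed to the old WAL, only a prefix of them is acked, the old WAL is closed 
at the acked length, and the RS crashes before the unacked tail is durable in 
the new WAL, so the master recovers fewer edits than the slave may hold.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation of the divergence described in the question above.
public class AckedLengthCrashDemo {
    public static void main(String[] args) {
        // Edits 1..5 were hflushed to the old WAL, but only 1..3 were acked.
        List<Integer> hflushed = List.of(1, 2, 3, 4, 5);
        int ackedCount = 3;

        // Master: the old WAL is closed at the acked length; edits 4 and 5 are
        // queued for the new WAL, but the RS crashes before they persist.
        List<Integer> masterAfterRecovery =
            new ArrayList<>(hflushed.subList(0, ackedCount));

        // Slave: replication may have shipped everything that was hflushed
        // before the crash, so it can hold all five edits.
        List<Integer> slave = new ArrayList<>(hflushed);

        System.out.println("master after recovery: " + masterAfterRecovery); // [1, 2, 3]
        System.out.println("slave:                 " + slave);               // [1, 2, 3, 4, 5]
    }
}
{code}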

> [Replication] Inconsistency between Memstore and WAL may result in data in 
> remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause an inconsistency between the 
> Memstore/HFile and the WAL, which can leave the slave cluster with more data 
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible for the HDFS sync RPC call to fail even though the data has 
> already been (perhaps partially) transported to the DataNodes and will 
> eventually be persisted. In that case the handler rolls back the Memstore, 
> and the HFile flushed later will also skip this record.
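
For illustration, a minimal sketch of the simplified write path quoted above; 
Memstore and Wal here are hypothetical stand-ins, not the real HBase classes. 
The point is that step 4 can roll back an edit whose WAL bytes are already 
durable on the DataNodes and visible to replication.

{code:java}
// Hypothetical sketch of the simplified write path from the description above.
public class SimplifiedWritePath {

    interface Memstore {
        void insert(byte[] record);
        void rollback(byte[] record);
    }

    interface Wal {
        void append(byte[] record);
        // May throw even though the appended bytes already reached the
        // DataNodes and will eventually be persisted (and replicated).
        void sync() throws Exception;
    }

    static void write(Memstore memstore, Wal wal, byte[] record) throws Exception {
        memstore.insert(record);        // 1. insert record into Memstore
        wal.append(record);             // 2. write record to WAL
        try {
            wal.sync();                 // 3. sync WAL
        } catch (Exception e) {
            // 4. rollback Memstore if sync fails. The record never reaches a
            // flushed HFile locally, yet the WAL bytes may still be durable
            // and may already have been shipped to the slave cluster.
            memstore.rollback(record);
            throw e;
        }
    }
}
{code}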



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
