[
https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046226#comment-15046226
]
Yu Li commented on HBASE-14004:
-------------------------------
bq. My idea may be faster because it needn't send a request to zk
Considering the failover case (e.g. an RS crash), I guess we need to persist the
acked length somewhere like ZK, or else we will still replicate the non-acked
data to the slave cluster during recovery?
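For illustration, a minimal sketch of what I mean, written against the plain
ZooKeeper client API (the znode path and long encoding here are made up, not
the actual replication state layout):
{code:java}
import java.nio.ByteBuffer;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

/**
 * Sketch: persist the acked WAL length (the offset up to which edits were
 * successfully synced and kept in the Memstore), so that after an RS crash
 * replication of the recovered WAL stops there and does not ship the
 * rolled-back, non-acked edits to the slave. Hypothetical znode layout.
 */
public class AckedLengthTracker {
  private static final String BASE = "/hbase/replication/acked/"; // made-up path
  private final ZooKeeper zk;

  public AckedLengthTracker(ZooKeeper zk) {
    this.zk = zk;
  }

  /** Record the highest acked offset for the given WAL. */
  public void persistAckedLength(String walName, long ackedLength)
      throws KeeperException, InterruptedException {
    byte[] data = ByteBuffer.allocate(Long.BYTES).putLong(ackedLength).array();
    try {
      zk.setData(BASE + walName, data, -1); // -1: match any version
    } catch (KeeperException.NoNodeException e) {
      zk.create(BASE + walName, data,
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }
  }

  /** After failover, read the acked length back so recovery replicates
   *  nothing beyond it. */
  public long readAckedLength(String walName)
      throws KeeperException, InterruptedException {
    byte[] data = zk.getData(BASE + walName, false, null);
    return ByteBuffer.wrap(data).getLong();
  }
}
{code}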
bq. We run the risk of losing data when we just flush a MemStore into an
HFile
I believe HBASE-5954 tried to resolve the same problem, so I would suggest
taking a look there. :-)
> [Replication] Inconsistency between Memstore and WAL may result in data in
> remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-14004
> URL: https://issues.apache.org/jira/browse/HBASE-14004
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Reporter: He Liangliang
> Priority: Critical
> Labels: replication, wal
>
> Looks like the current write path can cause an inconsistency between the
> Memstore/HFile and the WAL, which can leave the slave cluster with more data
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible for the HDFS sync RPC call to fail even though the data has
> already been (perhaps partially) transferred to the DataNodes and eventually
> gets persisted. As a result, the handler will roll back the Memstore, and
> the HFile flushed later will also skip this record.
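As an illustration of the race described above, a minimal sketch with
hypothetical Memstore/WAL interfaces (not HBase's actual HRegion/FSHLog code):
{code:java}
import java.io.IOException;

/** Sketch of the described write path, steps 1-4, and where it goes wrong. */
public class WritePathSketch {
  interface Memstore {
    void add(byte[] record);
    void rollback(byte[] record);
  }
  interface WAL {
    void append(byte[] record);
    void sync() throws IOException; // an RPC to the HDFS pipeline
  }

  private final Memstore memstore;
  private final WAL wal;

  public WritePathSketch(Memstore memstore, WAL wal) {
    this.memstore = memstore;
    this.wal = wal;
  }

  public void write(byte[] record) throws IOException {
    memstore.add(record);   // 1. insert record into Memstore
    wal.append(record);     // 2. write record to WAL
    try {
      wal.sync();           // 3. sync WAL
    } catch (IOException e) {
      // 4. rollback Memstore when the sync fails. The problem: the sync RPC
      //    can fail on the client side after the bytes already reached the
      //    DataNodes, so the record may still become durable in the WAL and
      //    later be replicated to the slave cluster, while the master has
      //    rolled it back and the flushed HFile will skip it.
      memstore.rollback(record);
      throw e;
    }
  }
}
{code}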