[
https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Duo Zhang updated HBASE-14004:
------------------------------
Description:
Looks like the current write path can cause an inconsistency between the
memstore/HFile and the WAL, which can leave the slave cluster with more data
than the master cluster.
The simplified write path looks like:
1. insert record into Memstore
2. write record to WAL
3. sync WAL
4. rollback Memstore if 3 fails
It's possible that the HDFS sync RPC call fails, but the data has already been
(perhaps only partially) transported to the DataNodes and eventually gets
persisted. As a result, the handler will roll back the Memstore, and the later
flushed HFile will also skip this record.
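Below is a minimal, self-contained sketch of the problematic ordering above; the
Memstore/WAL types and the rollback bookkeeping are simplified stand-ins for the
real HRegion/FSHLog internals, not actual HBase code:
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Toy model of the ordering problem: the Memstore is updated before the WAL
// sync, so a failed sync triggers a rollback even though the WAL bytes may
// already be on the DataNodes and later become durable (and replicable).
public class OldWritePathSketch {

  static class Memstore {
    final List<String> cells = new ArrayList<>();
    int add(String cell) { cells.add(cell); return cells.size() - 1; }
    void rollback(int index) { cells.remove(index); }
  }

  interface WAL {
    long append(String edit);
    // May throw even though the data already reached the DataNodes.
    void sync(long txid) throws IOException;
  }

  static void write(Memstore memstore, WAL wal, String record) throws IOException {
    int memstoreIndex = memstore.add(record);   // 1. insert record into Memstore
    long txid = wal.append(record);             // 2. write record to WAL
    try {
      wal.sync(txid);                           // 3. sync WAL
    } catch (IOException e) {
      memstore.rollback(memstoreIndex);         // 4. rollback Memstore if 3 fails
      // The edit may still be durable in the WAL, so replication can ship it to
      // the slave cluster while the master's flushed HFiles never contain it.
      throw e;
    }
  }
}
{code}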
==================================
This is a long-lived issue. The above problem is solved by the write path
reordering, as we now sync the WAL before modifying the memstore. But the
problem may still exist, because the replication thread may read the new data
before we return from hflush. See this document for more details:
https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
So we need to keep a synced length in the WAL and tell the replication WAL
reader to use it as the read limit when reading this WAL file.
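A rough sketch of what the fix could look like, assuming the WAL can expose the
length covered by the last successful hflush; the WALFile interface and the
getSyncedLength/readNextEdit names below are hypothetical, not an existing
HBase API:
{code:java}
import java.io.IOException;

// Sketch: the replication WAL reader only reads up to the length acknowledged
// by a successful hflush, so it never ships edits whose sync is still in
// flight (and might still fail and be rolled back on the master).
public class SyncedLengthReaderSketch {

  interface WALFile {
    /** Length of the WAL file covered by the last successful sync/hflush. */
    long getSyncedLength();
    /** Current position of the replication reader in this WAL file. */
    long getReaderPosition();
    /** Read the next edit, never reading past the given limit. */
    String readNextEdit(long readLimit) throws IOException;
  }

  static String nextEditForReplication(WALFile wal) throws IOException {
    long limit = wal.getSyncedLength();
    if (wal.getReaderPosition() >= limit) {
      // Nothing fully synced beyond our position yet; wait for the next sync.
      return null;
    }
    return wal.readNextEdit(limit);
  }
}
{code}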
was:
Looks like the current write path can cause an inconsistency between the
memstore/HFile and the WAL, which can leave the slave cluster with more data
than the master cluster.
The simplified write path looks like:
1. insert record into Memstore
2. write record to WAL
3. sync WAL
4. rollback Memstore if 3 fails
It's possible that the HDFS sync RPC call fails, but the data has already been
(perhaps only partially) transported to the DataNodes and eventually gets
persisted. As a result, the handler will roll back the Memstore, and the later
flushed HFile will also skip this record.
> [Replication] Inconsistency between Memstore and WAL may result in data in
> remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-14004
> URL: https://issues.apache.org/jira/browse/HBASE-14004
> Project: HBase
> Issue Type: Bug
> Components: regionserver, Replication
> Reporter: He Liangliang
> Assignee: Duo Zhang
> Priority: Critical
> Labels: replication, wal
> Fix For: 3.0.0, 2.0.0-alpha-4
>
> Attachments: HBASE-14004.patch, HBASE-14004-v1.patch,
> HBASE-14004-v2.patch
>
>
> Looks like the current write path can cause an inconsistency between the
> memstore/HFile and the WAL, which can leave the slave cluster with more data
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already
> been (perhaps only partially) transported to the DataNodes and eventually
> gets persisted. As a result, the handler will roll back the Memstore, and the
> later flushed HFile will also skip this record.
> ==================================
> This is a long-lived issue. The above problem is solved by the write path
> reordering, as we now sync the WAL before modifying the memstore. But the
> problem may still exist, because the replication thread may read the new data
> before we return from hflush. See this document for more details:
> https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
> So we need to keep a synced length in the WAL and tell the replication WAL
> reader to use it as the read limit when reading this WAL file.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)