[ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047976#comment-15047976 ]

Phil Yang commented on HBASE-14004:
-----------------------------------

{quote}
And as per our design, we only use hsync to ensure data consistency in 
replication, but data loss can still happen because we do NOT use 'hsync' in 
the write path. If so, why NOT just use hflush?
{quote}

This is why I think hsync should be configurable. For a database, we should 
mostly guarantee data persistence, but sometimes we will sacrifice it for 
higher performance. For example, Redis's AOF can be configured to fsync on 
every write, every second, or never, and users can choose according to their 
requirements. Although "every second" will still lose data after a crash, 
users are guaranteed to lose at most one second of data, which is still a 
valuable guarantee. Currently, however, HBase gives no such guarantee: users 
may think their data has already been saved to disk, while we have no idea 
when it will actually reach disk. Both the WAL and the SSTables have this 
issue. And obviously this will result in data inconsistency between the two 
clusters, too.
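
To make this concrete, below is a minimal sketch of what a configurable sync 
policy could look like, modeled on Redis's appendfsync modes. The SyncPolicy 
values, the Wal interface and every other name here are hypothetical 
illustrations, not existing HBase APIs or configuration options.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SyncPolicyExample {

  // Hypothetical policy values, analogous to Redis appendfsync always/everysec/no.
  enum SyncPolicy { HSYNC_EVERY_WRITE, HSYNC_EVERY_SECOND, HFLUSH_ONLY }

  // Hypothetical WAL abstraction; hflush/hsync mirror the HDFS stream semantics.
  interface Wal {
    void hflush(); // data pushed to DataNode memory, not necessarily on disk
    void hsync();  // data forced to disk on the DataNodes
  }

  private final Wal wal;
  private final SyncPolicy policy;
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  SyncPolicyExample(Wal wal, SyncPolicy policy) {
    this.wal = wal;
    this.policy = policy;
    if (policy == SyncPolicy.HSYNC_EVERY_SECOND) {
      // A background hsync bounds data loss after a crash to roughly one second.
      scheduler.scheduleAtFixedRate(wal::hsync, 1, 1, TimeUnit.SECONDS);
    }
  }

  // Called after each WAL append.
  void onAppend() {
    if (policy == SyncPolicy.HSYNC_EVERY_WRITE) {
      wal.hsync();   // strongest durability, highest per-write latency
    } else {
      wal.hflush();  // low latency; persistence on disk is not guaranteed here
    }
  }
}
{code}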

I will not oppose it if we only fix (or fix first) the issue of replicating 
data that has been rolled back from the MemStore. And we can use the acked 
length from hflush, which means there won't be additional latency between the 
two clusters. But I think there should be follow-up work on data persistence, 
and we need a configurable hsync in HBase :)
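
As an illustration of the "acked length" idea, here is a small sketch of a 
shipper that only reads WAL entries up to the offset acked by hflush. The 
WalSource and Sink interfaces are hypothetical and do not correspond to the 
real ReplicationSource API.

{code:java}
import java.util.List;

public class AckedLengthShipper {

  // Hypothetical view of a WAL from the replication side.
  interface WalSource {
    long ackedLength();                        // highest WAL offset acked by a successful hflush
    List<byte[]> readEntriesUpTo(long offset); // entries that end at or before the given offset
  }

  // Hypothetical sink representing the peer cluster.
  interface Sink {
    void ship(List<byte[]> entries);
  }

  // Ship only entries whose sync was acked on the origin cluster.
  static void shipOnce(WalSource source, Sink sink) {
    long safeOffset = source.ackedLength();
    List<byte[]> entries = source.readEntriesUpTo(safeOffset);
    if (!entries.isEmpty()) {
      // Entries beyond safeOffset may belong to writes whose sync failed and
      // were rolled back from the MemStore, so they are never shipped.
      sink.ship(entries);
    }
  }
}
{code}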

> [Replication] Inconsistency between Memstore and WAL may result in data in 
> remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause inconsistency between the 
> memstore/HFile and the WAL, which causes the slave cluster to have more data 
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already 
> (perhaps only partially) been transported to the DataNodes and finally gets 
> persisted. As a result, the handler will roll back the Memstore, and the 
> later flushed HFile will also skip this record.
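
For reference, a simplified sketch of the write path described above and of 
the failure mode it creates. All class and method names are illustrative 
only, not HBase's real internals.

{code:java}
import java.io.IOException;

public class WritePathSketch {

  // Illustrative interfaces only; not HBase's real classes.
  interface Memstore { void insert(byte[] record); void rollback(byte[] record); }
  interface Wal { void append(byte[] record); void sync() throws IOException; }

  static void write(Memstore memstore, Wal wal, byte[] record) {
    memstore.insert(record);   // 1. insert record into Memstore
    wal.append(record);        // 2. write record to WAL
    try {
      wal.sync();              // 3. sync WAL
    } catch (IOException e) {
      // 4. rollback Memstore if 3 fails.
      // The sync RPC can fail on the client even though the bytes already
      // reached the DataNodes and finally get persisted. The record is then
      // gone from the origin (rolled back, never flushed to an HFile) but is
      // still present in the WAL that replication reads, so the slave cluster
      // can end up with data the master does not have.
      memstore.rollback(record);
    }
  }
}
{code}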



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
