[
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074699#comment-15074699
]
Duo Zhang commented on HBASE-14949:
-----------------------------------
{quote}
This only happens when all datanodes crash, right?
{quote}
That depends on how you define "crash". A network problem lasting long enough could
also lead to this. And if all the datanodes actually die, the problem becomes data loss...
Still, I'd say this is irrelevant to HBASE-15046...
> Skip duplicate entries when replaying WAL.
> ------------------------------------------
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
> Issue Type: Sub-task
> Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch,
> HBASE-14949_v2.patch
>
>
> Per the HBASE-14004 design, there can be duplicate entries across WALs. This
> happens when an hflush fails: we close the old WAL at the 'acked hflushed'
> length, then open a new WAL and write the unacked hflushed entries into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries when replaying. I think this does no harm
> to the current logic, so maybe we do it first.
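To make the skip-on-replay idea concrete, here is a minimal sketch in Java. It assumes each WAL entry carries a region name and a per-region monotonically increasing sequence id, so an entry is a duplicate exactly when its id is at or below the highest id already applied for that region. All class and method names here are illustrative, not HBase's actual replay code.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for a WAL entry: one edit with its region and
// per-region sequence id (assumed monotonically increasing).
final class WalEntry {
  final String region;
  final long sequenceId;
  WalEntry(String region, long sequenceId) {
    this.region = region;
    this.sequenceId = sequenceId;
  }
}

final class ReplayDeduplicator {
  // Highest sequence id already applied, tracked per region.
  private final Map<String, Long> maxAppliedSeqId = new HashMap<>();

  /** Replays entries from the old and new WALs in order, skipping overlaps. */
  void replay(List<WalEntry> entries) {
    for (WalEntry e : entries) {
      long applied = maxAppliedSeqId.getOrDefault(e.region, -1L);
      if (e.sequenceId <= applied) {
        // Duplicate: this edit already came through the old WAL's
        // acked-hflush prefix, so skipping it is safe.
        continue;
      }
      apply(e);
      maxAppliedSeqId.put(e.region, e.sequenceId);
    }
  }

  private void apply(WalEntry e) {
    // Placeholder for actually re-applying the edit to the region.
    System.out.println("applying " + e.region + "#" + e.sequenceId);
  }
}
{code}

Under these assumptions, replaying the old WAL followed by the new WAL applies each edit exactly once: the overlapping prefix of the new WAL is filtered out by the sequence-id check.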