[ https://issues.apache.org/jira/browse/HBASE-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Kyle Purtell resolved HBASE-6719.
----------------------------------------
      Assignee: (was: terry zhang)
    Resolution: Abandoned

> [replication] Data will be lost if opening an HLog fails more than maxRetriesMultiplier times
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6719
>                 URL: https://issues.apache.org/jira/browse/HBASE-6719
>             Project: HBase
>          Issue Type: Bug
>          Components: Replication
>    Affects Versions: 2.0.0
>            Reporter: terry zhang
>            Priority: Critical
>         Attachments: 6719.txt, hbase-6719.patch
>
>
> Please take a look at the code below:
> {code:title=ReplicationSource.java|borderStyle=solid}
> protected boolean openReader(int sleepMultiplier) {
>   try {
>     ...
>   } catch (IOException ioe) {
>     LOG.warn(peerClusterZnode + " Got: ", ioe);
>     // TODO Need a better way to determine if a file is really gone but
>     // TODO without scanning all logs dir
>     if (sleepMultiplier == this.maxRetriesMultiplier) {
>       // Opening the file failed more than maxRetriesMultiplier (default 10) times
>       LOG.warn("Waited too long for this file, considering dumping");
>       return !processEndOfFile();
>     }
>   }
>   return true;
> }
>
> protected boolean processEndOfFile() {
>   if (this.queue.size() != 0) {
>     // The current HLog is skipped here: data loss
>     this.currentPath = null;
>     this.position = 0;
>     return true;
>   } else if (this.queueRecovered) {
>     // The failover replication source thread is terminated here: data loss
>     this.manager.closeRecoveredQueue(this);
>     LOG.info("Finished recovering the queue");
>     this.running = false;
>     return true;
>   }
>   return false;
> }
> {code}
> Sometimes HDFS hits a transient problem while the HLog file itself is fine. After HDFS recovers, the skipped data is lost and cannot be recovered on the slave cluster.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
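The failure mode described above boils down to one decision: after the retry budget is exhausted, the reader dumps the current log without checking whether the file is really gone or HDFS is merely having a transient outage. The standalone sketch below illustrates one possible guard for that decision. All names in it (`ReaderSketch`, `shouldDumpCurrentLog`, `logExists`) are hypothetical; it is not the code from the attached patches, just an illustration of the idea.

```java
/**
 * Hypothetical sketch of the guard discussed in this issue: only give
 * up on the current log when the file is confirmed gone, so a transient
 * HDFS outage (file still present) keeps retrying instead of silently
 * skipping edits.
 */
public class ReaderSketch {

    /**
     * Decide whether to abandon the current log after repeated open
     * failures.
     *
     * @param sleepMultiplier      how many times opening has failed so far
     * @param maxRetriesMultiplier retry budget (HBase default is 10)
     * @param logExists            whether the log file is still present
     */
    static boolean shouldDumpCurrentLog(int sleepMultiplier,
                                        int maxRetriesMultiplier,
                                        boolean logExists) {
        if (sleepMultiplier < maxRetriesMultiplier) {
            return false; // still within the retry budget, keep trying
        }
        // Retries exhausted: dump the log only if it is really gone.
        return !logExists;
    }

    public static void main(String[] args) {
        // Transient HDFS failure: file still exists, so keep retrying.
        System.out.println(shouldDumpCurrentLog(10, 10, true));  // false
        // File genuinely deleted: safe to move on.
        System.out.println(shouldDumpCurrentLog(10, 10, false)); // true
    }
}
```

In real code the `logExists` check would be an HDFS existence probe on the current log path, which is exactly the "better way to determine if a file is really gone" that the TODO comment in `openReader` asks for.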