[ https://issues.apache.org/jira/browse/HBASE-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144851#comment-15144851 ]
Hudson commented on HBASE-15252:
--------------------------------

FAILURE: Integrated in HBase-1.0 #1145 (See [https://builds.apache.org/job/HBase-1.0/1145/])
HBASE-15252 Data loss when replaying wal if HDFS timeout (zhangduo: rev 21ab1843c524c670bab54db9a0082d3439fa7baa)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogReader.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java

> Data loss when replaying wal if HDFS timeout
> --------------------------------------------
>
>                 Key: HBASE-15252
>                 URL: https://issues.apache.org/jira/browse/HBASE-15252
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.17
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Blocker
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
>         Attachments: HBASE-15252-addendum-0.98.patch, HBASE-15252-testcase.patch, HBASE-15252-v1.patch, HBASE-15252.patch
>
>
> This is a problem introduced by HBASE-13825, where we changed the exception type in the catch block of the {{readNext}} method of {{ProtobufLogReader}}.
> {code:title=ProtobufLogReader.java}
>       try {
>         ......
>         ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
>           (int)size);
>       } catch (IOException ipbe) { // <------ used to be InvalidProtocolBufferException
>         throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
>           originalPosition + ", currentPosition=" + this.inputStream.getPos() +
>           ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
>       }
> {code}
> Here, if the {{inputStream}} throws an {{IOException}} due to a timeout or similar failure, we simply convert it to an {{EOFException}}, and at the bottom of this method we ignore {{EOFException}} and return false. This causes the upper layer to think we have reached the end of file.
> So during replay we treat the HDFS timeout error as a normal end of file, causing data loss.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
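To make the failure mode concrete, here is a hedged, self-contained sketch (not the actual HBase patch; class and method names are simplified stand-ins): a broad {{catch (IOException)}} converts *any* stream error, including an HDFS timeout, into an {{EOFException}} that the caller treats as a clean end of file, while narrowing the catch to the protobuf parse failure lets real I/O errors propagate.

```java
import java.io.EOFException;
import java.io.IOException;

// Hypothetical sketch of the fix direction described in the issue: only a
// genuine protobuf parse failure (truncated tail of the WAL) should be
// reported as EOF; any other IOException, such as an HDFS timeout, must
// propagate so replay fails instead of silently losing data.
public class ReadNextSketch {

    // Stand-in for com.google.protobuf.InvalidProtocolBufferException.
    static class InvalidProtocolBufferException extends IOException {
        InvalidProtocolBufferException(String msg) { super(msg); }
    }

    /**
     * Simplified readNext: returns true when an entry is parsed, throws
     * EOFException for a truncated protobuf (ignorable, genuine EOF), and
     * lets every other IOException propagate to the caller.
     */
    static boolean readNext(IOException simulatedFailure) throws IOException {
        try {
            if (simulatedFailure != null) {
                throw simulatedFailure;
            }
            return true; // entry parsed successfully
        } catch (InvalidProtocolBufferException ipbe) {
            // Narrow catch: only a garbled PB at the tail becomes EOF.
            throw (EOFException) new EOFException("Invalid PB, EOF?").initCause(ipbe);
        }
        // Note: a plain IOException (e.g. a timeout) is NOT caught above,
        // so it propagates instead of being converted into an EOF.
    }

    public static void main(String[] args) throws IOException {
        // Normal read succeeds.
        System.out.println(readNext(null));

        // A truncated protobuf surfaces as EOFException (ignorable EOF).
        try {
            readNext(new InvalidProtocolBufferException("truncated"));
        } catch (EOFException eof) {
            System.out.println("EOF: " + eof.getMessage());
        }

        // A timeout-style IOException now propagates rather than being
        // silently treated as end of file.
        try {
            readNext(new IOException("HDFS timeout"));
        } catch (EOFException eof) {
            System.out.println("wrongly treated as EOF");
        } catch (IOException ioe) {
            System.out.println("IOException: " + ioe.getMessage());
        }
    }
}
```

The key design point mirrors the issue: the caller of {{readNext}} interprets {{EOFException}} as "end of WAL, stop replaying", so which exceptions get converted into it decides whether a transient HDFS failure looks like a clean end of file.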