[ https://issues.apache.org/jira/browse/HBASE-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13932052#comment-13932052 ]
Hudson commented on HBASE-10718:
--------------------------------
FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #48 (See [https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/48/])
HBASE-10718 TestHLogSplit fails when it sets a KV size to be negative (Esteban Gutierrez) (apurtell: rev 1576789)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/TestSerialization.java
> TestHLogSplit fails when it sets a KV size to be negative
> ---------------------------------------------------------
>
> Key: HBASE-10718
> URL: https://issues.apache.org/jira/browse/HBASE-10718
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.98.0, 0.99.0, 0.96.1.1, 0.94.17
> Reporter: Esteban Gutierrez
> Assignee: Esteban Gutierrez
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: HBASE-10718.v0.txt, HBASE-10718.v1.txt, HBASE-10718.v2.txt, HBASE-10718.v3-0.94.txt, HBASE-10718.v3.txt
>
>
> From [~jdcryans]:
> {code}
> java.lang.NegativeArraySizeException
> at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2259)
> at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2266)
> at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueDecoder.parseCell(KeyValueCodec.java:64)
> at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:46)
> at org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFields(WALEdit.java:222)
> at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2114)
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2242)
> at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:245)
> at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:214)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:799)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:727)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:307)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:217)
> at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:180)
> at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.testMiddleGarbageCorruptionSkipErrorsReadsHalfOfFile(TestHLogSplit.java:363)
> ...
> {code}
> It seems to me that we're reading a negative length and using it to create
> the byte array, and since the resulting exception is not an IOException we
> don't treat the log as corrupted. I'm surprised that not a single build has
> failed like this in the past 3 years.
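For illustration only, below is a minimal sketch of the kind of guard described above: validate the serialized KeyValue length before allocating the byte array, and surface a bad length as an IOException so WAL splitting can handle the entry as a corrupted log instead of dying on NegativeArraySizeException. This is not the attached patch; the class wrapper and field names are assumed, loosely following the 0.94 KeyValue Writable layout.
{code}
import java.io.DataInput;
import java.io.IOException;

// Sketch only: names are illustrative, not copied from HBASE-10718.v3-0.94.txt.
public class KeyValueReadSketch {
  private byte[] bytes;
  private int offset;
  private int length;

  // Reject an impossible (negative) serialized length up front and report it
  // as an IOException rather than letting "new byte[length]" throw
  // NegativeArraySizeException.
  public void readFields(int length, final DataInput in) throws IOException {
    if (length < 0) {
      throw new IOException("Invalid KeyValue length " + length);
    }
    this.bytes = new byte[length];
    in.readFully(this.bytes);
    this.offset = 0;
    this.length = length;
  }

  public void readFields(final DataInput in) throws IOException {
    readFields(in.readInt(), in);
  }
}
{code}
With a check like this, the splitter's existing IOException handling for corrupted logs applies to a garbled KeyValue length as well, which is the behavior the reporter expected above.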
--
This message was sent by Atlassian JIRA
(v6.2#6252)