[ https://issues.apache.org/jira/browse/HBASE-26527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454663#comment-17454663 ]

Hudson commented on HBASE-26527:
--------------------------------

Results for branch master
        [build #462 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/]:
 (x) *{color:red}-1 overall{color}*
----
details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()
> ------------------------------------------------------------------
>
>                 Key: HBASE-26527
>                 URL: https://issues.apache.org/jira/browse/HBASE-26527
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.2.7, 3.0.0-alpha-2
>            Reporter: Istvan Toth
>            Assignee: Istvan Toth
>            Priority: Major
>             Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9
>
>
> While investigating a Phoenix crash, I found a possible problem in 
> KeyValueUtil.
> When using Phoenix (at least with older versions), we need to configure 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as the WAL codec 
> in HBase.
> This codec eventually serializes standard (not Phoenix-specific) WAL 
> entries to the WAL file, and internally converts the Cell objects to 
> KeyValue objects by building a new byte[].
> This fails with an ArrayIndexOutOfBoundsException: we allocate a 
> byte[] the size of Cell.getSerializedSize(), but we seem to be 
> processing a Cell that does not actually serialize the column family and 
> later fields.
> However, we build a traditional KeyValue object for serialization, 
> which does serialize them, so we run out of bytes.
> I think that since we are writing a KeyValue, we should not rely on the 
> getSerializedSize() method of the source cell, but rather calculate the 
> backing array size based on how KeyValue expects its data to be serialized.
> The stack trace for reference:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 9787
>         at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502)
>         at org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142)
>         at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156)
>         at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133)
>         at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97)
>         at org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214)
>         at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218)
>         at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:65)
>         at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:931)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1075)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:964)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:873)
>         at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Note that I am still not sure exactly what triggers this bug; one possibility 
> is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue.
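
The size mismatch described in the issue can be sketched as follows. This is a hedged illustration, not HBase's actual code: the class and helper names and the example cell dimensions are hypothetical, while the layout constants follow the classic KeyValue wire format (4-byte key length, 4-byte value length, 2-byte row length, 1-byte family length, 8-byte timestamp, 1-byte type). The idea is the one proposed above: size the backing array from the KeyValue layout itself rather than trusting the source cell's getSerializedSize(), which a key-only cell may under-report.

```java
// Sketch (hypothetical names; not the actual HBase implementation):
// compute the backing-array size from the KeyValue wire layout instead
// of trusting Cell.getSerializedSize() on an arbitrary Cell.
public class KeyValueLengthSketch {

    // KeyValue key section:
    //   rowlen(2) + row + famlen(1) + family + qualifier + timestamp(8) + type(1)
    static int keyLength(int rowLen, int famLen, int qualLen) {
        return 2 + rowLen + 1 + famLen + qualLen + 8 + 1;
    }

    // Full KeyValue: keylen(4) + valuelen(4) + key + value
    static int keyValueLength(int rowLen, int famLen, int qualLen, int valLen) {
        return 4 + 4 + keyLength(rowLen, famLen, qualLen) + valLen;
    }

    public static void main(String[] args) {
        // Hypothetical cell: 3-byte row, 2-byte family, 4-byte qualifier, 5-byte value.
        // A key-only representation of this cell would report a smaller serialized
        // size, which is exactly the mismatch that overruns the allocated byte[].
        System.out.println(keyValueLength(3, 2, 4, 5)); // prints 34
    }
}
```

Sizing the array this way makes the allocation agree with what appendToByteArray() actually writes, regardless of how the source Cell chooses to serialize itself.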



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
