Hi there,

We've encountered the following compaction failure (#1) for a Phoenix table
and are not sure how to make sense of it. Using the HBase row key from the
error, we are able to query the data directly from the hbase shell, and on
examining it there isn't anything immediately obvious wrong: the values
appear to be stored consistently with their Phoenix data types (#2). When
querying Phoenix for the given row through sqlline, the row is returned if
only primary key columns are selected, but the query does not return if
non-primary-key columns are selected (#3).

A few questions we're hoping to get some help with:

a. Are we correct in understanding the error message to indicate an issue
with the data for the row key (
\x05\x80\x00\x00\x00\x00\x1FT\x9C\x80\x00\x00\x00\x00\x1C}E\x00\x04\x80\x00\x00\x00\x00\x1D\x0F\x19\x80\x00\x00\x00\x00Ij\x9D\x80\x00\x00\x00\x01\xD1W\x13)?
We are also not sure what to make of the string "
1539019716378.3dcf2b1e057915feb74395d9711ba4ad." that is appended to the
row key...
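For context on that string, our own reading of the HBase internals (hedged;
the exact bytes hashed for the encoded name are an assumption on our part) is
that a full region name has the form <table>,<startKey>,<regionId>.<encodedName>.
where regionId is the region's creation timestamp in milliseconds and
encodedName is a 32-character hex MD5 digest of the region name, so neither is
part of the row key itself. A quick sketch:

```python
import datetime
import hashlib

# The long number from the error message, read as an epoch timestamp in ms.
region_id_ms = 1539019716378
print(datetime.datetime.utcfromtimestamp(region_id_ms / 1000))  # a date in Oct 2018

# The encoded name is derived roughly like this (illustrative only; the
# precise region-name bytes that HBase hashes are an assumption here).
name = b"qa2.ADGROUPS,<startKey>,1539019716378"
print(hashlib.md5(name).hexdigest())  # a 32-char hex string, same shape as 3dcf2b1e...
```

The timestamp lands in October 2018, shortly after the cell timestamps in #2,
which is consistent with this region having been created around then.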

b. What is out of bounds here? It's not immediately clear what
StatisticsScanner and FastDiffDeltaEncoder are tripping over...

c. Is it normal for the hbase shell to render some bytes of a hex string as
ASCII characters? We see this in the row key as well as in the encoded
column names and values. We are not sure whether that is causing any issues
or whether it is just a display artifact we can safely ignore.
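To illustrate what we mean, our understanding is that the shell escapes only
non-printable bytes and passes printable ASCII through, roughly like HBase's
Bytes.toStringBinary (a rough sketch of that behavior, not the actual
implementation):

```python
def to_string_binary(data: bytes) -> str:
    """Printable ASCII bytes pass through as-is; all others become \\xNN."""
    out = []
    for b in data:
        if 32 <= b < 127:            # printable ASCII range
            out.append(chr(b))
        else:
            out.append("\\x%02X" % b)
    return "".join(out)

# e.g. the 'T' and '}' in the row key are just bytes 0x54 and 0x7D rendered
# as their ASCII characters:
print(to_string_binary(b"\x1fT\x9c"))  # prints \x1FT\x9C
```

If that reading is right, the mixed hex/ASCII output is purely cosmetic.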

*#1 Compaction Failure*

Compaction failed Request =
regionName=qa2.ADGROUPS,\x05\x80\x00\x00\x00\x00\x1FT\x9C\x80\x00\x00\x00\x00\x1C}E\x00\x04\x80\x00\x00\x00\x00\x1D\x0F\x19\x80\x00\x00\x00\x00Ij\x9D\x80\x00\x00\x00\x01\xD1W\x13,1539019716378.3dcf2b1e057915feb74395d9711ba4ad.,
storeName=AG, fileCount=4, fileSize=316.0 M (315.8 M, 188.7 K, 6.8 K,
14.2 K), priority=1, time=40613533856170784
java.lang.IndexOutOfBoundsException
        at java.nio.Buffer.checkBounds(Buffer.java:567)
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:149)
        at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:465)
        at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:516)
        at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.next(BufferedDataBlockEncoder.java:618)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1277)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:180)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:588)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:458)
        at org.apache.phoenix.schema.stats.StatisticsScanner.next(StatisticsScanner.java:69)
        at org.apache.phoenix.schema.stats.StatisticsScanner.next(StatisticsScanner.java:76)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:334)
        at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:106)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:131)
        at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1245)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1852)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:529)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:566)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)


*#2 Data on the HBase level*

hbase(main):002:0> get 'qa2.ADGROUPS',
"\x05\x80\x00\x00\x00\x00\x1FT\x9C\x80\x00\x00\x00\x00\x1C}E\x00\x04\x80\x00\x00\x00\x00\x1D\x0F\x19\x80\x00\x00\x00\x00Ij\x9D\x80\x00\x00\x00\x01\xD1W\x13"
COLUMN                              CELL
 AG:\x00\x00\x00\x00                timestamp=1539019457506, value=x
 AG:\x80\x0F                        timestamp=1539019457506,
value=D:USA_AK:Nightmute:1456903:hotel
 AG:\x80\x12                        timestamp=1539019457506,
value=ACTIVE
 AG:\x80\x13                        timestamp=1539019457506, value=ADD
 AG:\x80#                           timestamp=1539019457506,
value=\x80\x00\x00\x00\x00\x00'\x10
 AG:\x80'                           timestamp=1539019457506,
value=\x80\x00\x01[\x97\x02\x02X\x00\x00\x00\x00
 AG:\x80(                           timestamp=1539019457506,
value=\x80\x00\x00\x00\x00\x1D\xEC\xA4
 AG:\x808                           timestamp=1539019457506,
value=\x00
8 row(s) in 0.0510 seconds


*#3 Data in Phoenix via sqlline*

0: jdbc:phoenix:qa2-zod-journalnode-lv-101,qa> select "cstId",
"cltId", "pubId", "accId","cpgnId", "id" from "qa2".adgroups where
"id" = 30496531;
+----------+----------+--------+----------+----------+-----------+
|  cstId   |  cltId   | pubId  |  accId   |  cpgnId  |    id     |
+----------+----------+--------+----------+----------+-----------+
| 2053276  | 1867077  | 4      | 1904409  | 4811421  | 30496531  |
+----------+----------+--------+----------+----------+-----------+
1 row selected (0.095 seconds)

0: jdbc:phoenix:qa2-zod-journalnode-lv-101,qa> select "cstId",
"cltId", "pubId", "accId","cpgnId", "id", "stts" from "qa2".adgroups
where "id" = 30496531;

*[hangs]*
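As a cross-check that the row key itself is well-formed, we decoded its
BIGINT components ourselves (our own sketch, assuming Phoenix's BIGINT
serialization of an 8-byte big-endian value with the sign bit flipped for
sort order; the leading \x05 byte, which we take to be a salt byte, and the
\x00\x04 pair are left undecoded):

```python
# The row key bytes from the compaction error, split at assumed field
# boundaries; the field names are our guesses from the sqlline output.
ROW_KEY = (b"\x05"                              # leading byte (salt?, assumption)
           b"\x80\x00\x00\x00\x00\x1fT\x9c"     # cstId?
           b"\x80\x00\x00\x00\x00\x1c}E"        # cltId?
           b"\x00\x04"                          # undecoded (pubId-related?)
           b"\x80\x00\x00\x00\x00\x1d\x0f\x19"  # accId?
           b"\x80\x00\x00\x00\x00Ij\x9d"        # cpgnId?
           b"\x80\x00\x00\x00\x01\xd1W\x13")    # id?

def decode_phoenix_bigint(b: bytes) -> int:
    """Flip the sign bit of the first byte, then read signed big-endian."""
    return int.from_bytes(bytes([b[0] ^ 0x80]) + b[1:], "big", signed=True)

for name, start in [("cstId", 1), ("cltId", 9), ("accId", 19),
                    ("cpgnId", 27), ("id", 35)]:
    print(name, decode_phoenix_bigint(ROW_KEY[start:start + 8]))
```

Decoded this way, the five fields come out as 2053276, 1867077, 1904409,
4811421, and 30496531, matching the values the primary-key-only query above
returns, which is why we believe the key bytes themselves are consistent.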


Thanks in advance for your help!
