[ 
https://issues.apache.org/jira/browse/HBASE-25827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashank Thillainathan updated HBASE-25827:
-------------------------------------------
    Description: 
Incrementing with a per-cell TTL and then flushing corrupts the HFile.

 

Reproducing the issue:
 Incrementing a single row and column with a per-cell TTL about 3,000 times 
and then flushing corrupts the HFile, leaving the table unusable.

Cause:
 Reading back the HFile shows that a duplicate TTL tag gets appended to each 
cell on every increment.

Although this case was already addressed in HBASE-18030, the corruption 
still occurs even with that patch applied.
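As a back-of-the-envelope illustration of why a few thousand duplicated tags are enough to produce the negative {{currTagsLen}} below: assuming each serialized TTL tag occupies 11 bytes (a 2-byte tag-length field, a 1-byte tag type, and an 8-byte TTL value) and a cell's total tags length is stored in a 2-byte field decoded as a signed short, the accumulated length wraps negative once it exceeds 32,767 bytes. This is a hypothetical sketch of the arithmetic, not HBase source code; the class and method names are made up for illustration.
{code:java}
// Hypothetical sketch, not HBase source. Assumed on-disk layout per TTL tag:
// 2-byte tag-length field + 1-byte tag type + 8-byte TTL value = 11 bytes.
public class TagsLenOverflow {
    static final int TTL_TAG_SERIALIZED_SIZE = 2 + 1 + 8; // 11 bytes (assumed)

    // Total tags length of a cell, as decoded from the 2-byte (signed short)
    // tags-length field after n duplicated TTL tags have been appended.
    static short decodedTagsLen(int duplicatedTags) {
        int total = duplicatedTags * TTL_TAG_SERIALIZED_SIZE;
        return (short) total; // wraps negative once total exceeds 32767
    }

    public static void main(String[] args) {
        System.out.println(decodedTagsLen(1));    // 11: a single TTL tag is fine
        System.out.println(decodedTagsLen(2979)); // -32767: 32769 bytes wraps negative
        System.out.println(decodedTagsLen(3000)); // -32536: still negative
    }
}
{code}
Under these assumptions, 2,979 duplicated tags yield 32,769 bytes, which decodes to -32767, consistent with the "Invalid currTagsLen -32767" message in the trace below and with corruption appearing around 3,000 increments.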
{code:java}
java.lang.IllegalStateException: Invalid currTagsLen -32767. Block offset: 
16665, block length: 65596, position: 0 (without header). 
path=hdfs://hdfs/file/path
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkTagsLen(HFileReaderImpl.java:642)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:630)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.updateCurrentBlock(HFileReaderImpl.java:1206)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.loadBlockAndSeekToKey(HFileReaderImpl.java:1149)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:863)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:837)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:347)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:256)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:469)
        at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:369)
        at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:311)
        at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:275)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:1038)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:1029)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekOrSkipToNextColumn(StoreScanner.java:764)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:695)
        at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6593)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6757)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6527)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:6504)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:6491)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7458)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7436)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:8123)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.reckonDeltasByStore(HRegion.java:8003)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.reckonDeltas(HRegion.java:7958)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.doDelta(HRegion.java:7805)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7767)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:734)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:877)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2705)
        at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42290)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
{code}

  was:
Incrementing with a per-cell TTL and then flushing corrupts the HFile.

 

Reproducing the issue:
 Incrementing a single row and column with a per-cell TTL about 3,000 times 
and then flushing corrupts the HFile, leaving the table unusable.


 Cause:
 Reading back the HFile shows that a duplicate TTL tag gets appended to each 
cell on every increment.

Although this case was already addressed in HBASE-18030, the corruption 
still occurs even with that patch applied.
{code:java}
java.lang.IllegalStateException: Invalid currTagsLen -31260. Block offset: 
250962, block length: 76568, position: 42207 (without header). 
path=hdfs://hdfs/file/path
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkTagsLen(HFileReaderImpl.java:642)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:630)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1080)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
        at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654)
        at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:388)
        at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:327)
        at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
        at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
        at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1432)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2192)
        at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:577)
        at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:619)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
{code}


> Per Cell TTL tags get duplicated with increments causing tags length overflow
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-25827
>                 URL: https://issues.apache.org/jira/browse/HBASE-25827
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 2.1.9, 2.2.6
>            Reporter: Shashank Thillainathan
>            Priority: Critical
>
> Incrementing with a per-cell TTL and then flushing corrupts the HFile.
>  
> Reproducing the issue:
>  Incrementing a single row and column with a per-cell TTL about 3,000 times 
> and then flushing corrupts the HFile, leaving the table unusable.
> Cause:
>  Reading back the HFile shows that a duplicate TTL tag gets appended to each 
> cell on every increment.
> Although this case was already addressed in HBASE-18030, the corruption 
> still occurs even with that patch applied.
> {code:java}
> java.lang.IllegalStateException: Invalid currTagsLen -32767. Block offset: 
> 16665, block length: 65596, position: 0 (without header). 
> path=hdfs://hdfs/file/path
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkTagsLen(HFileReaderImpl.java:642)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:630)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.updateCurrentBlock(HFileReaderImpl.java:1206)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.loadBlockAndSeekToKey(HFileReaderImpl.java:1149)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:863)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:837)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:347)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:256)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:469)
>         at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:369)
>         at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:311)
>         at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:275)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:1038)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:1029)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekOrSkipToNextColumn(StoreScanner.java:764)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:695)
>         at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6593)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6757)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6527)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:6504)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:6491)
>         at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7458)
>         at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7436)
>         at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:8123)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.reckonDeltasByStore(HRegion.java:8003)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.reckonDeltas(HRegion.java:7958)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.doDelta(HRegion.java:7805)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7767)
>         at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:734)
>         at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:877)
>         at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2705)
>         at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42290)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
