[
https://issues.apache.org/jira/browse/HBASE-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13813640#comment-13813640
]
Ted Yu commented on HBASE-9818:
-------------------------------
Encountered NPE in the else branch (with patch v1):
{code}
2013-11-05 04:23:17,524 DEBUG [Thread-776] compactions.ExploringCompactionPolicy(122): Exploring compaction algorithm has selected 3 files of size 3318 starting at candidate #24 after considering 1884 permutations with 1884 in ratio
2013-11-05 04:23:17,524 DEBUG [Thread-776] regionserver.HStore(1359): fa933591407f5591c2410908ea5065f9 - colfamily11: Initiating minor compaction
2013-11-05 04:23:17,524 INFO [Thread-776] regionserver.HRegion(1288): Starting compaction on colfamily11 in region testRowMutationMultiThreads,,1383625393150.fa933591407f5591c2410908ea5065f9.
2013-11-05 04:23:17,524 INFO [Thread-776] regionserver.HStore(1001): Starting compaction of 3 file(s) in colfamily11 of testRowMutationMultiThreads,,1383625393150.fa933591407f5591c2410908ea5065f9. into tmpdir=/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/.tmp, totalSize=3.2 K
2013-11-05 04:23:17,525 DEBUG [Thread-776] compactions.Compactor(147): Compacting file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/94f3260116304004853721be928701f9, keycount=1, bloomtype=ROW, size=1.1 K, encoding=NONE, seqNum=1331
2013-11-05 04:23:17,525 DEBUG [Thread-776] compactions.Compactor(147): Compacting file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/795449ca77094e769ac390749c89f92b, keycount=1, bloomtype=ROW, size=1.1 K, encoding=NONE, seqNum=1342
2013-11-05 04:23:17,525 DEBUG [Thread-776] compactions.Compactor(147): Compacting file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/9e2cc96151754cedbc0d461653cbea63, keycount=1, bloomtype=ROW, size=1.1 K, encoding=NONE, seqNum=1353
2013-11-05 04:23:17,531 DEBUG [Thread-776] regionserver.HRegionFileSystem(338): Committing store file /grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/.tmp/3ac822791e534965b35b0b3d5022881a as /grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/3ac822791e534965b35b0b3d5022881a
2013-11-05 04:23:17,532 DEBUG [Thread-776] regionserver.HStore(1430): Removing store files after compaction...
Exception in thread "Thread-777" java.lang.NullPointerException
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1230)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1493)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1324)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:360)
2013-11-05 04:23:17,533 DEBUG [Thread-776] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/94f3260116304004853721be928701f9, to file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/archive/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/94f3260116304004853721be928701f9
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:776)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:234)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:149)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:172)
    at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1688)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3427)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1746)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1738)
2013-11-05 04:23:17,534 DEBUG [Thread-776] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/795449ca77094e769ac390749c89f92b, to file:/grid/0/dev/ty/trunk/hbase-server/target/test-data/1a023b65-addc-4c5d-acaf-e346220aff31/archive/data/default/testRowMutationMultiThreads/fa933591407f5591c2410908ea5065f9/colfamily11/795449ca77094e769ac390749c89f92b
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1715)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4364)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4339)
    at org.apache.hadoop.hbase.regionserver.TestAtomicOperation$2.run(TestAtomicOperation.java:360)
{code}
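
Reading the interleaved output above: Thread-777 is still serving a get (TestAtomicOperation line 360) against store files that Thread-776 has just compacted and archived, so the scanner's underlying input stream looks like it was pulled away mid-read. For illustration only, here is a small standalone sketch of that shape of race; the class and method names are invented stand-ins, not the actual HBase code or the change in patch v1:
{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CompactionReadRaceSketch {

  /** Invented stand-in for the stream wrapper; holds one underlying stream. */
  static class StreamWrapper {
    private volatile InputStream stream =
        new ByteArrayInputStream(new byte[] {1, 2, 3, 4});

    InputStream getStream(boolean useHBaseChecksum) {
      // The real wrapper picks between streams based on the flag; simplified here.
      return stream;
    }

    void close() {
      stream = null; // simulates the reader being torn down while still in use
    }
  }

  /** Invented stand-in for readAtOffset: dereferences the stream it is handed. */
  static int readAtOffset(StreamWrapper wrapper, boolean useHBaseChecksum)
      throws IOException {
    InputStream istream = wrapper.getStream(useHBaseChecksum);
    // With no null guard, this throws NullPointerException once close() has run.
    return istream.read();
  }

  public static void main(String[] args) throws Exception {
    final StreamWrapper wrapper = new StreamWrapper();

    // "Thread-777": keeps reading, like the get/scan in TestAtomicOperation.
    Thread reader = new Thread(() -> {
      try {
        while (true) {
          readAtOffset(wrapper, true);
        }
      } catch (NullPointerException e) {
        System.out.println("Reproduced: NPE reading through a closed wrapper");
      } catch (IOException e) {
        // not expected in this toy example
      }
    });
    reader.start();

    // "Thread-776": compacts and archives the store files, closing their readers.
    Thread.sleep(10);
    wrapper.close();
    reader.join();
  }
}
{code}
In the toy, either a null check in readAtOffset or having the wrapper hand back a still-open stream makes the reader finish cleanly; that is roughly the kind of guard the real fix needs to provide, wherever it ends up living.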
> NPE in HFileBlock#AbstractFSReader#readAtOffset
> -----------------------------------------------
>
> Key: HBASE-9818
> URL: https://issues.apache.org/jira/browse/HBASE-9818
> Project: HBase
> Issue Type: Bug
> Reporter: Jimmy Xiang
> Attachments: 9818-v1.txt
>
>
> HFileBlock#istream seems to be null. I was wondering whether we should hide
> FSDataInputStreamWrapper#useHBaseChecksum (a rough sketch of what that could look like follows the trace below).
> By the way, this happened while online schema change (encoding) was enabled:
> {noformat}
> 2013-10-22 10:58:43,321 ERROR [RpcServer.handler=28,port=36020] regionserver.HRegionServer:
> java.lang.NullPointerException
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1200)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1436)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:503)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:553)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
>     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
>     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
>     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
>     at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:603)
>     at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:476)
>     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3546)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3616)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3494)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3485)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3079)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>     at java.lang.Thread.run(Thread.java:724)
> 2013-10-22 10:58:43,665 ERROR [RpcServer.handler=23,port=36020] regionserver.HRegionServer:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 53438 But the nextCallSeq got from client: 53437; request=scanner_id: 1252577470624375060 number_of_rows: 100 close_scanner: false next_call_seq: 53437
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3030)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:27022)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>     at java.lang.Thread.run(Thread.java:724)
> {noformat}
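
As an aside on the description's question about hiding FSDataInputStreamWrapper#useHBaseChecksum: one reading is that callers should not have to pass the checksum flag around to pick a stream; the wrapper could make that choice internally and only ever hand out a stream. A rough sketch under that assumption (names and shape invented, not the actual HBase wrapper API):
{code}
import java.io.InputStream;

/**
 * Invented sketch of what "hiding" the flag could look like: the two
 * underlying streams and the boolean stay private, and callers only
 * ever ask the wrapper for the stream to read from.
 */
public class HiddenChecksumFlagSketch {
  private final InputStream primaryStream;   // stream used while the flag is on
  private final InputStream fallbackStream;  // stream used once the flag is off
  private volatile boolean useHBaseChecksum = true; // internal detail, never exposed

  public HiddenChecksumFlagSketch(InputStream primaryStream, InputStream fallbackStream) {
    this.primaryStream = primaryStream;
    this.fallbackStream = fallbackStream;
  }

  /** The only accessor callers see; no boolean parameter to thread through readers. */
  public InputStream getStream() {
    return useHBaseChecksum ? primaryStream : fallbackStream;
  }

  /** The wrapper itself reacts to a checksum problem and switches streams. */
  public void onChecksumFailure() {
    useHBaseChecksum = false;
  }
}
{code}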