[
https://issues.apache.org/jira/browse/HBASE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599984#comment-14599984
]
Jerry He commented on HBASE-13962:
----------------------------------
You can use the HFile tool to try to read the HFiles in question, given their
full paths:
1. Use the 'hbase hfile' command, or
2. Use 'hbase org.apache.hadoop.hbase.io.hfile.HFile'.
For example:
hbase org.apache.hadoop.hbase.io.hfile.HFile -v -p -m -f /hbase/data/default/table1/581b7f5a4ed620711702e196a739baeb/family/59f4f7692e334f148993cf7204be4513
You can try the command's different options.
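For instance, a few variants that may help narrow things down (flags as I
recall them from the tool's usage output; <hfile-path> is a placeholder for
your full HFile path):

hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f <hfile-path>     # print trailer and file-info metadata only
hbase org.apache.hadoop.hbase.io.hfile.HFile -b -m -f <hfile-path>  # also print block index metadata
hbase org.apache.hadoop.hbase.io.hfile.HFile -k -p -f <hfile-path>  # print KVs while enforcing row order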
Looks like your HFile is corrupted, given the run of \x00 bytes in the header.
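To confirm what is actually on disk, one rough sketch (the offset 10930320 and
header length 33 come from your stack trace; the path is a placeholder for the
suspect HFile) is to hex-dump the bytes where the block read failed:

# Stream the suspect HFile and hex-dump the 33 header bytes at the failing offset.
hadoop fs -cat /hbase2/data/default/table1/<region>/c2/<hfile> \
  | dd bs=1 skip=10930320 count=33 2>/dev/null | od -A d -t x1

A valid data block should start with the 8-byte magic "DATABLK*"; all zeros
there matches the error you are seeing.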
> Invalid HFile block magic
> -------------------------
>
> Key: HBASE-13962
> URL: https://issues.apache.org/jira/browse/HBASE-13962
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.98.12.1
> Environment: hadoop 1.2.1
> hbase 0.98.12.1
> jdk 1.7.0.79
> os : ubuntu 12.04.1 amd64
> Reporter: reaz hedayati
>
> hi everybody,
> our table has some cells loaded through a bulk-load scenario and some cells
> written through increments. We use two jobs to load data into the table: the
> first job applies increments on the reduce side, and the second job writes
> HFiles for bulk loading (sketched below).
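> (For illustration only, the final load step looks roughly like this; the
> HFile output directory is a placeholder, not our real path:)
>
> hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
>   <hfileoutputformat-output-dir> table1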
> We ran the increment job first, then the bulk-load job, then the
> completebulkload job; after that we got this exception:
> 2015-06-24 17:40:01,557 INFO
> [regionserver60020-smallCompactions-1434448531302] regionserver.HRegion:
> Starting compaction on c2 in region table1,\x04C#P1"\x07\x94
> ,1435065082383.0fe38a6c782600e4d46f1f148144b489.
> 2015-06-24 17:40:01,558 INFO
> [regionserver60020-smallCompactions-1434448531302] regionserver.HStore:
> Starting compaction of 3 file(s) in c2 of table1,\x04C#P1"\x07\x94
> ,1435065082383.0fe38a6c782600e4d46f1f148144b489. into
> tmpdir=hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/.tmp,
> totalSize=43.1m
> 2015-06-24 17:40:01,558 DEBUG
> [regionserver60020-smallCompactions-1434448531302]
> regionserver.StoreFileInfo: reference
> 'hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5'
> to region=d21f8ee8b3c915fd9e1c143a0f1892e5
> hfile=6b1249a3b474474db5cf6c664f2d98dc
> 2015-06-24 17:40:01,558 DEBUG
> [regionserver60020-smallCompactions-1434448531302] compactions.Compactor:
> Compacting
> hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5-hdfs://m2/hbase2/data/default/table1/d21f8ee8b3c915fd9e1c143a0f1892e5/c2/6b1249a3b474474db5cf6c664f2d98dc-top,
> keycount=575485, bloomtype=ROW, size=20.8m, encoding=NONE, seqNum=9,
> earliestPutTs=1434875448405
> 2015-06-24 17:40:01,558 DEBUG
> [regionserver60020-smallCompactions-1434448531302] compactions.Compactor:
> Compacting
> hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_,
> keycount=562988, bloomtype=ROW, size=10.1m, encoding=NONE, seqNum=11,
> earliestPutTs=1435076732205
> 2015-06-24 17:40:01,558 DEBUG
> [regionserver60020-smallCompactions-1434448531302] compactions.Compactor:
> Compacting
> hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/565c45ff05b14a419978834c86defa1a_SeqId_12_,
> keycount=554577, bloomtype=ROW, size=12.2m, encoding=NONE, seqNum=12,
> earliestPutTs=1435136926850
> 2015-06-24 17:40:01,560 ERROR
> [regionserver60020-smallCompactions-1434448531302]
> regionserver.CompactSplitThread: Compaction failed Request =
> regionName=table1,\x04C#P1"\x07\x94
> ,1435065082383.0fe38a6c782600e4d46f1f148144b489., storeName=c2, fileCount=3,
> fileSize=43.1m (20.8m, 10.1m, 12.2m), priority=1, time=6077271921381072
> java.io.IOException: Could not seek
> StoreFileScanner[org.apache.hadoop.hbase.io.HalfStoreFileReader$1@1d1eb574,
> cur=null] to key /c2:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:252)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
> at org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:299)
> at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:87)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:112)
> at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1519)
> at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:498)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Failed to read compressed block at 10930320,
> onDiskSizeWithoutHeader=22342, preReadHeaderSize=33, header.length=33, header
> bytes:
> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1549)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1413)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:394)
> at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:539)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:560)
> at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader$Scanner.seekTo(AbstractHFileReader.java:308)
> at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekTo(HalfStoreFileReader.java:205)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> ... 12 more
> Caused by: java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:165)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:252)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1546)
> ... 21 more
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)