[ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032488#comment-15032488 ]

Jonathan Leech commented on HBASE-11625:
----------------------------------------

We hit this on our backup / slave cluster. The net effect was that compactions 
failed for several regions and, after a few days, filled up the disks in HDFS. 
As a workaround I set hbase.regionserver.checksum.verify to false and restarted 
HBase, which allowed the compactions to complete and freed up the space. Is 
this setting safe to use in general, or should I be worried about slower HBase 
performance? If the underlying cause of the bug is file corruption, how do I 
go about diagnosing that?
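
For reference, the workaround amounts to something like the following in 
hbase-site.xml on the region servers (using the property named above; a 
restart is required for it to take effect):

    <property>
      <name>hbase.regionserver.checksum.verify</name>
      <value>false</value>
    </property>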

> Reading datablock throws "Invalid HFile block magic" and can not switch to 
> hdfs checksum 
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-11625
>                 URL: https://issues.apache.org/jira/browse/HBASE-11625
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.21, 0.98.4, 0.98.5
>            Reporter: qian wang
>         Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When using HBase checksums, readBlockDataInternal() in HFileBlock.java can 
> encounter file corruption, but it only switches to the HDFS checksum input 
> stream at validateBlockChecksum(). If the data block's header is corrupted, 
> then "b = new HFileBlock(...)" throws the exception "Invalid HFile block 
> magic" and the RPC call fails.
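
As a rough illustration of the code path the description refers to, here is a 
self-contained sketch (not the actual HBase source; the names 
readBlockDataInternal, validateBlockChecksum and the "Invalid HFile block 
magic" message come from the description above, everything else is 
illustrative):

    import java.io.IOException;
    import java.util.Arrays;

    public class BlockReadSketch {
        static final byte[] BLOCK_MAGIC = "DATABLK*".getBytes();

        static byte[] readBlockDataInternal(byte[] onDisk, boolean useHBaseChecksum)
                throws IOException {
            // 1. Parse the block header. A corrupted header fails here, before
            //    any checksum logic runs -- the exception the issue describes.
            if (!Arrays.equals(Arrays.copyOf(onDisk, BLOCK_MAGIC.length), BLOCK_MAGIC)) {
                throw new IOException("Invalid HFile block magic");
            }
            // 2. Only now is the HBase-level checksum validated; only a failure
            //    here falls back to re-reading with HDFS checksums.
            if (useHBaseChecksum && !validateBlockChecksum(onDisk)) {
                return readBlockDataInternal(onDisk, false);
            }
            return onDisk;
        }

        static boolean validateBlockChecksum(byte[] onDisk) {
            return true; // placeholder; the real code verifies per-chunk CRCs
        }

        public static void main(String[] args) throws IOException {
            // A block whose header bytes are garbage throws before any fallback.
            readBlockDataInternal("garbage-header".getBytes(), true);
        }
    }

The point of the sketch is that the fallback to HDFS checksums is only reachable 
after the block header has already been parsed, so header corruption fails the 
read outright instead of triggering the retry.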


