[ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12486433 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

> After the upgrade, I think it is cleaner and simpler to treat this as a hard 
> error on the block.

If the only copy of a block has no CRC, shouldn't we still permit folks to 
access the data somehow?  I don't think we should just remove the block in that 
case.

> If our software is so buggy that we need to expect the CRC file not to exist 
> and handle it as an 'expected condition', I think it would be better to spend 
> more time fixing those bugs.

It's not expected, and it should normally cause an exception to be thrown by 
the client.  But folks should still be able to scavenge their data by setting a 
config parameter that permits them to access the data even if it doesn't have a 
checksum.  Wouldn't that be preferable to data loss?
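As a hedged sketch of the escape hatch described above: a config flag that lets clients read a block even when no checksum exists for it, rather than failing hard and losing the data. The class and flag name here are illustrative only, not actual HDFS code or config keys.

```java
import java.util.zip.CRC32;

// Illustrative sketch: gate checksum enforcement behind a "scavenge" flag.
public class ChecksumGate {
    private final boolean ignoreMissingChecksum;  // hypothetical config setting

    public ChecksumGate(boolean ignoreMissingChecksum) {
        this.ignoreMissingChecksum = ignoreMissingChecksum;
    }

    /** Returns the data if it verifies, or if scavenging without a CRC is allowed. */
    public byte[] read(byte[] data, Long expectedCrc) {
        if (expectedCrc == null) {
            // No CRC recorded for this block.
            if (ignoreMissingChecksum) {
                return data;  // scavenge mode: hand back the bytes anyway
            }
            throw new IllegalStateException("missing checksum for block");
        }
        CRC32 crc = new CRC32();
        crc.update(data);
        if (crc.getValue() != expectedCrc) {
            throw new IllegalStateException("checksum mismatch");
        }
        return data;
    }
}
```

With the flag off, a missing CRC is still a hard error; turning it on only changes the missing-checksum case, never a mismatch.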


> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>
> Currently CRCs are handled at the FileSystem level and are transparent to core 
> HDFS. See the recent improvement HADOOP-928 ( which can add checksums to a 
> given filesystem ) for more about it. Though this has served us well, there 
> are a few disadvantages :
> 1) This doubles the namespace in HDFS ( or other filesystem implementations ). 
> In many cases, it nearly doubles the number of blocks. Taking the namenode out 
> of CRCs would nearly double namespace performance in terms of both CPU and 
> memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted 
> blocks. With block-level CRCs, the Datanode can periodically verify the 
> checksums and report corruptions to the namenode so that new replicas can be 
> created.
> We propose to have CRCs maintained for all HDFS data in much the same way as 
> in GFS. I will update the jira with detailed requirements and design. This 
> will include the same guarantees provided by the current implementation and 
> will include an upgrade of current data.
>  
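The periodic verification described in point 2) of the issue could look roughly like the following sketch. All names here (BlockScanner, CorruptionListener, the in-memory maps) are illustrative assumptions, not actual HDFS interfaces.

```java
import java.util.Map;
import java.util.zip.CRC32;

// Illustrative sketch of a datanode-side verification pass: recompute each
// block's CRC and report mismatches so new replicas can be created.
public class BlockScanner {
    public interface CorruptionListener { void onCorrupt(String blockId); }

    /** One verification pass; returns the number of corrupt blocks found. */
    public static int scan(Map<String, byte[]> blocks,    // blockId -> data
                           Map<String, Long> storedCrcs,  // blockId -> recorded CRC
                           CorruptionListener listener) {
        int corrupt = 0;
        for (Map.Entry<String, byte[]> e : blocks.entrySet()) {
            CRC32 crc = new CRC32();
            crc.update(e.getValue());
            Long stored = storedCrcs.get(e.getKey());
            if (stored == null || stored != crc.getValue()) {
                listener.onCorrupt(e.getKey());  // would notify the namenode
                corrupt++;
            }
        }
        return corrupt;
    }
}
```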

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.