[ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-1134:
---------------------------------

    Attachment: BlockLevelCrc-07062007.patch

Attaching the latest patch (07062007) for weekend perusal :)

This one extends FSInputChecker for the BlockReader class in DFSClient.
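
The read-side pattern is roughly the following (a minimal self-contained
sketch: ChunkedInputChecker and SimpleBlockReader are made-up stand-ins, and
the real FSInputChecker/BlockReader interfaces in the patch differ):

  import java.io.DataInputStream;
  import java.io.IOException;
  import java.util.zip.CRC32;

  // Simplified stand-in for the FSInputChecker pattern: the subclass
  // fetches one chunk and its stored CRC, and the base class verifies
  // the chunk before handing it to the caller.
  abstract class ChunkedInputChecker {
    protected abstract int readChunk(byte[] buf) throws IOException;
    protected abstract int storedCrc();

    public int read(byte[] buf) throws IOException {
      int n = readChunk(buf);
      if (n <= 0) {
        return -1;                          // end of block
      }
      CRC32 crc = new CRC32();
      crc.update(buf, 0, n);
      if ((int) crc.getValue() != storedCrc()) {
        throw new IOException("CRC mismatch in block chunk");
      }
      return n;
    }
  }

  // BlockReader-like subclass; assumes a hypothetical wire layout where
  // each data chunk is followed by its 4-byte CRC.
  class SimpleBlockReader extends ChunkedInputChecker {
    private final DataInputStream in;
    private final int bytesPerChecksum;
    private int pendingCrc;

    SimpleBlockReader(DataInputStream in, int bytesPerChecksum) {
      this.in = in;
      this.bytesPerChecksum = bytesPerChecksum;
    }

    protected int readChunk(byte[] buf) throws IOException {
      int len = Math.min(buf.length, bytesPerChecksum);
      int n = 0;
      while (n < len) {                     // fill a whole chunk if we can
        int r = in.read(buf, n, len - n);
        if (r < 0) break;                   // EOF: last chunk may be short
        n += r;
      }
      if (n > 0) pendingCrc = in.readInt(); // CRC trails its chunk
      return n;
    }

    protected int storedCrc() { return pendingCrc; }
  }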

This does not use FSOutputSummer.

Doug, I don't think it is really necessary for my application. Using it would 
probably remove 5 lines and add whatever is required for using the interface. 
Also, for writes smaller than bytesPerChecksum, FSOutputSummer buffers the data 
once more. This extra buffering is more or less required on the InputChecker 
side, but not really while writing, since the write path is very simple. I hope 
that is ok. I wonder how big typical map-reduce writes are?
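
For contrast, the write side can checksum each slice straight out of the
caller's buffer, roughly like this (again a made-up sketch rather than the
patch code; a short final slice simply gets its own CRC):

  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.util.zip.CRC32;

  // Write-path sketch: checksum each bytesPerChecksum-sized slice of the
  // caller's buffer directly and emit <chunk, 4-byte CRC> pairs, with no
  // copy into an intermediate buffer (unlike FSOutputSummer's staging).
  class SimpleChecksumWriter {
    private final DataOutputStream out;
    private final int bytesPerChecksum;

    SimpleChecksumWriter(DataOutputStream out, int bytesPerChecksum) {
      this.out = out;
      this.bytesPerChecksum = bytesPerChecksum;
    }

    public void write(byte[] buf, int off, int len) throws IOException {
      while (len > 0) {
        int chunk = Math.min(len, bytesPerChecksum);
        CRC32 crc = new CRC32();
        crc.update(buf, off, chunk);
        out.write(buf, off, chunk);
        out.writeInt((int) crc.getValue()); // CRC trails its chunk
        off += chunk;
        len -= chunk;
      }
    }
  }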

> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: BlockLevelCrc-07032007.patch, 
> BlockLevelCrc-07052007.patch, BlockLevelCrc-07062007.patch, 
> BlockLevelCrc-07062007.patch, DfsBlockCrcDesign-05305007.htm, readBuffer.java
>
>
> Currently CRCs are handled at the FileSystem level and are transparent to 
> core HDFS. See the recent improvement HADOOP-928 (which can add checksums to 
> a given filesystem) for more about it. Though this has served us well, there 
> are a few disadvantages:
> 1) It doubles the namespace in HDFS (or other filesystem implementations). 
> In many cases, it nearly doubles the number of blocks. Taking the namenode 
> out of CRCs would nearly double namespace performance, in terms of both CPU 
> and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted 
> blocks. With block-level CRCs, the datanode can periodically verify the 
> checksums and report corruptions to the namenode so that new replicas can be 
> created (sketched below).
> We propose to maintain CRCs for all HDFS data in much the same way as in 
> GFS. I will update the jira with detailed requirements and design. This will 
> provide the same guarantees as the current implementation and will include 
> an upgrade of the current data.
>  
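
To make point 2 of the quoted description concrete: periodic verification
amounts to rereading each block and comparing recomputed CRCs against the
stored ones, along these lines (a generic sketch with made-up names; it
assumes the checksum file is just a sequence of 4-byte CRCs, one per
bytesPerChecksum bytes of block data):

  import java.io.DataInputStream;
  import java.io.FileInputStream;
  import java.io.IOException;
  import java.io.InputStream;
  import java.util.zip.CRC32;

  // Verification sketch: recompute the CRC of every chunk of a block file
  // and compare it with the stored CRC. A datanode would run this in the
  // background and report any corrupt block to the namenode so that a
  // fresh replica can be created from another copy.
  class BlockVerifier {
    static boolean verify(String blockFile, String crcFile,
                          int bytesPerChecksum) throws IOException {
      FileInputStream data = new FileInputStream(blockFile);
      DataInputStream crcs = new DataInputStream(new FileInputStream(crcFile));
      try {
        byte[] buf = new byte[bytesPerChecksum];
        int n;
        while ((n = fill(data, buf)) > 0) {
          CRC32 crc = new CRC32();
          crc.update(buf, 0, n);
          if ((int) crc.getValue() != crcs.readInt()) {
            return false;                   // corrupt chunk found
          }
        }
        return true;
      } finally {
        data.close();
        crcs.close();
      }
    }

    // Read as much of buf as the stream allows (last chunk may be short).
    private static int fill(InputStream in, byte[] buf) throws IOException {
      int n = 0;
      while (n < buf.length) {
        int r = in.read(buf, n, buf.length - n);
        if (r < 0) break;
        n += r;
      }
      return n;
    }
  }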

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
