[ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500532 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------


bq. The total size of the packet in bytes. Having this up front might make it 
easier to, e.g., write an NIO-based datanode that uses async io. Ideally we 
could re-write datanode to be async without modifying the on-the-wire protocol.

I am still not clear on which length is missing. The length of the common 
header is a constant, and both OP_READ_BLOCK and OP_WRITE_BLOCK include 
lengths. In the case of WRITE, the HTML doc is out of date; I will update it.
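
For illustration, here is a minimal sketch of the NIO case (assuming a fixed 
4-byte total-length prefix; PacketReader and readPacket are made-up names, not 
actual DataNode code): with the total size up front, the framing loop never has 
to understand op-specific fields to know where a packet ends.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Sketch: with the total packet length up front, an async reader can frame
// a whole packet without parsing any op-specific fields.
public class PacketReader {
  private final ByteBuffer lenBuf = ByteBuffer.allocate(4);
  private ByteBuffer payload;

  /** Returns a complete packet, or null if more bytes are needed. */
  public ByteBuffer readPacket(SocketChannel ch) throws IOException {
    if (payload == null) {
      if (ch.read(lenBuf) < 0) throw new IOException("EOF in length prefix");
      if (lenBuf.hasRemaining()) return null;   // length prefix incomplete
      lenBuf.flip();
      payload = ByteBuffer.allocate(lenBuf.getInt());
      lenBuf.clear();
    }
    if (ch.read(payload) < 0) throw new IOException("EOF in payload");
    if (payload.hasRemaining()) return null;    // packet body incomplete
    ByteBuffer done = payload;
    payload = null;
    done.flip();
    return done;
  }
}
{code}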

bq. I don't see the use case for transmitting the start and length with each 
checksum, rather it seems like it only makes sense once per request, no? So why 
not factor it to the OP-level?

E.g. OP_READ_BLOCK:
Right now start_offset is required only for the first DATA_CHUNK, and length is 
required for the last two DATA_CHUNKs (at least one data chunk for sure) to 
indicate the end of the stream (for whatever reason). Using VInts would bring 
the byte overhead down to 5-6 bytes. So start_offset can be removed from 
DATA_CHUNK. I would prefer to keep the length, so that loops that read from and 
write to these streams can be a little simpler, as in the sketch below.
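
A sketch of that read loop (the 4-byte length field and the zero-length 
end-of-stream sentinel are assumptions here, not the current wire format):

{code}
import java.io.DataInputStream;
import java.io.IOException;

// Sketch: because every DATA_CHUNK carries its own length, the reader is a
// single loop that stops at a zero-length chunk. Checksum handling omitted.
public class ChunkReader {
  public static void readChunks(DataInputStream in, byte[] buf)
      throws IOException {
    while (true) {
      int len = in.readInt();        // per-chunk length (a VInt on the wire
                                     // would cost 1-5 bytes instead of 4)
      if (len == 0) break;           // assumed end-of-stream sentinel
      in.readFully(buf, 0, len);     // chunk data; CRC bytes would follow
      // ... verify checksum and hand data to the caller ...
    }
  }
}
{code}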





> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: bc-no-upgrade-05302007.patch, 
> DfsBlockCrcDesign-05305007.htm
>
>
> Currently CRCs are handled at the FileSystem level and are transparent to core 
> HDFS. See the recent improvement HADOOP-928 (which can add checksums to a 
> given filesystem) for more about it. Though this has served us well, there are 
> a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). 
> In many cases, it nearly doubles the number of blocks. Taking the namenode out 
> of CRCs would nearly double namespace performance, both in terms of CPU and 
> memory.
> 2) Since CRCs are transparent to HDFS, it can not actively detect corrupted 
> blocks. With block-level CRCs, the Datanode can periodically verify the 
> checksums and report corruptions to the namenode so that new replicas can be 
> created.
> We propose to have CRCs maintained for all HDFS data in much the same way as 
> in GFS. I will update the JIRA with detailed requirements and a design. This 
> will include the same guarantees provided by the current implementation and 
> will include an upgrade of current data.
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
