[ https://issues.apache.org/jira/browse/HDFS-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996121#comment-12996121 ]

Hairong Kuang commented on HDFS-1630:
-------------------------------------

I did some experiments with MD5. For every transaction, an MD5 digest (16 bytes) 
is calculated, but to save on disk overhead, only half of the digest (8 bytes) 
is saved to the fsedit log. I observed only a 2% overhead for directory 
creations and deletions in fsnamesystem, with an average transaction size of 
100 bytes. This seems negligible.
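
Roughly, the write path I measured looks like the sketch below (class and 
method names are illustrative only, not the actual FSEditLog code):

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    class EditLogChecksumSketch {
      // Append one serialized transaction followed by the first 8 bytes of
      // its 16-byte MD5 digest, halving the on-disk checksum overhead.
      static void writeTransaction(DataOutputStream editStream, byte[] txnBytes)
          throws IOException {
        try {
          MessageDigest md5 = MessageDigest.getInstance("MD5");
          byte[] digest = md5.digest(txnBytes);   // 16 bytes
          editStream.write(txnBytes);             // the transaction itself
          editStream.write(digest, 0, 8);         // keep only half the digest
        } catch (NoSuchAlgorithmException e) {
          throw new IOException("MD5 not available", e);
        }
      }
    }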

> the stream to the secondary/backup node should also be checksummed to 
> detect...
If we have a checksum for each fsedit transaction, I do not see the need to 
checksum the stream. The only error case that a per-transaction checksum cannot 
detect is the edit log getting truncated at a transaction boundary. However, 
the SNN does validate the edit log size, so truncation can also be detected.
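
On the load side, the per-transaction check would look something like this 
(again an illustrative sketch, not actual code). Note that a clean truncation 
at a transaction boundary leaves every stored digest valid, which is why the 
SNN's size validation is still needed:

    import java.io.IOException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    class EditLogVerifySketch {
      // Recompute MD5 over the transaction bytes read back from the edit log
      // and compare the first 8 bytes against the stored half-digest.
      static void verifyTransaction(byte[] txnBytes, byte[] storedHalfDigest)
          throws IOException {
        try {
          byte[] digest = MessageDigest.getInstance("MD5").digest(txnBytes);
          for (int i = 0; i < 8; i++) {
            if (digest[i] != storedHalfDigest[i]) {
              throw new IOException("fsedit transaction failed MD5 check");
            }
          }
        } catch (NoSuchAlgorithmException e) {
          throw new IOException("MD5 not available", e);
        }
      }
    }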

> Checksum fsedits
> ----------------
>
>                 Key: HDFS-1630
>                 URL: https://issues.apache.org/jira/browse/HDFS-1630
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>
> HDFS-903 calculates an MD5 checksum for a saved image, so that we can verify 
> the integrity of the image at load time.
> The other half of the story is how to verify fsedits. Similarly, we could use 
> the checksum approach, but since an fsedit file grows constantly, a checksum 
> per file does not work. I am thinking of adding a checksum per transaction. 
> Is that doable, or too expensive?

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
