[ 
https://issues.apache.org/jira/browse/HDFS-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080487#comment-15080487
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8430:
-------------------------------------------

[~drankye], I think it is a good start for the first implementation.  We may 
improve it later on.  Some ideas:
# Instead of sending all CRCs to the client, send all CRCs to one of the 
datanodes in a block group.  That datanode computes the block MD5s and returns 
them to the client.  Then, the computation becomes distributed.
# We may consider changing the checksum algorithm for replicated files 
(although it is incompatible with old clusters).
## Use CRC64 (or some other linear code) for block checksums instead of MD5.  
The datanode may compute cell CRC64s and then send them to a client (or a 
datanode).  We may combine the cell CRC64s to obtain the block CRC64 since the 
code is linear.  Since datanodes send cell checksums instead of data checksums, 
the network overhead becomes negligible.
## Or simply compute cell checksums for replicated files instead of block 
checksums.
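Idea #1 above (push the MD5-over-CRCs step to a datanode) can be sketched as follows.  This is my own illustration, not the HDFS-8430 patch: `block_md5` stands for the work a designated datanode would do over the cell CRCs it collects from its peers, and `file_md5` for the client hashing only the per-block digests, mirroring the MD5-of-MD5-of-CRC32 layering from HADOOP-3981.  The big-endian 32-bit CRC encoding is an assumption.

```python
import hashlib
import struct

def block_md5(cell_crcs: list[int]) -> bytes:
    """Done on a datanode: MD5 over the block's cell CRCs (assumed
    serialized as big-endian 32-bit values)."""
    m = hashlib.md5()
    for crc in cell_crcs:
        m.update(struct.pack(">I", crc))
    return m.digest()

def file_md5(per_block_md5s: list[bytes]) -> bytes:
    """Done on the client: MD5 over the per-block MD5 digests only."""
    m = hashlib.md5()
    for d in per_block_md5s:
        m.update(d)
    return m.digest()

# Example: two block groups, each contributing a few cell CRCs.
blocks = [[0x1A2B3C4D, 0x5E6F7081], [0x9192A3B4, 0xC5D6E7F8]]
digest = file_md5([block_md5(crcs) for crcs in blocks])
print(digest.hex())
```

The point of the split is bandwidth: the client receives one 16-byte digest per block group rather than every cell CRC.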
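The linearity argument in idea #2.1 can be demonstrated concretely.  A rough sketch, using Python's CRC-32 as a stand-in for the suggested CRC64 (the same algebra applies to any linear CRC): writing block = (cellA || zeros) XOR (zeros || cellB), linearity gives crc(block) = crc(cellA || zeros) ^ crc(zeros || cellB) ^ crc(all-zeros), where the last term corrects for the init/final-xor constants.  The helper name is mine.

```python
import zlib

def crc_of_padded(cell: bytes, offset: int, total_len: int) -> int:
    """CRC of the cell placed at `offset` inside an otherwise-zero block.
    (A real implementation would shift the cell CRC algebraically instead
    of hashing the zero padding.)"""
    padded = b"\x00" * offset + cell + b"\x00" * (total_len - offset - len(cell))
    return zlib.crc32(padded)

cell_a = b"hello striped "
cell_b = b"erasure coding"
total = len(cell_a) + len(cell_b)

combined = (
    crc_of_padded(cell_a, 0, total)
    ^ crc_of_padded(cell_b, len(cell_a), total)
    ^ zlib.crc32(b"\x00" * total)  # correction term for init/final xor
)
direct = zlib.crc32(cell_a + cell_b)
assert combined == direct
```

In practice the zero padding would not be hashed byte by byte; shifting a CRC past n zero bytes can be done in O(log n) with GF(2) matrix exponentiation (as zlib's crc32_combine does), which is what makes combining cell CRCs into a block CRC cheap on the datanode.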

> Erasure coding: update DFSClient.getFileChecksum() logic for stripe files
> -------------------------------------------------------------------------
>
>                 Key: HDFS-8430
>                 URL: https://issues.apache.org/jira/browse/HDFS-8430
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: HDFS-7285
>            Reporter: Walter Su
>            Assignee: Kai Zheng
>         Attachments: HDFS-8430-poc1.patch
>
>
> HADOOP-3981 introduces a distributed file checksum algorithm. It's designed 
> for replicated blocks.
> {{DFSClient.getFileChecksum()}} needs some updates so it can work for striped 
> block groups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
