[
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284207#comment-15284207
]
Rakesh R commented on HDFS-9833:
--------------------------------
The attached patch addresses only one target datanode failure at a time and
reconstructs it. That is, while iterating over the blockGroup, if it finds a
missing index or hits an exception, it reconstructs the data for that index and
recalculates the block checksum for that block. How about optimizing the
checksum recomputation logic to handle multiple datanode failures and
reconstruct them together, in another sub-task?
> Erasure coding: recomputing block checksum on the fly by reconstructing the
> missed/corrupt block data
> -----------------------------------------------------------------------------------------------------
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Kai Zheng
> Assignee: Rakesh R
> Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum
> even when some of the striped blocks are missing, we need to consider
> recomputing the block checksum on the fly for the missed/corrupt blocks. To
> recompute a block checksum, the block data needs to be reconstructed by
> erasure decoding, and most of the code needed for block reconstruction could
> be borrowed from HDFS-9719, the refactoring of the existing
> {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be
> written out to target datanodes, but in this case the remote write isn't
> necessary, as the reconstructed block data is only used to recompute the
> checksum.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]