[
https://issues.apache.org/jira/browse/HDFS-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14594201#comment-14594201
]
Jing Zhao commented on HDFS-8619:
---------------------------------
The scenario mentioned in the description has been fixed by HDFS-8543: the
{{countNodes}} method actually checks {{excessReplicateMap}} for the excess
replica information, which is correctly updated with the fix from HDFS-8543.
However, corrupted internal blocks currently cannot be correctly tracked or
counted. Specifically, the {{CorruptReplicasMap}} should track all the DNs with
corrupted internal blocks for the same striped block group, so that
{{countNodes}} can use this information to compute the number of live and
corrupt replicas. I will use this jira to fix this part and to add more unit
tests covering reporting bad blocks from DFSClient to the NN.
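The bookkeeping described above can be sketched as follows. This is a hypothetical, simplified illustration, not the actual HDFS implementation: class and method names ({{CorruptTrackingSketch}}, {{addCorruptReplica}}, the {{int[]}} return) are assumptions, standing in for the real {{CorruptReplicasMap}} / {{countNodes}} machinery.

```java
import java.util.*;

// Hypothetical sketch of the bookkeeping described in the comment: track,
// per striped block group, which datanodes reported a corrupt internal
// block, so that a countNodes-style method can classify each reported
// replica as live or corrupt. Names are illustrative, not the HDFS API.
public class CorruptTrackingSketch {
    // blockGroupId -> datanode ids holding a corrupt internal block
    private final Map<Long, Set<Integer>> corruptMap = new HashMap<>();

    public void addCorruptReplica(long blockGroupId, int datanodeId) {
        corruptMap.computeIfAbsent(blockGroupId, k -> new HashSet<>())
                  .add(datanodeId);
    }

    // Split the reported replicas into {live, corrupt} counts.
    public int[] countNodes(long blockGroupId, List<Integer> reportedDns) {
        Set<Integer> corrupt =
            corruptMap.getOrDefault(blockGroupId, Collections.emptySet());
        int live = 0, bad = 0;
        for (int dn : reportedDns) {
            if (corrupt.contains(dn)) bad++;
            else live++;
        }
        return new int[]{live, bad};
    }
}
```

With this per-block-group map, a bad-block report from a DFSClient only needs to record the reporting DN once; the counting pass then stays a simple membership check per replica.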
> Erasure Coding: revisit replica counting for striped blocks
> -----------------------------------------------------------
>
> Key: HDFS-8619
> URL: https://issues.apache.org/jira/browse/HDFS-8619
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Jing Zhao
> Assignee: Jing Zhao
>
> Currently we use the same {{BlockManager#countNodes}} method for striped
> blocks, which simply treats each internal block as a replica. However, for a
> striped block we may have more complicated scenarios, e.g., multiple
> replicas of the first internal block while some other internal blocks are
> missing. Using the current {{countNodes}} method can lead to wrong decisions
> in these scenarios.
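The pitfall in the description can be shown with a minimal sketch (hypothetical names, not HDFS code): for a striped block group, health depends on how many distinct internal blocks are live, not on the raw replica count.

```java
import java.util.*;

// Hypothetical sketch: why a plain replica count misleads for striped
// blocks. A striped block group (e.g. RS-6-3) has 9 distinct internal
// blocks; what matters is how many DISTINCT internal blocks are live,
// not how many replicas were reported in total.
public class StripedCountSketch {
    // Each reported replica is {internalBlockIndex, datanodeId}.
    public static int rawReplicaCount(List<int[]> replicas) {
        return replicas.size();
    }

    public static int distinctInternalBlocks(List<int[]> replicas) {
        Set<Integer> indices = new HashSet<>();
        for (int[] r : replicas) indices.add(r[0]);
        return indices.size();
    }

    public static void main(String[] args) {
        // Three copies of internal block 0; all other internal blocks missing.
        List<int[]> replicas = Arrays.asList(
            new int[]{0, 101}, new int[]{0, 102}, new int[]{0, 103});
        System.out.println("raw count = " + rawReplicaCount(replicas));        // 3
        System.out.println("distinct  = " + distinctInternalBlocks(replicas)); // 1
    }
}
```

A replica-oriented count reports 3 and the group looks healthy, while only 1 of the 9 internal blocks actually survives, which is exactly the wrong decision the description warns about.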
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)