[
https://issues.apache.org/jira/browse/HDFS-9734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Zhe Zhang updated HDFS-9734:
----------------------------
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
Target Version/s: 3.0.0
Status: Resolved (was: Patch Available)
Committed to trunk. Thanks Kai for the work!
> Refactoring of checksum failure report related codes
> ----------------------------------------------------
>
> Key: HDFS-9734
> URL: https://issues.apache.org/jira/browse/HDFS-9734
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Kai Zheng
> Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12744-v1.patch, HADOOP-12744-v2.patch,
> HDFS-9734-v3.patch, HDFS-9734-v4.patch, HDFS-9734-v5.patch,
> HDFS-9734-v6.patch, HDFS-9734-v7.patch, HDFS-9734-v8.patch
>
>
> This came out of a discussion with [~jingzhao] in HDFS-9646. There is some
> duplicate code between the client and datanode sides:
> {code}
> private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
>     Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
>   Set<DatanodeInfo> dnSet = corruptionMap.get(blk);
>   if (dnSet == null) {
>     dnSet = new HashSet<>();
>     corruptionMap.put(blk, dnSet);
>   }
>   if (!dnSet.contains(node)) {
>     dnSet.add(node);
>   }
> }
> {code}
> This would resolve the duplication and also simplify the code a bit.
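> For illustration only, a minimal sketch of how a shared helper might look; the class name {{CorruptedBlocks}} and its placement are assumptions here, not necessarily what the committed patch does. It uses {{Map#computeIfAbsent}} so the null check and the redundant {{contains()}} call both go away, and both the client- and datanode-side readers could delegate to it:
> {code}
> import java.util.HashMap;
> import java.util.HashSet;
> import java.util.Map;
> import java.util.Set;
>
> import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
> import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
>
> // Illustrative helper only (name and placement are assumptions, not the committed API).
> public class CorruptedBlocks {
>   private final Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap = new HashMap<>();
>
>   // Record that the given datanode served a corrupt replica of the block.
>   // computeIfAbsent replaces the get/null-check/put pattern, and Set#add is
>   // already a no-op for duplicates, so no contains() check is needed.
>   public void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node) {
>     corruptionMap.computeIfAbsent(blk, k -> new HashSet<>()).add(node);
>   }
>
>   public Map<ExtendedBlock, Set<DatanodeInfo>> getCorruptionMap() {
>     return corruptionMap;
>   }
> }
> {code}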
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)