[ https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021195#comment-15021195 ]
Xiao Chen commented on HDFS-9314:
---------------------------------
Thanks for the review, Ming and Walter.
Patch 5 addressed both points in Ming's comments and Walter's comment #1.
[~walter.k.su], for {{BlockPlacementPolicyWithNodeGroup}}, I think the existing
code is okay. IIUC, the code splits the nodes by node-group name inside
{{pickupReplicaSet}}. After the split it returns {{moreThanOne}} if that set is
non-empty, otherwise {{exactlyOne}}. This was done by directly copying the old
implementation rather than calling {{super.pickupReplicaSet}}, so we don't need
the default {{pickupReplicaSet}} in either case now, correct?
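For reference, a minimal sketch of that split as I read it (the generic types, the {{nodeGroupOf}} map, and the helper class name are illustrative only, not the actual HDFS signature):

{code:java}
// Rough sketch of the node-group split described above; the identifiers and the
// nodeGroupOf map are illustrative only, not the real HDFS fields or signature.
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NodeGroupSplitSketch {
  /** Prefer deleting from node groups that still hold more than one replica. */
  static <N> Collection<N> pickupReplicaSet(Collection<N> candidates,
                                            Map<N, String> nodeGroupOf) {
    // Bucket the candidates by node-group name.
    Map<String, List<N>> byGroup = new HashMap<>();
    for (N node : candidates) {
      byGroup.computeIfAbsent(nodeGroupOf.get(node), k -> new ArrayList<>()).add(node);
    }
    List<N> moreThanOne = new ArrayList<>();
    List<N> exactlyOne = new ArrayList<>();
    for (List<N> group : byGroup.values()) {
      (group.size() > 1 ? moreThanOne : exactlyOne).addAll(group);
    }
    // Return moreThanOne if it is non-empty, otherwise exactlyOne.
    return moreThanOne.isEmpty() ? exactlyOne : moreThanOne;
  }
}
{code}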
> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
> Key: HDFS-9314
> URL: https://issues.apache.org/jira/browse/HDFS-9314
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Xiao Chen
> Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch,
> HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as
> a limitation in excess replica picking. If the current replicas are on
> {SSD(rack r1), DISK(rack r2), DISK(rack r3), DISK(rack r3)} and the storage
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't
> be able to delete the SSD replica.
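To make the limitation concrete: with the replicas above and a HOT policy expecting three DISK replicas, a purely rack-count-based split puts only the two rack-r3 DISK replicas into {{moreThanOne}}, so the lone SSD replica on rack r1 is never offered for deletion. A toy sketch of that split (the {{Replica}} record and rack names are made up for illustration):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration only: shows why a rack-count-based split never offers the SSD
// replica as a deletion candidate. The types and names are invented for this example.
class ExcessReplicaSketch {
  record Replica(String rack, String storageType) {}

  public static void main(String[] args) {
    List<Replica> replicas = List.of(
        new Replica("r1", "SSD"),
        new Replica("r2", "DISK"),
        new Replica("r3", "DISK"),
        new Replica("r3", "DISK"));

    // Count replicas per rack.
    Map<String, Long> perRack = new HashMap<>();
    for (Replica r : replicas) {
      perRack.merge(r.rack(), 1L, Long::sum);
    }

    // Split: replicas on racks holding more than one copy vs. exactly one copy.
    List<Replica> moreThanOne = new ArrayList<>();
    List<Replica> exactlyOne = new ArrayList<>();
    for (Replica r : replicas) {
      (perRack.get(r.rack()) > 1 ? moreThanOne : exactlyOne).add(r);
    }

    // Deletion is picked from moreThanOne when it is non-empty, so only the two
    // rack-r3 DISK replicas are candidates; the SSD on r1 is never removed.
    System.out.println("deletion candidates = " + moreThanOne);
  }
}
{code}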