[
https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014734#comment-15014734
]
Ming Ma commented on HDFS-9314:
-------------------------------
Thanks [~xiaochen]. This should work.
* The rack-to-hosts mapping {{rackMap}} has already been computed. Do you think
it would be clearer to modify {{pickupReplicaSet}} to take {{rackMap}} as input
instead? That way the logic could be stated directly: a) if the number of
existing replicas is 2, both nodes are candidates; b) if the number of racks is
>= 3, all nodes are candidates; c) if the number of racks is <= 2, all nodes
that share a rack with at least one other node are candidates. (A sketch of
this logic appears after this list.)
* As far as I can tell, this shouldn't impact
{{BlockPlacementPolicyWithUpgradeDomain}}. It would be useful if you could add
a new test case to {{TestReplicationPolicyWithUpgradeDomain}} to make sure it
works. Also, the description of {{BlockPlacementPolicyWithUpgradeDomain}}'s
{{pickupReplicaSet}} will need updating, since {{BlockPlacementPolicyDefault}}'s
{{pickupReplicaSet}} no longer returns only the set of same-rack nodes; it now
returns a superset of the same-rack nodes.
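
To make the first point concrete, here is a minimal, self-contained sketch of
what a {{rackMap}}-based {{pickupReplicaSet}} could look like. The class name
{{ExcessReplicaPicker}} and the generic {{Node}} type are illustrative
stand-ins (the real method lives in {{BlockPlacementPolicyDefault}} and
operates on {{DatanodeStorageInfo}}); this is a sketch of the idea, not the
actual patch:

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the proposed rackMap-based pickupReplicaSet;
// Node is a placeholder for DatanodeStorageInfo. Not the actual patch.
class ExcessReplicaPicker<Node> {
  Collection<Node> pickupReplicaSet(Map<String, List<Node>> rackMap) {
    List<Node> all = new ArrayList<>();
    for (List<Node> perRack : rackMap.values()) {
      all.addAll(perRack);
    }
    // a) Exactly two replicas: either may be removed, so both are candidates.
    // b) Replicas on >= 3 racks: removing any single node still leaves
    //    replicas on at least two racks, so all nodes are candidates.
    if (all.size() == 2 || rackMap.size() >= 3) {
      return all;
    }
    // c) <= 2 racks: only a node that shares its rack with another replica
    //    can be removed without shrinking the set of racks covered.
    List<Node> candidates = new ArrayList<>();
    for (List<Node> perRack : rackMap.values()) {
      if (perRack.size() > 1) {
        candidates.addAll(perRack);
      }
    }
    return candidates;
  }
}
{code}

For the scenario in the description below, {{rackMap}} would be
{r1: [SSD], r2: [DISK], r3: [DISK, DISK]}: three racks, so case b) applies and
the SSD replica on r1 becomes a valid candidate for removal.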
> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
> Key: HDFS-9314
> URL: https://issues.apache.org/jira/browse/HDFS-9314
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Xiao Chen
> Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch,
> HDFS-9314.003.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as
> a limitation in excess replica picking. If the current replicas are on
> {SSD(rack r1), DISK(rack r2), DISK(rack r3), DISK(rack r3)} and the storage
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't
> be able to delete the SSD replica, because the candidates for removal are
> limited to nodes that share a rack and the SSD on r1 has no rack-mate.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)