[
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482020#comment-16482020
]
Wei-Chiu Chuang commented on HDFS-8884:
---------------------------------------
If I understand the patch correctly, this jira makes block placement consider
decommissioning nodes, so HDFS-5114 and HDFS-4861 are obsolete.
> Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
> -----------------------------------------------------------
>
> Key: HDFS-8884
> URL: https://issues.apache.org/jira/browse/HDFS-8884
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Yi Liu
> Assignee: Yi Liu
> Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch
>
>
> In the current BlockPlacementPolicyDefault, when choosing a datanode storage
> to place a block, we have the following logic:
> {code}
> final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
>     chosenNode.getStorageInfos());
> int i = 0;
> boolean search = true;
> for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
>     .entrySet().iterator(); search && iter.hasNext(); ) {
>   Map.Entry<StorageType, Integer> entry = iter.next();
>   for (i = 0; i < storages.length; i++) {
>     StorageType type = entry.getKey();
>     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
> {code}
> We will iterate over all storages of the candidate datanode (across two
> nested {{for}} loops, although their bounds are usually small) even when the
> datanode itself is not a good target (e.g. decommissioned, stale, or too
> busy), since currently all of the checks are done in {{addIfIsGoodTarget}}.
> We can fail fast instead: check the datanode-level conditions first, and if
> the datanode is not good, skip shuffling and iterating over its storages
> entirely. This is more efficient, as the sketch below illustrates.
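> A minimal, self-contained sketch of the fail-fast restructuring (simplified
> stand-in types, not the actual patch; {{isGoodDatanode}} is a hypothetical
> helper name):
> {code}
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.Collections;
> import java.util.List;
>
> class FailFastSketch {
>   // Simplified stand-in for a DatanodeDescriptor plus its storages.
>   static class Datanode {
>     final boolean decommissioned, stale, tooBusy;
>     final List<String> storages;
>     Datanode(boolean decommissioned, boolean stale, boolean tooBusy,
>         String... storages) {
>       this.decommissioned = decommissioned;
>       this.stale = stale;
>       this.tooBusy = tooBusy;
>       this.storages = Arrays.asList(storages);
>     }
>   }
>
>   // Datanode-level checks, done once per node instead of once per storage.
>   static boolean isGoodDatanode(Datanode node) {
>     return !node.decommissioned && !node.stale && !node.tooBusy;
>   }
>
>   static String chooseStorage(Datanode node) {
>     // Fail fast: a bad datanode is rejected before we shuffle or iterate
>     // over its storages at all.
>     if (!isGoodDatanode(node)) {
>       return null;
>     }
>     List<String> shuffled = new ArrayList<>(node.storages);
>     Collections.shuffle(shuffled);
>     for (String storage : shuffled) {
>       // Per-storage checks (storage type, remaining capacity, ...) would
>       // remain here; only storage-specific work is left in the loop.
>       return storage;
>     }
>     return null;
>   }
>
>   public static void main(String[] args) {
>     Datanode decommissioning = new Datanode(true, false, false, "s1", "s2");
>     Datanode healthy = new Datanode(false, false, false, "s1", "s2");
>     System.out.println(chooseStorage(decommissioning)); // null: skipped early
>     System.out.println(chooseStorage(healthy));         // one of its storages
>   }
> }
> {code}
> In the real policy the node-level gate would also cover conditions such as
> excluded nodes, rack limits, and load, but the shape is the same: one
> datanode-level check, then the per-storage loop.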