[
https://issues.apache.org/jira/browse/HDFS-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066266#comment-14066266
]
Vinayakumar B commented on HDFS-6700:
-------------------------------------
At first look, my doubt is that after choosing an excess storage, the patch
still adds the DatanodeDescriptor itself to excessReplicateMap and the invalidates list:
{code}
nonExcess.remove(cur);
- addToExcessReplicate(cur, b);
+ addToExcessReplicate(cur.getDatanodeDescriptor(), b);
//
// The 'excessblocks' tracks blocks until we get confirmation
// that the datanode has deleted them; the only way we remove them
// is when we get a "removeBlock" message.
//
// The 'invalidate' list is used to inform the datanode the block
// should be deleted. Items are removed from the invalidate list
// upon giving instructions to the namenode.
//
- addToInvalidates(b, cur);
+ addToInvalidates(b, cur.getDatanodeDescriptor());
{code}
Do you think these also need to be changed?
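For illustration, here is a minimal self-contained sketch (hypothetical class and field names, not the actual BlockManager structures) of the difference between keying the pending-deletion tracking by datanode versus by storage:
{code}
// Hypothetical sketch, not HDFS code: once a specific excess *storage* has
// been chosen, recording the pending deletion only under its datanode loses
// the storage identity.
import java.util.*;

public class ExcessTrackingSketch {

  // Hypothetical stand-ins for DatanodeDescriptor / DatanodeStorageInfo.
  static class Datanode {
    final String uuid;
    Datanode(String uuid) { this.uuid = uuid; }
  }
  static class Storage {
    final Datanode node;
    final String storageId;
    Storage(Datanode node, String storageId) { this.node = node; this.storageId = storageId; }
  }

  public static void main(String[] args) {
    Datanode dn = new Datanode("dn-1");
    Storage disk0 = new Storage(dn, "DS-disk0");
    Storage disk1 = new Storage(dn, "DS-disk1");

    long excessBlockId = 1001L;

    // Variant A: keyed only by datanode UUID (what cur.getDatanodeDescriptor()
    // above amounts to). Which disk holds the excess replica is forgotten.
    Map<String, Set<Long>> excessByDatanode = new HashMap<>();
    excessByDatanode.computeIfAbsent(dn.uuid, k -> new HashSet<>()).add(excessBlockId);

    // Variant B: keyed by storage id, so the chosen storage survives until the
    // deletion is confirmed.
    Map<String, Set<Long>> excessByStorage = new HashMap<>();
    excessByStorage.computeIfAbsent(disk0.storageId, k -> new HashSet<>()).add(excessBlockId);

    System.out.println("keyed by datanode: " + excessByDatanode);
    System.out.println("keyed by storage:  " + excessByStorage);
  }
}
{code}
If the tracking stays keyed by datanode only, the specific storage chosen by the placement policy is effectively lost as soon as the entry is recorded.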
> BlockPlacementPolicy should choose storage but not datanode for deletion
> -----------------------------------------------------------------------
>
> Key: HDFS-6700
> URL: https://issues.apache.org/jira/browse/HDFS-6700
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Tsz Wo Nicholas Sze
> Priority: Minor
> Attachments: h6700_20140717.patch, h6700_20140717b.patch
>
>
> HDFS-2832 changed the datanode storage model from a single storage, which may
> correspond to multiple physical storage media, to a collection of storages,
> with each storage corresponding to a single physical storage medium.
> BlockPlacementPolicy.chooseReplicaToDelete still chooses the replica in terms of
> DatanodeDescriptor rather than DatanodeStorageInfo.
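As a rough illustration of the interface-level change the summary describes (hypothetical, simplified signatures; the real BlockPlacementPolicy method takes more parameters):
{code}
// Hypothetical, simplified sketch, not the actual BlockPlacementPolicy API:
// choosing the replica to delete at the storage level rather than the
// datanode level.
import java.util.Collection;

interface ReplicaDeletionChooserSketch {
  // Before: the chooser can only say *which datanode* loses a replica.
  DatanodeDescriptorSketch chooseReplicaToDelete(Collection<DatanodeDescriptorSketch> candidates);

  // After: the chooser names the specific storage on that datanode, which
  // matters once a datanode exposes several storages (HDFS-2832).
  DatanodeStorageInfoSketch chooseStorageToDelete(Collection<DatanodeStorageInfoSketch> candidates);
}

// Minimal hypothetical stand-ins so the sketch compiles on its own.
class DatanodeDescriptorSketch { String datanodeUuid; }
class DatanodeStorageInfoSketch {
  DatanodeDescriptorSketch node;  // owning datanode
  String storageId;               // identifies one physical storage on it
}
{code}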
--
This message was sent by Atlassian JIRA
(v6.2#6252)