[
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848654#comment-16848654
]
Ayush Saxena commented on HDFS-14512:
-------------------------------------
I am able to reproduce the issue, and I have attached a test that reproduces
it; a minimal sketch along the same lines is below.
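A minimal sketch of such a test, assuming a MiniDFSCluster where every
datanode has one SSD and one DISK volume; the cluster layout, node counts, and
class name are illustrative, and the real test is in the attached
TestToRepro.patch:

{code:java}
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestFavoredNodesStoragePolicy {

  @Test
  public void testOneSsdWithFavoredNodes() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Give every datanode one SSD and one DISK volume.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(5)
        .storageTypes(new StorageType[]{StorageType.SSD, StorageType.DISK})
        .build();
    try {
      cluster.waitActive();
      DistributedFileSystem dfs = cluster.getFileSystem();
      Path dir = new Path("/pathA");
      dfs.mkdirs(dir);
      dfs.setStoragePolicy(dir, "ONE_SSD");
      // Favor two of the datanodes for the write.
      InetSocketAddress[] favoredNodes = new InetSocketAddress[]{
          cluster.getDataNodes().get(0).getXferAddress(),
          cluster.getDataNodes().get(1).getXferAddress()};
      FSDataOutputStream out = dfs.create(new Path(dir, "file"),
          FsPermission.getFileDefault(), true, 4096, (short) 3,
          dfs.getDefaultBlockSize(), null, favoredNodes);
      out.write(new byte[1024]);
      out.close();
      // Under ONE_SSD exactly one replica should land on SSD; with favored
      // nodes, two replicas can end up on SSD, violating the policy.
      // (Assert on the located block's storage types here.)
    } finally {
      cluster.shutdown();
    }
  }
}
{code}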
At first I didn't believe it behaved this way, so I dug in to find the reason.
When favored nodes are specified, the storage types are chosen per the block
placement policy (BPP) among the favored nodes; once the favored nodes are
exhausted, it falls back to the existing BPP from the start, disregarding the
storage types already chosen and carrying forward only the remaining target
count. I don't think that is a good way to go. It should carry the existing
storageType array forward and choose according to it, rather than computing a
fresh one from the BPP all over again. If there is no specific reason for this
behavior and no one objects to the newer approach, I will post a fix soon that
passes the storageTypes along as well; a rough sketch of the idea is below.
HDFS-9393 may be a little related; worth linking it and checking whether they
had any reason for not doing this.
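For illustration only, here is the shape of the change I have in mind. This is
a simplified stand-in, not the actual BlockPlacementPolicyDefault code; the
class, method, and field names are all hypothetical:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.StorageType;

/** Illustrative sketch only; not the real BlockPlacementPolicyDefault. */
class FavoredNodesPlacementSketch {

  /** Stand-in for a datanode with a set of available storage types. */
  static class Node {
    final List<StorageType> available;
    Node(List<StorageType> available) { this.available = available; }
    boolean has(StorageType t) { return available.contains(t); }
  }

  /**
   * Current behavior (simplified): the storage types consumed while placing
   * on favored nodes are forgotten, and the fallback recomputes a fresh type
   * list from the policy for the remaining count, so ONE_SSD can end up with
   * two SSD replicas. Proposed behavior: keep one mutable list of required
   * storage types and hand the leftover entries to the fallback path.
   */
  static List<Node> chooseTargets(List<Node> favored, List<Node> all,
      List<StorageType> requiredTypes /* e.g. [SSD, DISK, DISK] */) {
    List<Node> chosen = new ArrayList<>();
    // Phase 1: satisfy as many required types as possible from favored nodes.
    for (Node fav : favored) {
      if (requiredTypes.isEmpty()) {
        break;
      }
      for (StorageType t : new ArrayList<>(requiredTypes)) {
        if (fav.has(t)) {
          chosen.add(fav);
          requiredTypes.remove(t); // consume this storage type
          break;
        }
      }
    }
    // Phase 2 (the fix): fall back with the REMAINING types, rather than
    // recomputing a new type list from the policy for the remaining count.
    for (Node candidate : all) {
      if (requiredTypes.isEmpty()) {
        break;
      }
      if (chosen.contains(candidate)) {
        continue;
      }
      StorageType next = requiredTypes.get(0);
      if (candidate.has(next)) {
        chosen.add(candidate);
        requiredTypes.remove(0);
      }
    }
    return chosen;
  }
}
{code}

With ONE_SSD ([SSD, DISK, DISK]) and two favored nodes that each have SSD and
DISK volumes, phase 1 consumes the single SSD entry plus one DISK entry, so
phase 2 can only add the one remaining DISK replica.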
> ONE_SSD policy will be violated while writing data with
> DistributedFileSystem.create(....favoredNodes)
> ----------------------------------------------------------------------------------------------------
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Shen Yinjie
> Priority: Major
> Attachments: TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A;
> 2. Have the client write data to path A via
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes
> parameter.
> Then the three replicas of a file under this path are located on 2 SSD and
> 1 DISK, which violates the ONE_SSD policy.
> Not sure if I am clear?
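For completeness, the reported violation can be confirmed by inspecting the
storage types of the file's located block after the write. A minimal sketch,
assuming the "dfs" client and "/pathA/file" from the repro above; it goes
through the private DFSClient API, as HDFS tests commonly do:

{code:java}
// Sketch: count how many replicas of the first block landed on SSD.
LocatedBlocks blocks = dfs.getClient()
    .getLocatedBlocks("/pathA/file", 0, Long.MAX_VALUE);
StorageType[] types = blocks.get(0).getStorageTypes();
int ssdCount = 0;
for (StorageType t : types) {
  if (t == StorageType.SSD) {
    ssdCount++;
  }
}
// Under ONE_SSD this should be exactly 1; the bug can make it 2.
System.out.println("Replicas on SSD: " + ssdCount);
{code}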