[ https://issues.apache.org/jira/browse/HDFS-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14791500#comment-14791500 ]
Walter Su commented on HDFS-9090:
---------------------------------
bq. Based on that, how about adding one parameter, perhaps named "localityLevel", to chooseTarget?
HDFS-8390 tries to do the same thing. But it isn't worth complicating the
default policy for something that isn't a popular demand. If 10 users have 10
special needs, we end up with 2^10 combinations; it would be a disaster to put
all of them in the default policy.
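The pluggable route already covers this kind of special need: a deployment can
ship its own policy (in real HDFS, a subclass of BlockPlacementPolicyDefault
selected via dfs.block.replicator.classname) instead of growing the default
one. Below is a minimal, self-contained sketch of the idea only; SimpleNode,
AvoidLocalPlacementPolicy, and chooseTargets are illustrative stand-ins, not
actual HDFS APIs.
{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative stand-in for an HDFS datanode descriptor.
final class SimpleNode {
    final String host;
    SimpleNode(String host) { this.host = host; }
    @Override public String toString() { return host; }
}

// Sketch of a custom placement policy: the same selection loop a
// default policy would run, but with the writer's node pre-excluded.
final class AvoidLocalPlacementPolicy {
    // Choose `replication` targets from `cluster`, never picking the
    // writer itself, so hot writers do not accumulate first replicas.
    List<SimpleNode> chooseTargets(List<SimpleNode> cluster,
                                   SimpleNode writer, int replication) {
        Set<SimpleNode> excluded = new HashSet<>();
        excluded.add(writer); // the entire "special need" lives here
        List<SimpleNode> targets = new ArrayList<>();
        for (SimpleNode n : cluster) {
            if (targets.size() == replication) break;
            if (!excluded.contains(n)) targets.add(n);
        }
        return targets;
    }
}

public class PolicyDemo {
    public static void main(String[] args) {
        List<SimpleNode> cluster = new ArrayList<>();
        for (int i = 0; i < 5; i++) cluster.add(new SimpleNode("dn" + i));
        SimpleNode writer = cluster.get(0);
        System.out.println(new AvoidLocalPlacementPolicy()
            .chooseTargets(cluster, writer, 3)); // prints [dn1, dn2, dn3]
    }
}
{code}
Keeping variants like this in their own policy class, rather than as flags on
the default policy, is exactly what keeps the 2^10 combinations out of trunk.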
> Write hot data on few nodes may cause performance issue
> -------------------------------------------------------
>
> Key: HDFS-9090
> URL: https://issues.apache.org/jira/browse/HDFS-9090
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 2.3.0
> Reporter: He Tianyi
> Assignee: He Tianyi
>
> (I am not sure whether this should be reported as a BUG; feel free to
> modify this.)
> The current block placement policy makes a best effort to place the first
> replica on the local node whenever possible.
> Consider the following scenario:
> 1. There are 500 datanodes across plenty of racks.
> 2. Raw user action logs (just an example) are written from only 10 nodes,
> each of which also has a datanode deployed locally.
> 3. Then, before any balancing, every such log will have at least one replica
> on those 10 nodes, which means that with a replication factor of 3, one third
> of the reads on these logs will be served by just 10 of the 500 nodes;
> performance suffers.
> I propose solving this by introducing a configuration entry that lets the
> client disable an arbitrary level of write locality.
> Then we can either (A) add the local node to excludedNodes, or (B) tell the
> NameNode which locality we prefer (a rough sketch of option (A) follows).
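> As a rough sketch of option (A): the client reads the new configuration
> entry and, when local writes are disabled, pre-populates excludedNodes with
> its own hostname before asking the NameNode for block targets. The config
> key dfs.client.write.avoid-local and the helper below are hypothetical names
> for illustration, not existing HDFS APIs.
> {code:java}
> import java.util.HashSet;
> import java.util.Set;
> 
> // Option (A) sketch: a client-side switch that pre-populates
> // excludedNodes with the local host before requesting targets.
> // "dfs.client.write.avoid-local" is a hypothetical config key.
> public class ClientSideExclusion {
>     static Set<String> buildExcludedNodes(boolean avoidLocal, String localHost) {
>         Set<String> excludedNodes = new HashSet<>();
>         if (avoidLocal) {
>             excludedNodes.add(localHost); // NameNode will skip this node
>         }
>         return excludedNodes;
>     }
> 
>     public static void main(String[] args) {
>         // With the flag on, the writer's own datanode never gets replica 0.
>         System.out.println(buildExcludedNodes(true, "dn-writer-01"));  // [dn-writer-01]
>         System.out.println(buildExcludedNodes(false, "dn-writer-01")); // []
>     }
> }
> {code}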