[ 
https://issues.apache.org/jira/browse/HDFS-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10967:
-----------------------------
    Attachment: HDFS-10967.03.patch

Attaching the v3 patch, which adds the remote-writer and 
{{BlockPlacementPolicyRackFaultTolerant}} scenarios that [~mingma] suggested 
above.

While writing the v3 patch I noticed an issue in the v2 patch: a DN that is 
merely considered in a random try gets added to the excluded-nodes list. The 
v3 patch addresses that issue as well.
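
For reference, a minimal sketch of the excluded-nodes bookkeeping described above (plain Java, not the actual patch; all class and method names are made up for illustration): a node rejected in a random try only because it is near-full is remembered locally rather than added to the shared excluded set, so it can still be used as a last resort.

{code:java}
// Minimal sketch of the idea only (not the actual patch code): a DN skipped in
// a random try just because it is near-full should not leak into the caller's
// excluded-nodes set, so it can still serve as a fallback target.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

class NearFullAwareChooser {

  /** Hypothetical stand-in for a DataNode descriptor. */
  static class Node {
    final String name;
    final double usedRatio;   // fraction of capacity used, 0.0 - 1.0
    Node(String name, double usedRatio) {
      this.name = name;
      this.usedRatio = usedRatio;
    }
  }

  private final double nearFullThreshold;   // e.g. 0.90 means 90% used
  private final Random rand = new Random();

  NearFullAwareChooser(double nearFullThreshold) {
    this.nearFullThreshold = nearFullThreshold;
  }

  /**
   * Randomly try candidates, preferring nodes below the near-full threshold.
   * Near-full nodes are remembered locally instead of being added to
   * {@code excluded}, and are only used when nothing better is available.
   */
  Node chooseTarget(List<Node> candidates, Set<Node> excluded) {
    List<Node> pool = new ArrayList<>(candidates);
    pool.removeAll(excluded);
    List<Node> skippedNearFull = new ArrayList<>();

    while (!pool.isEmpty()) {
      Node dn = pool.remove(rand.nextInt(pool.size()));
      if (dn.usedRatio >= nearFullThreshold) {
        skippedNearFull.add(dn);   // kept local; NOT added to 'excluded'
        continue;
      }
      excluded.add(dn);            // only the chosen node is excluded from later picks
      return dn;
    }

    if (!skippedNearFull.isEmpty()) {
      // Fall back to a near-full node rather than failing the placement.
      Node dn = skippedNearFull.get(rand.nextInt(skippedNearFull.size()));
      excluded.add(dn);
      return dn;
    }
    return null;   // no candidate available
  }
}
{code}

The actual change lives in the HDFS block placement policy code; the sketch only illustrates the bookkeeping distinction between "tried and rejected for fullness" and "excluded".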

> Add configuration for BlockPlacementPolicy to avoid near-full DataNodes
> -----------------------------------------------------------------------
>
>                 Key: HDFS-10967
>                 URL: https://issues.apache.org/jira/browse/HDFS-10967
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Zhe Zhang
>            Assignee: Zhe Zhang
>              Labels: balancer
>         Attachments: HDFS-10967.00.patch, HDFS-10967.01.patch, 
> HDFS-10967.02.patch, HDFS-10967.03.patch
>
>
> Large production clusters are likely to have heterogeneous nodes in terms of 
> storage capacity, memory, and CPU cores. It is not always possible to 
> proportionally ingest data into DataNodes based on their remaining storage 
> capacity. Therefore it's possible for a subset of DataNodes to be much closer 
> to full capacity than the rest.
> This heterogeneity is most likely rack-by-rack -- i.e. _m_ whole racks of 
> low-storage nodes and _n_ whole racks of high-storage nodes, so it'd be very 
> useful if we can lower the chance for those near-full DataNodes to become 
> destinations for the 2nd and 3rd replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
