[ https://issues.apache.org/jira/browse/HDFS-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee reassigned HDFS-12270:
---------------------------------

    Assignee: Kihwal Lee

> Allow more spreading of replicas during block placement
> -------------------------------------------------------
>
>                 Key: HDFS-12270
>                 URL: https://issues.apache.org/jira/browse/HDFS-12270
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: block placement
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>
> The default block placement places the first replica locally if possible, 
> then on a node in a remote rack, and finally on another node in that same remote 
> rack. If more than 3 replicas are requested, the rest are spread across 
> available racks. This strategy was chosen to minimize inter-rack traffic while 
> still tolerating a rack-level failure such as a switch outage.
> This tolerates a single rack failure, but if there is also a node outage 
> (a double failure), missing blocks are highly likely. Although network 
> bandwidth is still a limited resource, it is less so than in the past. Some 
> users might want increased data availability at the price of increased 
> inter-rack traffic.  
> This can be achieved by using the upgrade domain feature, but a simple tweak 
> to the default policy can enable it for users who do not want to adopt 
> upgrade domains.
> I propose introducing a new config to control this.
> Rack placement level 0: default; current behavior.
> Rack placement level 1: use a minimum of 3 racks, if available. Allow existing 
> blocks to remain as is.
> Rack placement level 2: use a minimum of 3 racks, if available. Apply this policy 
> to all replication verification (e.g. replication queue initialization).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
