[ https://issues.apache.org/jira/browse/HDFS-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087368#comment-17087368 ]

Mingliang Liu commented on HDFS-15278:
--------------------------------------

This is an interesting use case and feature proposal. I have not reviewed it 
carefully, but it looks like an anti-locality block placement setting to avoid 
overloading the co-located datanode during writes. Ideally this feature would 
be tunable per file rather than across the whole block placement policy. So, 
could your use case be satisfied by enabling 
{{CreateFlag.IGNORE_CLIENT_LOCALITY}}? See the related discussion on 
HDFS-13739. CC: [~ayushtkn] and [~harisekhon].
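
For reference, here is a minimal sketch of what enabling that flag could look 
like on the client side. It assumes the Hadoop 3.x {{FileSystem#create}} 
overload that accepts {{CreateFlag}}s; the path, buffer size, and block size 
below are illustrative only:

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class IgnoreClientLocalityWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/tmp/dispersed.dat"); // illustrative path

      // CREATE: create the file if it does not exist.
      // IGNORE_CLIENT_LOCALITY: do not favor the local datanode when
      // placing the first replica of each block.
      EnumSet<CreateFlag> flags =
          EnumSet.of(CreateFlag.CREATE, CreateFlag.IGNORE_CLIENT_LOCALITY);

      try (FSDataOutputStream out = fs.create(
          file,
          FsPermission.getFileDefault(),
          flags,
          4096,        // buffer size
          (short) 1,   // replication factor 1, as in this issue's scenario
          128L << 20,  // block size: 128 MB
          null)) {     // no Progressable
        out.writeBytes("example payload\n");
      }
    }
  }
}
{code}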

Two minor comments:
# {{sequentialBlocksDispersed}} should be declared final, since it is never 
reassigned. Otherwise, accessing it without synchronization may produce 
visibility errors (see the sketch after this list).
# Does it have to be {{setReplication}} that triggers this? What if we create 
a new file with replication factor 1?
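
To illustrate the first point, a minimal sketch of the visibility concern; 
the field name comes from the patch, but the surrounding class here is 
hypothetical:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ChosenHistory { // hypothetical holder, not the patch's actual class
  // final => safe publication: any thread that obtains a reference to this
  // object after construction is guaranteed to see the initialized map.
  private final Map<String, Boolean> sequentialBlocksDispersed =
      new ConcurrentHashMap<>();

  // If the field were not final and the object were published without
  // synchronization, another thread could in principle observe the field
  // before the constructor finished, i.e. see null and fail here.
  boolean isDispersed(String blockId) {
    return sequentialBlocksDispersed.getOrDefault(blockId, false);
  }
}
{code}

On the second point: {{hdfs dfs -setrep}} boils down to 
{{FileSystem#setReplication(Path, short)}} on the client side, whereas a file 
created with replication factor 1 never goes through that call, so a 
dispersal fix triggered only by {{setReplication}} would not cover it.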


> After executing ‘-setrep 1’, make sure that blocks of the file are dispersed 
> across different datanodes
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15278
>                 URL: https://issues.apache.org/jira/browse/HDFS-15278
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Minor
>         Attachments: HDFS-15278.001.patch, HDFS-15278.002.patch
>
>
> After executing ‘-setrep 1’, many blocks of the file may end up on the same 
> machine, especially when the file was written from a single datanode machine. 
> That causes data hot spots and is hard to fix if that machine goes down.
> Add a chosen-node history to make sure that blocks of the file are dispersed 
> across different datanodes.


