[ https://issues.apache.org/jira/browse/HDFS-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13596859#comment-13596859 ]

John Meza commented on HDFS-1564:
---------------------------------

I think dfs.datanode.du.pct works well for heterogeneous disks, especially when 
the disks have a wide range of capacities. When disk capacities are the same or 
very close, either dfs.datanode.du.pct or dfs.datanode.du.reserved would work.

Neither solves my needs well. I have an 8-DN cluster used for performance 
testing. On occasion I need some or all of these machines for other tasks. It 
would be great if I could reserve 300 GB on a couple of the disks -- not all of 
the disks, just a couple.

Maintaining a comma-separated list can lead to mistakes, especially for DNs with 
more than a couple of disks. To simplify this, specify a reserve for specific 
disks and let all others fall back to a default value.
For example, for a DN with 8 disks (/fs1, /fs2, ..., /fs8):

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240, /fs1/dfs/dn:322122547200, /fs2/dfs/dn:322122547200</value>
</property>

This reserves 300 GB on /fs1 and /fs2, and leaves the 10 GB default on every other volume.
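
To make the proposed syntax concrete, here is a minimal, self-contained Java sketch of how a 
DataNode might parse such a value: the first bare number becomes the default, and each 
path:bytes entry overrides it for that volume. The PerVolumeReserved class and its method 
names are hypothetical illustrations, not part of any existing HDFS code.

import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a parser for the per-volume reserved-space syntax proposed above.
 * Class and method names are hypothetical; this is not the HDFS implementation.
 */
public class PerVolumeReserved {

  private final long defaultReserved;
  private final Map<String, Long> perVolume = new HashMap<>();

  /**
   * Accepts a value such as:
   *   "10737418240, /fs1/dfs/dn:322122547200, /fs2/dfs/dn:322122547200"
   * A bare number is the default; "path:bytes" entries override it per volume.
   */
  public PerVolumeReserved(String configValue) {
    long def = 0L;
    for (String entry : configValue.split(",")) {
      entry = entry.trim();
      if (entry.isEmpty()) {
        continue;
      }
      int sep = entry.lastIndexOf(':');
      if (sep < 0) {
        def = Long.parseLong(entry);          // bare number -> default reserve
      } else {
        String path = entry.substring(0, sep);
        long bytes = Long.parseLong(entry.substring(sep + 1));
        perVolume.put(path, bytes);           // per-volume override
      }
    }
    this.defaultReserved = def;
  }

  /** Reserved bytes for a given data dir, falling back to the default. */
  public long reservedFor(String dataDir) {
    return perVolume.getOrDefault(dataDir, defaultReserved);
  }

  public static void main(String[] args) {
    PerVolumeReserved r = new PerVolumeReserved(
        "10737418240, /fs1/dfs/dn:322122547200, /fs2/dfs/dn:322122547200");
    System.out.println(r.reservedFor("/fs1/dfs/dn")); // 322122547200 (300 GB)
    System.out.println(r.reservedFor("/fs3/dfs/dn")); // 10737418240 (10 GB)
  }
}

Splitting on the last ':' keeps the parse unambiguous even if a data dir path contained a 
colon; in a real patch this logic would live wherever the DataNode currently reads 
dfs.datanode.du.reserved for each volume.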

                
> Make dfs.datanode.du.reserved configurable per volume
> -----------------------------------------------------
>
>                 Key: HDFS-1564
>                 URL: https://issues.apache.org/jira/browse/HDFS-1564
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode
>            Reporter: Aaron T. Myers
>            Priority: Minor
>
> In clusters with DNs which have heterogeneous data dir volumes, it would be 
> nice if dfs.datanode.du.reserved could be configured per-volume.
