Hello Steven,
You can use the dfs.datanode.du.reserved configuration value in
$HADOOP_HOME/conf/hdfs-site.xml to limit disk usage.

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- cluster variant -->
  <value>182400</value>
  <description>Reserved space in bytes per volume. Always leave this much
  space free for non dfs use.
  </description>
</property>
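
Note the value is in bytes per volume. So, for example, if you wanted to keep
roughly 20 GB free on the 100GB machines, you could use something like the
following (20 * 1024^3 bytes; the exact number here is just an illustration,
pick whatever reserve suits your nodes):

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>21474836480</value>
  <description>Reserve ~20 GB per volume for non dfs use on the
  smaller-disk machines.
  </description>
</property>

Since each datanode reads its own hdfs-site.xml, you can keep a larger
reserved value on the 100GB machines and a smaller one (or none) on the 1T
machines.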

Ravi
Hadoop @ Yahoo!

On 3/30/10 8:12 PM, "steven zhuang" <[email protected]> wrote:

hi, guys,
               we have some machines with 1T disks and some with 100GB disks.
Is there any way to limit the disk usage of the datanodes on the machines
with smaller disks?
               thanks!

