I haven't tried it yet, but I've seen this:

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much
  space free for non dfs use.
  </description>
</property>

or

<property>
  <name>dfs.datanode.du.pct</name>
  <value>0.98f</value>
  <description>When calculating remaining space, only use this percentage
  of the real available space.
  </description>
</property>
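For example, to keep roughly 10 GB free on each volume you could set something like this in conf/hadoop-site.xml (the values are just an illustration, I haven't verified them on a live cluster):

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
  <description>Reserve 10 GB (in bytes) per volume for non-dfs use.
  </description>
</property>

<property>
  <name>dfs.datanode.du.pct</name>
  <value>0.90f</value>
  <description>Only count 90% of the real free space as available to HDFS.
  </description>
</property>

Note that as far as I know these only limit what the HDFS datanode stores; intermediate map/reduce output under hadoop.tmp.dir / mapred.local.dir is not covered by them.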
In: conf/hadoop-site.xml

On 09/01/2008, S. Nunes <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm trying to install hadoop on a set of computers that are not
> exclusively dedicated to running hadoop.
> Our goal is to use these computers in the hadoop cluster when they are
> inactive (during the night).
>
> I would like to know if it is possible to limit the space used by
> hadoop at a slave node.
> Something like "hadoop.tmp.dir.max". I do not want hadoop to use all
> the available disk space.
>
> Thanks in advance for any help on this issue,
>
> --
> Sérgio Nunes