Hi Pat - The hadoop.tmp.dir setting is used as the base both for local temporary directories and for the HDFS name/data directories, so it should be consistent across your cluster.
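If the reason for the different value on slave2 is that its storage lives on a second disk, one option (just a sketch, adapt the paths to your actual layout) is to keep the property identical on every node and let a symlink on slave2 map that uniform path onto the bigger disk:

    <!-- core-site.xml, same on master, slave1 and slave2 -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
    </property>

    # on slave2 only, before starting the datanode
    # (assumes /app/hadoop/tmp does not already exist as a real directory)
    mkdir -p /media/d2/app/hadoop/tmp
    ln -s /media/d2/app/hadoop/tmp /app/hadoop/tmp

That way every daemon sees the same configured path, but the bytes end up on /media/d2 on that node.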
http://stackoverflow.com/questions/2354525/what-should-be-hadoop-tmp-dir

cheers,
-James

On Wed, May 23, 2012 at 3:44 PM, Pat Ferrel <p...@occamsmachete.com> wrote:
> I have a two machine cluster and am adding a new machine. The new node has
> a different location for hadoop.tmp.dir than the other two nodes and
> refuses to start the datanode when started in the cluster. When I change
> the location pointed to by hadoop.tmp.dir to be the same on all machines it
> starts up fine on all machines.
>
> Shouldn't I be able to have the master and slave1 set as:
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> And slave2 set as:
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/media/d2/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> ??? Slave2 runs standalone in single node mode just fine. Using 0.20.205.