Owen O'Malley-2 wrote:
> 
> The problem is that you haven't configured your map/reduce system  
> directory. The default works for single node systems, but not for  
> "real" clusters. I like to use:
> 
> <property>
>    <name>mapred.system.dir</name>
>    <value>/hadoop/mapred/system</value>
>    <description>The shared directory where MapReduce stores control  
> files.
>    </description>
> </property>
> 
> Note that this directory is in your default file system and must be  
> accessible from both the client and server machines and is typically  
> in HDFS. I've added a slight extension on HADOOP-1100 to have the  
> system directory passed back from the job tracker to the client.
> 

Is "hadoop.tmp.dir" the same kind of property as "mapred.system.dir"
for clustering — i.e., a directory in the default file system that must
be accessible from both the client and server machines, and is
typically in HDFS?

Which directories should live in the shared (public) file system, and
which ones can stay on the local file system, in the clustered case?
"hadoop-default.xml" gives a typical example for a single node — is
there a typical example for the clustered case?
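For what it's worth, here is a rough sketch of the split I have in mind
(just my guess at a cluster-style hadoop-site.xml — the paths are
illustrative, not from any real setup):

<!-- Shared: must be visible to client and servers, typically in HDFS -->
<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system</value>
</property>

<!-- Local: per-machine scratch space on the local file system -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>

Is that roughly the right way to think about it?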

ChaoChun 

-- 
View this message in context: 
http://www.nabble.com/Problem-submitting-a-job-with-hadoop-0.14.0-tf4318087.html#a12305547
Sent from the Hadoop Users mailing list archive at Nabble.com.
