Can someone tell me how to resolve the following error message, found in the
jobtracker log file when trying to start MapReduce?

grep FATAL *
hadoop-hadoop-jobtracker-hadoop-1.log:2009-05-04 16:35:14,176 FATAL
org.apache.hadoop.mapred.JobTracker: java.lang.IllegalArgumentException:
Wrong FS: hdfs://usr/local/hadoop-datastore/hadoop-hadoop/mapred/system,
expected: hdfs://localhost:54310
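
From what I can tell, a leading double slash in a path makes a URI parser
treat the next segment as the authority (host), which would explain how
"usr" ended up where a host name belongs. A minimal sketch with plain
java.net.URI, only to illustrate the parsing (this is not Hadoop's actual
resolution code):

import java.net.URI;

public class WrongFsDemo {
    public static void main(String[] args) {
        // A path beginning with "//" parses as authority + path ...
        URI doubled = URI.create(
            "hdfs://usr/local/hadoop-datastore/hadoop-hadoop/mapred/system");
        System.out.println(doubled.getAuthority()); // usr
        System.out.println(doubled.getPath());      // /local/hadoop-datastore/...

        // ... while a single leading slash keeps everything in the path.
        URI single = URI.create(
            "hdfs://localhost:54310/usr/local/hadoop-datastore");
        System.out.println(single.getAuthority()); // localhost:54310
        System.out.println(single.getPath());      // /usr/local/hadoop-datastore
    }
}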



Here is my hadoop-site.xml as well:


<configuration>

<property>
<name>hadoop.tmp.dir</name>
<value>//usr/local/hadoop-datastore/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
<property> <!-- OH: this is to solve the HADOOP-1212 bug that causes
"Incompatible namespaceIDs" in the datanode log -->
<name>dfs.data.dir</name>
<value>/usr/local/hadoop-datastore/hadoop-${user.name}/dfs/data</value>
</property>
<!-- if the incompatibility problem persists, % rm -r
/usr/local/hadoop-datastore/hadoop-hadoop/dfs/data on the problematic
datanode and reformat the namenode -->
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and
authority determines the FileSystem implementation. The uri's scheme
determines the config property (fs.SCHEME.impl) naming the FileSystem
implementation class. The uri's authority is used to determine the host,
port, etc. for a filesystem.</description>
</property>

<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If
"local", then jobs are run in-process as a single map and reduce task.
</description>
</property>
</configuration>
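
For completeness, here is a quick sketch I can run to dump what the
JobTracker actually sees from this file (old-style Hadoop Configuration API;
the hadoop-site.xml path below is just where my copy lives, adjust as
needed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class PrintConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // hypothetical location of the file quoted above
        conf.addResource(new Path("/usr/local/hadoop/conf/hadoop-site.xml"));

        System.out.println("fs.default.name   = " + conf.get("fs.default.name"));
        System.out.println("hadoop.tmp.dir    = " + conf.get("hadoop.tmp.dir"));
        // mapred.system.dir defaults to ${hadoop.tmp.dir}/mapred/system,
        // which matches the path in the FATAL line above
        System.out.println("mapred.system.dir = " + conf.get("mapred.system.dir"));
    }
}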
