Hello,
I've seen in the past that mapred.system.dir needs to be a directory shared across all the slaves, otherwise I get a "No such file or directory" exception. I'm adding a few more slaves to the Hadoop cluster and I cannot have a shared directory across these nodes.
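For reference, this is roughly how I have mapred.system.dir set in hadoop-site.xml (the path and description below are approximate, not copied verbatim from my config):

    <property>
      <name>mapred.system.dir</name>
      <!-- local path, so it is not visible to the new slave nodes -->
      <value>/sv/lucene/tmp/hadoop-${user.name}/mapred/system</value>
      <description>Directory where MapReduce stores its control files (job.xml etc.).</description>
    </property>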
Is there any way I can set this so that the system works without a shared directory? I get the following exception, and none of the map tasks on the slaves run even though the directory exists on each slave. Only tasks on the master succeed.

    Error initializing task_0001_m_000000_0:
    java.io.IOException: /sv/lucene/tmp/hadoop-svdev/mapred/system/submit_3iroqt/job.xml: No such file or directory
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:110)
        at org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile(LocalFileSystem.java:350)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:869)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:331)
        at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:828)
        at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:511)
        at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:857)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:1499)

Any help is greatly appreciated.

ENV: Hadoop 0.11.2 on JDK 1.6.0

--
Thanks,
Venkatesh

"Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away." - Antoine de Saint-Exupéry
