Hi there,

I'm having problems running the latest release of Nutch. I get the following
error when I try to crawl:

Fetcher: segment: crawl/segments/20080109183955
Fetcher: java.io.IOException: Target
/tmp/hadoop-me/mapred/local/localRunner/job_local_1.xml already exists
        at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:246)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:125)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:116)
        at org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile(LocalFileSystem.java:55)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:834)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:86)
        at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:281)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:558)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:753)
        at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:526)
        at org.apache.nutch.fetcher.Fetcher.run(Fetcher.java:561)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:54)
        at org.apache.nutch.fetcher.Fetcher.main(Fetcher.java:533)

If I manually remove the offending directory, the crawl works... but only sometimes.
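For reference, this is the cleanup I run before retrying. The path is taken from the stack trace above and assumes the default hadoop.tmp.dir of /tmp/hadoop-${user.name}; adjust it if your hadoop-site.xml overrides that setting:

```shell
# Remove stale job files left behind by LocalJobRunner.
# Path assumes the default hadoop.tmp.dir (/tmp/hadoop-<user>);
# change it if hadoop.tmp.dir is overridden in hadoop-site.xml.
LOCAL_MAPRED_DIR="/tmp/hadoop-$(whoami)/mapred/local"
rm -rf "$LOCAL_MAPRED_DIR"
```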

Any help is appreciated.

Regards,
IWan