Hi there,

Just wondering if anyone has encountered the following error. I'm using the
latest nightly build of Nutch (nutch-2008-01-05_05-34-48) under Mandrake
Linux.

I delete the directory and re-run; sometimes that works, sometimes it doesn't.

I've enabled the protocol-httpclient plugin in nutch-site.xml, since I'm
trying to crawl/index an https page.
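
Roughly what I have in nutch-site.xml, in case it matters (the exact
plugin.includes list below is approximate / from memory; the relevant change
is swapping in protocol-httpclient so https URLs can be fetched):

  <property>
    <name>plugin.includes</name>
    <value>protocol-httpclient|urlfilter-regex|parse-(text|html|js)|index-basic|query-(basic|site|url)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
    <description>Plugin list, using protocol-httpclient instead of
    protocol-http to allow fetching https pages.</description>
  </property>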

---------------------------------------------------------------------------------
Exception in thread "main" java.io.IOException: Target /tmp/hadoop-cinj/mapred/local/localRunner/job_local_1.xml already exists
        at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:246)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:125)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:116)
        at org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile(LocalFileSystem.java:55)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:834)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:86)
        at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:281)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:558)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:753)
        at org.apache.nutch.crawl.Generator.generate(Generator.java:469)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:118)
