Greetings,
So this one has me stumped a little bit. I am running a fairly simple
Nutch crawl against our local intranet site or against our partners'
intranet sites. Every now and then, when running 'bin/nutch crawl
urlfile -dir webindex/ -depth 5', I get the following exception:
Optimizing index.
Indexer: done
Dedup: starting
Dedup: adding indexes in: /home/mvivion/webindex/target.com/indexes
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:604)
        at org.apache.nutch.indexer.DeleteDuplicates.dedup(DeleteDuplicates.java:439)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:135)
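
For context, the failing step looks like the dedup pass that the crawl
command runs at the end. I assume the same step can be invoked on its
own against the indexes directory from the log above, something like:

bin/nutch dedup /home/mvivion/webindex/target.com/indexes

though I haven't yet confirmed whether running it standalone
reproduces the failure.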
Has anyone seen this before? Any suggestions for resolving this crash?
Thanks!!!