Hi - see the logs for more details.
Markus
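
Nutch's default log4j configuration writes to logs/hadoop.log under the runtime directory, and the root cause behind the generic "Job failed!" message usually appears there. A sketch of digging it out, assuming a default local install (path may differ in your setup):

```shell
# Default log location for a local Nutch run (per the stock log4j.properties).
# The generic IOException in the console hides the real cause; grep the log
# for the underlying exception and show some surrounding context:
grep -B 2 -A 20 "Exception" logs/hadoop.log | tail -40
```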
 
-----Original message-----
> From: Muhamad Muchlis <[email protected]>
> Sent: Monday 3rd November 2014 9:15
> To: [email protected]
> Subject: [Error Crawling Job Failed] NUTCH 1.9
> 
> Hello.
> 
> I get an error message when I run the command:
> 
> crawl seed/seed.txt crawl -depth 3 -topN 5
> 
> 
> Error Message :
> 
> SOLRIndexWriter
> solr.server.url : URL of the SOLR instance (mandatory)
> solr.commit.size : buffer size when sending to SOLR (default 1000)
> solr.mapping.file : name of the mapping file for fields (default
> solrindex-mapping.xml)
> solr.auth : use authentication (default false)
> solr.auth.username : use authentication (default false)
> solr.auth : username for authentication
> solr.auth.password : password for authentication
> 
> 
> Indexer: java.io.IOException: Job failed!
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
> at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:114)
> at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:176)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:186)
> 
> 
> Can anyone explain why this happened?
> 
> Best regards,
> 
> M.Muchlis
> 
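The SolrIndexWriter usage text quoted above marks solr.server.url as mandatory, and the reported command never supplies a Solr URL, which is a common way to hit this failure. A sketch of the Nutch 1.9 invocation, assuming the bin/crawl script and a local Solr core named "nutch" (URL and core name are placeholders for your setup):

```shell
# Nutch 1.9's crawl script takes the Solr URL as a positional argument:
#   bin/crawl <seedDir> <crawlDir> <solrURL> <numberOfRounds>
# (the -depth/-topN flags belong to the older "bin/nutch crawl" command
# and are not understood here; rounds roughly replaces depth).
bin/crawl seed/ crawl http://localhost:8983/solr/nutch 3
```

If Solr is not running at that URL, or the URL is omitted, the indexing step prints the option summary above and aborts with the generic "Job failed!" IOException.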
