The solrdedup job completes without failure; it is the solrindex job that is 
actually failing. See your hadoop.log and check Solr's output for the 
underlying error.
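To isolate it, you can re-run just the indexing step against the crawl directory the log mentions. A rough sketch, assuming the default Nutch 1.3 crawl layout (crawldb, linkdb, and segments subdirectories under the crawl dir) and that hadoop.log lives under logs/:

```shell
# Re-run only the solrindex step so its stack trace is easy to find
bin/nutch solrindex http://localhost:8983/solr/ \
  crawl-20110719231304/crawldb \
  crawl-20110719231304/linkdb \
  crawl-20110719231304/segments/*

# Then look for the SolrIndexer failure in the Hadoop log
grep -B 2 -A 20 'SolrIndexer' logs/hadoop.log
```

A common cause is a mismatch between Nutch's schema.xml and the one deployed in Solr, which shows up in Solr's own console output rather than in Nutch's.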

On Tuesday 19 July 2011 17:23:51 Kelvin wrote:
> Sorry for the multiple postings. I am trying out Nutch 1.3, which requires
> Solr for indexing.
> 
> I tried to crawl and index with Solr using this simple command:
> bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 10
> 
> But why does it give me the following error? Thank you for your kind help.
> 
> 
> SolrIndexer: starting at 2011-07-19 23:13:31
> java.io.IOException: Job failed!
> SolrDeleteDuplicates: starting at 2011-07-19 23:13:33
> SolrDeleteDuplicates: Solr url: http://localhost:8983/solr/
> SolrDeleteDuplicates: finished at 2011-07-19 23:13:34, elapsed: 00:00:01
> crawl finished: crawl-20110719231304

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
