Yes, please show us the hadoop.log output and the Solr output. The latter is 
usually more important at this stage. You might be writing to non-existent 
fields, or writing multiple values to a single-valued field, or... whatever's 
happening.
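
Both of those failure modes come down to Solr's schema.xml not matching what 
Nutch sends. A minimal sketch of the declarations involved (field names here 
are hypothetical; compare the schema.xml shipped with Nutch against the one 
in your Solr core's conf/ directory):

  <!-- Single-valued field: an update that sends two values for "title"
       in one document is rejected by Solr. Setting multiValued="true"
       allows multiple values. -->
  <field name="title" type="text" indexed="true" stored="true"
         multiValued="false"/>

  <!-- An update containing a field with no matching <field> or
       <dynamicField> declaration fails with an "unknown field" error. -->
  <dynamicField name="*_txt" type="text" indexed="true" stored="true"/>

Under Tomcat, Solr logs these errors to $CATALINA_HOME/logs/catalina.out, 
which is why that output tends to be more useful than hadoop.log here.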

On Thursday 10 February 2011 00:36:21 McGibbney, Lewis John wrote:
> Hi list,
> 
> I am at the Solr indexing stage and seem to have hit trouble when sending
> crawldb, linkdb and segments/* to Solr to be indexed. I have added an xml
> file to $CATALINA_HOME/conf/Catalina/localhost with my webapp specifics.
> My Solr 1.4.1 implementation resides within my web app at the following
> location, /home/lewis/Downloads/mywebapp, but when I send this command to
> index with Solr
> 
> lewis@lewis-01:~/Downloads/nutch-1.2$ bin/nutch solrindex
> http://127.0.0.1:8080/mywebapp crawl/crawldb crawl/linkdb crawl/segments/*
> 
> I am getting java.io.IOException: Job failed!
> 
> I had experienced this before when I was using the solrindex command
> incorrectly. I am hoping that this is not the case; however, it is late
> and I might have missed something simple.
> 
> I have the hadoop.log if it would help at all.
> 
> Any suggestions please. Thanks
> 
> Lewis

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
