Here is the wiki entry.

http://wiki.apache.org/nutch/RunningNutchAndSolr

Using Solr was working fine while mapred.job.tracker was set to local for
the crawl and the solrindex step, but when we set mapred.job.tracker to
localhost:9001 and use the Hadoop task daemons, solrindex gives us many
errors saying:

org.apache.solr.common.SolrException: Lock obtain timed out: NativeFSLock
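
(For reference, the only change on our side was the standard jobtracker
property in conf/mapred-site.xml; the file and property names below assume
a Hadoop 0.20-style setup:)

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>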

Presumably this is because only one process can obtain the index lock, and
the rest of the child processes just error out.

Is there a way to index using multiple processes?
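
In the meantime, one workaround we may try (untested, and assuming that
solrindex honors the standard Hadoop -D generic options, and that the Solr
URL and crawl paths below are just placeholders) is forcing a single reduce
task for the index step, so that only one process holds the lock at a time:

  bin/nutch solrindex -Dmapred.reduce.tasks=1 \
    http://localhost:8983/solr crawl/crawldb crawl/linkdb crawl/segments/*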

Thanks,
Steve


On Mon, Oct 4, 2010 at 7:22 AM, Thumuluri, Sai <
[email protected]> wrote:

> Not very clear on the question - but we do use Nutch to crawl and Solr
> to index.
>
> -----Original Message-----
> From: Israel [mailto:[email protected]]
> Sent: Monday, October 04, 2010 12:02 AM
> To: [email protected]
> Subject: About SOLR and Nutch
>
> * Hi, does anyone know whether integrating Solr with Nutch can improve
> the search, for example by offering features such as search suggestions?
>
> * Do I need to integrate Solr myself, or does Nutch already do this?
>
