Hello,

In order to speed up crawling on our Sun SPARC server, we started up the
Hadoop daemons and ran several map and reduce tasks. Along with increasing
the number of fetcher threads, this greatly improved the performance of
the crawl.
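
For reference, the knob we used for the thread count is the standard
fetcher.threads.fetch property in conf/nutch-site.xml (the value below is
illustrative, not our exact setting):

  <property>
    <name>fetcher.threads.fetch</name>
    <value>50</value> <!-- illustrative value -->
  </property>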

However, the crawl script we were using then runs solrindex, and ever
since we switched to pseudo-cluster mode we have been seeing
"org.apache.solr.common.SolrException: Lock obtain timed out: NativeFSLock"
messages. Why is this occurring? Our assumption is that solrindex is
single-threaded: once one task starts solrindex, the other tasks try as
well but fail because the lock is already in place.
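
If it helps clarify our thinking: we understand this as the single-writer
lock that Lucene (which Solr uses underneath) takes on the index directory.
A minimal sketch of that behavior, assuming a recent Lucene API and a
hypothetical index path:

  import java.nio.file.Paths;

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.store.LockObtainFailedException;

  public class WriteLockDemo {
      public static void main(String[] args) throws Exception {
          // Hypothetical path; stands in for Solr's data/index directory.
          FSDirectory dir = FSDirectory.open(Paths.get("/tmp/demo-index"));

          // The first writer acquires the native write lock on the directory.
          IndexWriter first =
              new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

          try {
              // A second writer on the same directory (like a second
              // solrindex task) cannot obtain the lock and fails.
              IndexWriter second =
                  new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
              second.close();
          } catch (LockObtainFailedException e) {
              System.out.println("Second writer blocked: " + e.getMessage());
          } finally {
              first.close();
          }
      }
  }

If that model is right, only one indexing task can hold the write lock on
the index directory at a time, and every other task times out exactly as
we are seeing.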

Is there a way to build a Solr index with multiple map tasks, or should we
stop the Hadoop daemons and then run solrindex?

Thanks,
Steve Cohen
