Grant Ingersoll wrote:

 Is more than one thread adding documents to the index?

I don't believe so, but I am trying to reproduce it. I've only seen it once and don't have many details, other than noticing it was on a specific file (.fdt) and wondering whether that was a factor or not. That is, maybe Paul could reproduce it.

I think your exception differs from Paul's in an important way. Paul's exception means an entire segment (its CFS file) was deleted, which is very easily caused by accidentally allowing 2 writers on the index at once. But in your case, SegmentReader successfully opened the fnm file but then failed on the fdt, so, your segment wasn't deleted (at least not entirely). So I think something different caused your exception.
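As a sanity check on the two-writers theory: with the normal write lock in place, a second writer should simply be refused. Here's a rough sketch against a recent Lucene API (the constructor signatures differ in the release this thread is about, and the index path is made up):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;

public class TwoWriterCheck {
  public static void main(String[] args) throws Exception {
    FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));  // made-up path
    IndexWriter first = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    try {
      // A second writer on the same directory should fail to obtain write.lock.
      IndexWriter second = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
      second.close();
      System.out.println("second writer opened; locking is not protecting this index");
    } catch (LockObtainFailedException e) {
      System.out.println("second writer correctly refused: " + e.getMessage());
    } finally {
      first.close();
      dir.close();
    }
  }
}

If the second open does not throw LockObtainFailedException, locking has probably been disabled or misconfigured, which is how two writers end up deleting each other's segments.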

 Any changes to the defaults in IndexWriter?

It's the SolrIndexWriter.

OK.  But what does your solrconfig.xml look like?



After seeing that exception, does IndexReader.open also hit that exception (ie, is/was the index corrupt)? Or does it only happen with BG merges?

Not sure, unfortunately; I don't have a lot of info yet. The background exception happened during an optimize, if that matters at all.

OK. It'd be very useful to know whether the index was really corrupt (missing that segment), vs. the BG merge incorrectly (and only temporarily) thinking it was supposed to merge that segment.
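If you get a chance, one quick way to tell is to try opening a reader and then run CheckIndex. This is a sketch against a recent Lucene API (on the release in question it would be IndexReader.open plus the CheckIndex command-line tool, and the path is made up):

import java.nio.file.Paths;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.FSDirectory;

public class VerifyIndex {
  public static void main(String[] args) throws Exception {
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"))) {  // made-up path
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        // If the segment is truly gone, this open should hit the same FileNotFoundException.
        System.out.println("opened reader, numDocs=" + reader.numDocs());
      } catch (Exception e) {
        System.out.println("open failed: " + e);
      }
      // CheckIndex walks every segment and reports exactly which files are missing or corrupt.
      CheckIndex.Status status = new CheckIndex(dir).checkIndex();
      System.out.println(status.clean ? "index is clean" : "index is corrupt");
    }
  }
}

If the reader opens fine and CheckIndex reports the index as clean, that points at the merge scheduler briefly thinking a segment existed when it didn't, rather than real corruption.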

Is this a largish index? Like, would there be so many segments that optimize would be running concurrent merges (> 2*mergeFactor segments)? With ConcurrentMergeScheduler, optimize is now able to run multiple merges concurrently, if the index has enough segments.
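For reference, these are the knobs involved, sketched against a recent Lucene API (in Solr they would come from solrconfig.xml rather than code, and the setters differ across releases):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogByteSizeMergePolicy;

public class MergeSetup {
  static IndexWriterConfig newConfig() {
    LogByteSizeMergePolicy mp = new LogByteSizeMergePolicy();
    mp.setMergeFactor(10);  // optimize can merge concurrently once there are > 2*mergeFactor segments

    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    cms.setMaxMergesAndThreads(4, 2);  // max queued merges, max concurrent merge threads

    return new IndexWriterConfig(new StandardAnalyzer())
        .setMergePolicy(mp)
        .setMergeScheduler(cms);
  }
}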

I'll run some stress tests, focusing on concurrency of merges during optimize...
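Roughly along these lines (a sketch with a recent API and a made-up field name; optimize() became forceMerge(1) in later releases):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class OptimizeStress {
  public static void main(String[] args) throws Exception {
    try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/optimize-stress"));
         IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      for (int i = 1; i <= 1_000_000; i++) {
        Document doc = new Document();
        doc.add(new TextField("body", "stress doc " + i, Field.Store.NO));
        w.addDocument(doc);
        if (i % 1_000 == 0) {
          w.commit();       // flush often so plenty of small segments pile up
        }
        if (i % 100_000 == 0) {
          w.forceMerge(1);  // optimize(); with enough segments this runs merges concurrently
        }
      }
    }
  }
}

The idea is to build up well over 2*mergeFactor segments between optimizes so the ConcurrentMergeScheduler actually has several merges running at once.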

Which OS & JRE are you using?

Mike
