Yes, more details would be great...

Is this easily repeated?

The exists?=false is particularly spooky.

It means that, somehow, a new segment containing 1285 docs was being
flushed, but after the doc stores were closed, the stored fields index
file (_X.fdx) had been deleted.

Can you turn on IndexWriter.setInfoStream, get this error to happen
again, and then post the output?  Thanks.
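
If you're driving Lucene directly that's just a one-liner before you
start indexing, something like:

    // assuming 'writer' is your IndexWriter; any PrintStream works
    writer.setInfoStream(System.out);

Since you're on Solr 1.4, I believe the easier route is the <infoStream>
entry in the example solrconfig.xml -- uncomment it, set it to true, and
the IndexWriter's debug output lands in the file it names.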

Mike

On Wed, Feb 10, 2010 at 12:59 AM, Lance Norskog <goks...@gmail.com> wrote:
> We need more information. How big is the index in disk space? How many
> documents? How many fields? What's the schema? What OS? What Java
> version?
>
> Do you run this on a local hard disk or is it over an NFS mount?
>
> Does this software commit before shutting down?
>
> If you run with asserts on, do you get errors before this happens?
>    -ea:org.apache.lucene... as a JVM argument
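
(For what it's worth, the trailing "..." there is literal syntax -- that's
how java enables assertions for a package and all of its subpackages.
With the stock Jetty example it would look something like:

    java -ea:org.apache.lucene... -jar start.jar

Lucene's indexing code has lots of internal invariant asserts, so this can
catch a problem earlier than the eventual exception.)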
>
> On Tue, Feb 9, 2010 at 5:08 PM, Acadaca <ph...@acadaca.com> wrote:
>>
>> We are using Solr 1.4 in a multi-core setup with replication.
>>
>> Whenever we write to the master we get the following exception:
>>
>> java.lang.RuntimeException: after flush: fdx size mismatch: 1285 docs vs 0
>> length in bytes of _gqg.fdx file exists?=false
>>   at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
>>   at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:50)
>>
>> Has anyone had any success debugging this one?
>>
>> thx.
>> --
>> View this message in context: 
>> http://old.nabble.com/%22after-flush%3A-fdx-size-mismatch%22-on-query-durring-writes-tp27524755p27524755.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>>
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>
