[
https://issues.apache.org/jira/browse/SOLR-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ansgar Kapels updated SOLR-8331:
--------------------------------
Description:
While adding many new documents to Solr (via SolrJ), the index files occasionally
become corrupted.
The problem occurs on several different virtual servers (all with the same OS and
configuration), especially when many new or updated documents are added to Solr
within a short time.
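The client code itself isn't included here; a minimal SolrJ 4.x sketch of the
indexing pattern (the URL, field names, and batch size are placeholders, not
taken from my setup) would look like the following. Note that the client never
commits explicitly; commits are left entirely to Solr's autoCommit, which is
where the exception below is thrown:
{code}
// Hypothetical sketch of the bulk-indexing pattern (placeholder URL/fields).
// Many adds in a short time, no explicit commit: Solr's autoCommit does it.
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkIndexSketch {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8080/solr/collection1");
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
        for (int i = 0; i < 1600000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", String.valueOf(i));
            doc.addField("text", "document body " + i);
            batch.add(doc);
            if (batch.size() == 1000) { // send in batches of 1000
                solr.add(batch);        // no explicit commit; autoCommit fires
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            solr.add(batch);
        }
        solr.shutdown();
    }
}
{code}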
Here's the exception from solr.log:
{code}
org.apache.solr.common.SolrException; auto commit error...:org.apache.lucene.index.CorruptIndexException: codec header mismatch: actual header=1970145651 vs expected header=1071082519 (resource: BufferedChecksumIndexInput(MMapIndexInput(path="/data/solr/data1/index/_gru.fnm")))
        at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:136)
        at org.apache.lucene.codecs.lucene46.Lucene46FieldInfosReader.read(Lucene46FieldInfosReader.java:57)
        at org.apache.lucene.index.SegmentReader.readFieldInfos(SegmentReader.java:289)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
        at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
        at org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3312)
        at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3303)
        at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2989)
        at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3134)
        at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3101)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:582)
        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
{code}
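For context: the "expected header" 1071082519 is Lucene's CodecUtil.CODEC_MAGIC
(0x3fd76c17), written at the start of every codec file, so the very first bytes
of the .fnm file no longer match what Lucene wrote. A simplified sketch of the
failing check, paraphrased from CodecUtil.checkHeader rather than copied
verbatim:
{code}
// Simplified sketch of Lucene's codec header check (not a verbatim copy of
// CodecUtil.checkHeader). Every codec file starts with the magic int
// 0x3fd76c17 (= 1071082519); reading anything else means the beginning of
// the file is not what Lucene wrote.
import java.io.IOException;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.DataInput;

public final class HeaderCheckSketch {
    public static final int CODEC_MAGIC = 0x3fd76c17; // == 1071082519

    public static void checkMagic(DataInput in) throws IOException {
        int actualHeader = in.readInt();
        if (actualHeader != CODEC_MAGIC) {
            throw new CorruptIndexException("codec header mismatch: actual header="
                    + actualHeader + " vs expected header=" + CODEC_MAGIC);
        }
    }
}
{code}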
Additionally, I found the following in Tomcat's catalina.out log file:
{code}
Exception in thread "Lucene Merge Thread #21" org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: codec header mismatch: actual header=33882629 vs expected header=1071082519 (resource: BufferedChecksumIndexInput(MMapIndexInput(path="/data/solr/data1/index/_62.fnm")))
        at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:549)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:522)
Caused by: org.apache.lucene.index.CorruptIndexException: codec header mismatch: actual header=33882629 vs expected header=1071082519 (resource: BufferedChecksumIndexInput(MMapIndexInput(path="/data/solr/data1/index/_62.fnm")))
        at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:136)
        at org.apache.lucene.codecs.lucene46.Lucene46FieldInfosReader.read(Lucene46FieldInfosReader.java:57)
        at org.apache.lucene.index.SegmentReader.readFieldInfos(SegmentReader.java:289)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
        at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
        at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:3987)
        at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:3949)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3802)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:409)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)
{code}
(The file path differs because the second stack trace comes from another test run.)
I noticed that Solr runs stably for much longer when I modify the merge settings
in solrconfig.xml. Reducing
<maxMergeDocs>100000</maxMergeDocs>
to
<maxMergeDocs>10000</maxMergeDocs>
has a positive effect, but at some point the index still gets corrupted. The same
holds when setting a higher mergeFactor. So it seems I can only delay the problem:
after a few days or weeks of indexing, the index reaches a critical point and gets
corrupted anyway. Maybe it is related to a certain file size?
The index's total size (data directory) is about 23 GB, with about 1,600,000 documents.
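For reference, a hypothetical Lucene 4.10 sketch (class name and values are for
illustration only, and it assumes a LogDocMergePolicy) of roughly what those
solrconfig.xml merge settings map to at the Lucene level:
{code}
// Hypothetical sketch (illustrative values): in Lucene 4.10, <maxMergeDocs>
// and <mergeFactor> from solrconfig.xml configure the LogMergePolicy that
// the IndexWriter uses when merging segments.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogDocMergePolicy;
import org.apache.lucene.util.Version;

public class MergeSettingsSketch {
    public static IndexWriterConfig build() {
        LogDocMergePolicy mergePolicy = new LogDocMergePolicy();
        mergePolicy.setMaxMergeDocs(10000); // <maxMergeDocs>10000</maxMergeDocs>
        mergePolicy.setMergeFactor(10);     // <mergeFactor>10</mergeFactor>
        IndexWriterConfig config = new IndexWriterConfig(
                Version.LUCENE_4_10_4,
                new StandardAnalyzer(Version.LUCENE_4_10_4));
        config.setMergePolicy(mergePolicy);
        return config;
    }
}
{code}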
> CorruptIndexException after auto commit
> ---------------------------------------
>
> Key: SOLR-8331
> URL: https://issues.apache.org/jira/browse/SOLR-8331
> Project: Solr
> Issue Type: Bug
> Components: update
> Affects Versions: 4.10.4
> Environment: OS: SUSE Linux Enterprise Server 11 SP3
> File system: ext3
> Application server: Tomcat 7
> Reporter: Ansgar Kapels
> Priority: Critical
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)