OK I've opened:
https://issues.apache.org/jira/browse/LUCENE-1323
I'll commit the fix (to trunk, to be included in 2.4) soon.
Mike
Michael McCandless wrote:
Aha! OK now I see how that led to your exception.

When you create a MultiReader, passing in the array of IndexReaders,
MultiReader simply holds onto your array. It also computes & caches
norms() the first time it's called, based on the total # of docs it sees
in all the readers in that array.
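A plain-Java sketch of the hazard being described, using stand-in classes rather than the real Lucene API: a reader that holds the caller's array by reference and lazily caches a value derived from it will return a stale value if the caller later swaps readers in that array. (SubReader and CachingMultiReader are hypothetical names, not Lucene classes.)

```java
class SubReader {
    int maxDoc;
    SubReader(int maxDoc) { this.maxDoc = maxDoc; }
}

class CachingMultiReader {
    private final SubReader[] subs;   // holds the caller's array directly
    private int cachedMaxDoc = -1;    // computed lazily, then never refreshed

    CachingMultiReader(SubReader[] subs) { this.subs = subs; }

    int maxDoc() {
        if (cachedMaxDoc == -1) {     // first call caches the total
            int total = 0;
            for (SubReader s : subs) total += s.maxDoc;
            cachedMaxDoc = total;
        }
        return cachedMaxDoc;          // stale if the array changed later
    }
}

public class StaleCacheDemo {
    public static void main(String[] args) {
        SubReader[] readers = { new SubReader(100), new SubReader(200) };
        CachingMultiReader mr = new CachingMultiReader(readers);
        System.out.println(mr.maxDoc());   // caches 300

        // Caller "reopens" a sub-reader in place; the cache does not notice.
        readers[1] = new SubReader(500);
        System.out.println(mr.maxDoc());   // still 300, not 600
    }
}
```

This mirrors why a cached total doc count can disagree with the doc IDs actually present in the sub-readers.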
Yes I am using IndexReader.reopen(). Here is my code doing this:

public void refreshIndeces() throws CorruptIndexException, IOException {
    if ((System.currentTimeMillis() - this.lastRefresh) > this.REFRESH_PERIOD) {
        this.lastRefresh = System.currentTimeMillis();
That's interesting. So you are using IndexReader.reopen() to get a
new reader? Are you closing the previous reader?
The exception goes away if you create a new IndexSearcher on the
reopened IndexReader?
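A minimal stand-in sketch of the pattern Mike is asking about, with hypothetical Reader/Searcher classes instead of the Lucene API: reopen() returns the same instance when nothing changed; only when a new instance comes back do you close the old reader and rebuild the searcher on top of the new one.

```java
class Reader {
    private static int latestVersion = 1;
    private final int version;
    private boolean closed = false;

    Reader(int version) { this.version = version; }

    // Mirrors IndexReader.reopen() semantics: returns 'this' when the
    // index is unchanged, otherwise a new reader over the latest state.
    Reader reopen() {
        return version == latestVersion ? this : new Reader(latestVersion);
    }

    void close() { closed = true; }
    boolean isClosed() { return closed; }
    int version() { return version; }
    static void commitNewVersion() { latestVersion++; }
}

class Searcher {
    final Reader reader;
    Searcher(Reader reader) { this.reader = reader; }
}

public class ReopenDemo {
    static Reader reader = new Reader(1);
    static Searcher searcher = new Searcher(reader);

    static void refresh() {
        Reader newReader = reader.reopen();
        if (newReader != reader) {            // only swap when something changed
            reader.close();                    // release the old reader
            reader = newReader;
            searcher = new Searcher(reader);   // searcher must follow the reader
        }
    }

    public static void main(String[] args) {
        refresh();                 // no change: same reader kept
        Reader.commitNewVersion(); // writer commits new segments
        refresh();                 // new reader + new searcher
        System.out.println(searcher.reader.version());
    }
}
```

Searching with a stale searcher after the reader has moved on is exactly the situation where doc IDs and cached per-reader state can disagree.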
I don't yet see how that could explain the exception, though. If you
reopen() the
By "does not help" do you mean CheckIndex never detects this
corruption, yet you then hit that exception when searching?
By "reopening fails" what do you mean? I thought reopen works fine,
but then it's only the search that fails?
Mike
Sascha Fahl wrote:
Checking the index after adding
OK thanks for the answers below.
One thing to realize is that with this specific corruption, you will only
hit the exception if the one term that has the corruption is queried
on. I.e., only a certain term in a query will hit the corruption.
That's great news that it's easily reproduced -- can
This is spooky: that exception means you have some sort of index
corruption. The TermScorer thinks it found a doc ID 37389, which is
out of bounds.
Reopening IndexReader while IndexWriter is writing should be
completely fine.
Is this easily reproduced? If so, if you could narrow it do
Hi,
I see some strange behaviour in Lucene. The scenario is the following.
While adding documents to my index (every doc is pretty small, doc
count is about 12000) I have implemented custom behaviour for
flushing and committing documents to the index. Before adding
documents to the index I check