Interesting...

Odd to see this on 3.0 too, where the tests just run [slowly] sequentially.

It means somehow a BG thread was still running, and either closing or
opening readers, while this test was wrapping up.  The test doesn't
seem to use threads itself, and it closes the IW.  Hmm, though, CMS
does have a bug pre-3.1 whereby it doesn't truly finish all of its
threads when you sync it... though those threads should not be
touching FieldCache as they wrap up.
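
FWIW, here's a tiny standalone repro of that failure mode -- just a
sketch, not Lucene code (class name and loop counts are made up):
iterate a WeakHashMap on one thread while another thread puts into it,
and, timing permitting, you hit the same exc as in the trace below:

  import java.util.ConcurrentModificationException;
  import java.util.Map;
  import java.util.WeakHashMap;

  public class WeakHashMapCME {
    public static void main(String[] args) throws Exception {
      // WeakHashMap is not thread-safe; that's the point of the repro.
      final Map<Object,String> cache = new WeakHashMap<Object,String>();

      // Stands in for a straggler BG thread still registering readers
      // (and thus cache entries) after the test "finished".
      Thread bg = new Thread() {
        public void run() {
          for (int i = 0; i < 10000000; i++) cache.put(new Object(), "r" + i);
        }
      };
      bg.start();

      // Stands in for getCacheEntries() iterating the cache in tearDown.
      try {
        for (int round = 0; round < 1000000 && bg.isAlive(); round++) {
          for (Map.Entry<Object,String> e : cache.entrySet()) {
            e.getValue();
          }
        }
        // Like the flaky test, whether it trips is timing-dependent.
        System.out.println("no CME this run");
      } catch (ConcurrentModificationException cme) {
        System.out.println("reproduced: " + cme);
      }
      bg.join();
    }
  }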

Could it be the test had actually failed and thrown an exc of its own,
but this exc masked it?
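
(That masking is baked into the JUnit 3 lifecycle -- roughly this
shape, a sketch rather than the exact LuceneTestCase code:)

  // If runTest() throws and tearDown() then also throws, Java discards
  // the in-flight exception, so only the tearDown exc gets reported.
  public void runBare() throws Throwable {
    setUp();
    try {
      runTest();   // the test's "real" failure would surface here...
    } finally {
      tearDown();  // ...but a CME thrown here replaces it entirely.
    }
  }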

Mike

On Tue, Jan 18, 2011 at 9:19 AM, Shai Erera <[email protected]> wrote:
> Hi
>
> I ran tests on 3_0 branch and hit this:
>
>     [junit] Testcase: testRankByte(org.apache.lucene.search.function.TestFieldScoreQuery): Caused an ERROR
>     [junit] null
>     [junit] java.util.ConcurrentModificationException
>     [junit]     at java.util.WeakHashMap$HashIterator.next(WeakHashMap.java:169)
>     [junit]     at org.apache.lucene.search.FieldCacheImpl.getCacheEntries(FieldCacheImpl.java:75)
>     [junit]     at org.apache.lucene.util.LuceneTestCase.assertSaneFieldCaches(LuceneTestCase.java:133)
>     [junit]     at org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:100)
>     [junit]     at org.apache.lucene.search.function.FunctionTestSetup.tearDown(FunctionTestSetup.java:86)
>     [junit]     at org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:216)
>
> I couldn't reproduce it the second time I ran it (neither the test alone
> nor the full suite), and I don't know if it applies to 3x/trunk too. I can
> dig into it later, but I'm sending this to the list in case someone wants
> to look at it before then.
>
> I see that the method is called from tearDown(), and ConcurrentModEx
> suggests someone added to the set while someone else was iterating over
> it -- could it be that the tests step on each other somehow?
>
> Shai
>
