[
https://issues.apache.org/jira/browse/LUCENE-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794819#action_12794819
]
Michael McCandless commented on LUCENE-2120:
--------------------------------------------
Thanks for running that test, John. You should probably add something
inside the if bodies, e.g., increment a count, to make sure the compiler
doesn't optimize the check away. (And print the count at the end to make
sure it's identical for the two methods.)
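A minimal sketch of that suggestion (class and method names are illustrative, not from any patch): incrementing a counter inside the if body gives the branch a side effect the JIT can't eliminate as dead code, and printing the total lets you confirm both methods agree.

```java
// Hypothetical micro-benchmark sketch: count live docs so the JIT cannot
// dead-code-eliminate the deleted-doc check being timed.
public class DeletedDocsBench {

    // Returns the number of non-deleted docs; the count is the side
    // effect that keeps the if body alive under JIT optimization.
    static long countLive(boolean[] deleted) {
        long count = 0;
        for (int doc = 0; doc < deleted.length; doc++) {
            if (!deleted[doc]) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        final int maxDoc = 1_000_000;
        boolean[] deleted = new boolean[maxDoc];
        for (int i = 0; i < maxDoc; i += 100) {  // ~1% deletions
            deleted[i] = true;
        }

        long start = System.nanoTime();
        long count = countLive(deleted);
        long elapsedNs = System.nanoTime() - start;

        // Print the count: identical totals across methods confirm they agree.
        System.out.println("live docs = " + count + " in " + elapsedNs + " ns");
    }
}
```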
I had done a similar test in LUCENE-1476, where I hacked
SegmentTermDocs to represent deleted docs as a list of int docIDs, and
then "iterated" through them, but found only a tiny performance gain at
<= 1% deleted docs. I'm not sure why I didn't see a bigger gain... I
would expect to see gains at low deletion rates.
And the performance loss at even moderate deletion rates is what
inspired opening LUCENE-1536, to explore applying filters the same way
we apply deleted docs (i.e., switch from iterator to random access).
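The two access patterns being compared can be sketched roughly like this (assumed representations for illustration, not the actual SegmentTermDocs code): random access checks each candidate doc against a bit set, while the iterator style merges against a sorted list of deleted docIDs.

```java
import java.util.BitSet;

// Illustrative sketch of random-access vs. iterator-style deleted-doc
// skipping; both count the live docs in [0, maxDoc).
public class SkipDeleted {

    // Random access: test each doc against a bit set of deletions.
    static long countRandomAccess(int maxDoc, BitSet deleted) {
        long live = 0;
        for (int doc = 0; doc < maxDoc; doc++) {
            if (!deleted.get(doc)) {
                live++;
            }
        }
        return live;
    }

    // Iterator style: walk a sorted list of deleted docIDs in parallel
    // with the doc stream, advancing the pointer on each match.
    static long countIterator(int maxDoc, int[] deletedSorted) {
        long live = 0;
        int upto = 0;
        for (int doc = 0; doc < maxDoc; doc++) {
            if (upto < deletedSorted.length && deletedSorted[upto] == doc) {
                upto++;  // skip this deleted doc
            } else {
                live++;
            }
        }
        return live;
    }
}
```

At low deletion rates the iterator touches far fewer entries than there are docs, which is why one might expect a gain there; at higher rates the extra bookkeeping per doc loses to a plain bit test.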
> Possible file handle leak in near real-time reader
> --------------------------------------------------
>
> Key: LUCENE-2120
> URL: https://issues.apache.org/jira/browse/LUCENE-2120
> Project: Lucene - Java
> Issue Type: Bug
> Components: Index
> Affects Versions: 3.1
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Fix For: 3.1
>
>
> Spinoff of LUCENE-1526: Jake/John hit file descriptor exhaustion when testing
> NRT.
> I've tried to repro this, stress testing NRT, saturating reopens, indexing,
> searching, but haven't found any issue.
> Let's try to get to the bottom of it, here...
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]