[ https://issues.apache.org/jira/browse/LUCENE-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698273#action_12698273 ]

Shai Erera commented on LUCENE-1593:
------------------------------------

This sounds like it should work, but since I'm not fully familiar with that 
code, I can't back you up :). I guess a test case will clarify it. In the 
meantime, I read the BooleanScorer and BooleanScorer2 code and came across 
several possible optimizations:
* BS.score() and score(HC) check whether coordFactors is null on every call 
(this is really only a problem for score()). I think we can init coordFactors 
in the ctor, as well as after every call to add(Scorer) - add() is not called 
during query execution, but score() is (see the coordFactors sketch below).
* Same for BS2.score() and score(HC) - initCountingSumScorer could be invoked 
eagerly in the same way, instead of being checked on every call?
* Clean up BS.add() a bit. For example, it checks 'if (prohibited)', does 
something, and then checks 'if (!prohibited)'. Maybe merge all the 
required/prohibited/both cases together?
* BS2 declares 'defaultSimilarity' and instantiates it with new 
DefaultSimilarity(). Two things here: (1) can the field be made final? (2) 
should it use Similarity.getDefault() instead (see the short sketch below)?
* BS2.SingleMatchScorer's score() method looks a bit suspicious. It checks if 
doc() >= lastScoredDoc and, if so, updates lastScoredDoc and increments 
coordinator.nrMatchers. It then calls scorer.score() regardless, so it looks 
as if this method is expected to be called several times for the same doc. 
When LUCENE-1575 is committed, I think we should wrap the input scorer with 
the new ScoreCachingWrappingScorer so that the actual score is not recomputed 
over and over. Also, doc() should be saved in a local variable instead of 
being called twice (see the ScoreCachingWrappingScorer sketch below).
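
To make the coordFactors point concrete, here is a minimal, self-contained 
sketch of moving from the lazy null-check to eager initialization. It is not 
the actual BooleanScorer code - the names (coordFactors, maxCoord, 
computeCoordFactors) and the coord formula are stand-ins:

{code}
// Stand-in for BooleanScorer's coord handling, NOT the real class. The names
// coordFactors/maxCoord/computeCoordFactors are assumed, and the coord formula
// below is just a placeholder for Similarity.coord(i, maxCoord).
class CoordSketch {
  private final int maxCoord;
  private float[] coordFactors; // today: init'd lazily, guarded by a null check in score()

  CoordSketch(int maxCoord) {
    this.maxCoord = maxCoord;
    computeCoordFactors();      // proposed: compute eagerly in the ctor ...
  }

  void add(/* Scorer scorer */) {
    // ... register the sub-scorer ...
    computeCoordFactors();      // ... and again after every add(), which is not
                                // on the per-doc scoring path
  }

  private void computeCoordFactors() {
    coordFactors = new float[maxCoord + 1];
    for (int i = 0; i <= maxCoord; i++) {
      coordFactors[i] = maxCoord == 0 ? 1.0f : (float) i / maxCoord; // placeholder coord
    }
  }

  float score(float sum, int nrMatchers) {
    return sum * coordFactors[nrMatchers]; // no "if (coordFactors == null)" per call
  }
}
{code}

The same shape applies to BS2 - initCountingSumScorer could run eagerly 
instead of being checked on every score()/score(HC) call.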
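
For the defaultSimilarity point, the change would be roughly the following 
(field name taken from the comment above; how BS2 actually declares it is an 
assumption):

{code}
// sketch: make the field final and use the configured default instead of
// hard-wiring a new DefaultSimilarity()
private final Similarity defaultSimilarity = Similarity.getDefault();
{code}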
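
And here is a rough sketch of what SingleMatchScorer.score() could look like 
once LUCENE-1575 is in (only the relevant members are shown; apart from the 
ScoreCachingWrappingScorer class itself, the names are assumed rather than 
copied from the real source):

{code}
// In the ctor, wrap the incoming scorer once so repeated score() calls for the
// same doc are served from the wrapper's cache (sketch, not the actual source):
//   this.scorer = new ScoreCachingWrappingScorer(scorer);

public float score() throws IOException {
  int d = doc();                 // call doc() once and keep it in a local
  if (d >= lastScoredDoc) {
    lastScoredDoc = d;
    coordinator.nrMatchers++;
  }
  return scorer.score();         // cached by ScoreCachingWrappingScorer when the
                                 // same doc is scored more than once
}
{code}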

> Optimizations to TopScoreDocCollector and TopFieldCollector
> -----------------------------------------------------------
>
>                 Key: LUCENE-1593
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1593
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>            Reporter: Shai Erera
>             Fix For: 2.9
>
>
> This is a spin-off of LUCENE-1575 and proposes to optimize TSDC and TFC code 
> to remove unnecessary checks. The plan is:
> # Ensure that IndexSearcher returns segments in increasing doc Id order, 
> instead of by numDocs().
> # Change TSDC and TFC's code to not use the doc id as a tie breaker. New docs 
> will always have larger ids and therefore cannot compete.
> # Pre-populate HitQueue with sentinel values in TSDC (score = Float.NEG_INF) 
> and remove the check if reusableSD == null.
> # Also move to "changing the top" in place and then calling adjustTop() when 
> we update the queue.
> # some methods in Sort explicitly add SortField.FIELD_DOC as a "tie breaker" 
> for the last SortField. But, doing so should not be necessary (since we 
> already break ties by docID), and is in fact less efficient (once the above 
> optimization is in).
> # Investigate PQ - can we deprecate insert() and have only 
> insertWithOverflow()? Add an addDummyObjects method which will populate the 
> queue without "arranging" it, i.e. just store the objects in the array (this 
> can be used to pre-populate sentinel values)?
> I will post a patch as well as some perf measurements as soon as I have them.
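
To make steps 3 and 4 of the plan above concrete, here is a rough sketch of a 
pre-populated HitQueue and the resulting collect() call (pq, pqTop, docBase, 
numHits and the surrounding collector are assumptions for illustration, not 
the final patch):

{code}
// Sketch only, not the final patch. HitQueue/ScoreDoc and PriorityQueue's
// insert()/adjustTop()/top() are existing Lucene APIs; pq, pqTop, docBase,
// scorer, numHits and the enclosing collector class are assumed.

void prePopulate(int numHits) {
  // Fill the queue with sentinels so it is always full and pq.top() is always
  // a valid entry - no "reusableSD == null" checks left in collect().
  for (int i = 0; i < numHits; i++) {
    pq.insert(new ScoreDoc(Integer.MAX_VALUE, Float.NEGATIVE_INFINITY));
  }
  pqTop = (ScoreDoc) pq.top();
}

public void collect(int doc) throws IOException {
  float score = scorer.score();
  // A real hit always beats a sentinel, and on equal scores the doc already in
  // the queue has the smaller id and wins, so a strict '>' is enough - no
  // explicit doc id tie-break needed.
  if (score > pqTop.score) {
    pqTop.doc = doc + docBase;   // "change the top" in place ...
    pqTop.score = score;
    pq.adjustTop();              // ... then restore heap order
    pqTop = (ScoreDoc) pq.top();
  }
}
{code}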

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

