[ https://issues.apache.org/jira/browse/LUCENE-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990352#comment-13990352 ]

Adrien Grand commented on LUCENE-5646:
--------------------------------------

Indeed, code path #1 effectively only happens on the first segment, because it
is unlikely that a segment ends exactly on a chunk boundary. I'm +1 to
removing it.
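
As a sketch with assumed numbers (the real flush trigger also counts buffered
bytes, not just documents, and 128 is an assumed doc-count cap):

{code}
// Illustrative only: pretend chunks flush purely on a doc-count cap of 128.
// A segment then ends on a chunk boundary only if its doc count divides evenly.
int maxDocsPerChunk = 128;
int segmentDocCount = 1000;                 // arbitrary example
boolean endsOnBoundary = segmentDocCount % maxDocsPerChunk == 0; // false here
// After one partial chunk, the writer tends to stay mid-chunk, so later
// segments rarely pass the onChunkBoundary test and #1 rarely triggers again.
{code}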

bq. NOTE: I did not do any similar inspection yet of term vectors. But IIRC 
that one has less fancy stuff in bulk merge.

Actually, they are worse: term vectors only have #1 and #3. So I think we 
should just use the default merge routine (which is what has been happening in 
practice in most cases anyway).


> stored fields bulk merging doesn't quite work right
> ---------------------------------------------------
>
>                 Key: LUCENE-5646
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5646
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Robert Muir
>             Fix For: 4.9, 5.0
>
>         Attachments: LUCENE-5646.patch
>
>
> From doing some profiling of merging:
> CompressingStoredFieldsWriter has 3 code paths (as I see it; see the sketch 
> after this list):
> 1. optimized bulk copy (no deletions in chunk). In this case compressed data 
> is copied over.
> 2. semi-optimized copy: in this case it's optimized for an existing 
> StoredFieldsWriter, and it decompresses and recompresses doc-at-a-time around 
> any deleted docs in the chunk.
> 3. ordinary merging
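> As a rough sketch (the names are my paraphrase, not the actual 
> CompressingStoredFieldsWriter code), the dispatch looks like:
> {code}
> // Paraphrased dispatch, names illustrative:
> if (matchingReader && noDeletionsInChunk && onChunkBoundary && chunkSizeOk) {
>   copyRawCompressedChunk();   // #1: bulk copy, compressed bytes untouched
> } else if (matchingReader) {
>   recompressDocAtATime();     // #2: decompress/recompress around deletions
> } else {
>   super.merge(mergeState);    // #3: ordinary field-by-field merging
> }
> {code}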
> In my dataset, I only see #2 happening, never #1. The logic for determining 
> whether we can do #1 seems to be:
> {code}
> onChunkBoundary && chunkSmallEnough && chunkLargeEnough && noDeletions
> {code}
> I think the logic for "chunkLargeEnough" is out of sync with the 
> MAX_DOCUMENTS_PER_CHUNK limit. E.g., instead of:
> {code}
> startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] >= chunkSize // chunk is large enough
> {code}
> it should be something like:
> {code}
> (it.chunkDocs >= MAX_DOCUMENTS_PER_CHUNK
>     || startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] >= chunkSize) // chunk is large enough
> {code}
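> To make the mismatch concrete with made-up numbers (I'm assuming 
> chunkSize = 16384 and MAX_DOCUMENTS_PER_CHUNK = 128 here):
> {code}
> // Hypothetical chunk that hit the doc-count cap but not the byte cap:
> int chunkDocs  = 128;                    // == MAX_DOCUMENTS_PER_CHUNK
> int chunkBytes = 2560;                   // 128 small docs, ~20 bytes each
> boolean oldCheck = chunkBytes >= 16384;  // false: bulk copy rejected
> boolean newCheck = chunkDocs >= 128
>     || chunkBytes >= 16384;              // true: bulk copy allowed
> {code}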
> But this only works "at first" and then falls out of sync in my tests. Once 
> this happens, it never reverts to the #1 algorithm and sticks with #2. So 
> it's still not quite right.
> Maybe [~jpountz] knows off the top of his head...


