Robert Muir created LUCENE-5646:
-----------------------------------
Summary: stored fields bulk merging doesn't quite work right
Key: LUCENE-5646
URL: https://issues.apache.org/jira/browse/LUCENE-5646
Project: Lucene - Core
Issue Type: Bug
Reporter: Robert Muir
Fix For: 4.9, 5.0
From doing some profiling of merging:
CompressingStoredFieldsWriter has 3 codepaths (as I see it); a rough sketch follows the list:
1. Optimized bulk copy (no deletions in the chunk): the compressed data is copied over as-is.
2. Semi-optimized copy: this is still specialized for an existing stored fields writer, but it
decompresses and recompresses doc-at-a-time around any deleted docs in the chunk.
3. Ordinary merging.
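For context, here is how I read that decision, as a standalone sketch (the class, enum, and
method names below are placeholders made up for illustration, not the actual
CompressingStoredFieldsWriter API):
{code}
// Illustrative sketch only -- models where the #1/#2/#3 decision sits,
// not the real merge loop.
class MergePathSketch {
  enum Path { BULK_COPY, DOC_AT_A_TIME, NAIVE }

  static Path choosePath(boolean matchingReader, boolean canBulkCopyChunk) {
    if (!matchingReader) {
      return Path.NAIVE;                     // #3: ordinary field-by-field merging
    }
    return canBulkCopyChunk
        ? Path.BULK_COPY                     // #1: copy the compressed chunk verbatim
        : Path.DOC_AT_A_TIME;                // #2: decompress/recompress around deleted docs
  }
}
{code}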
In my dataset, I only see #2 happening, never #1. The logic for determining if
we can do #1 seems to be:
{code}
onChunkBoundary && chunkSmallEnough && chunkLargeEnough && noDeletions
{code}
I think the logic for "chunkLargeEnough" is out of sync with the
MAX_DOCUMENTS_PER_CHUNK limit? E.g., instead of:
{code}
startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] >= chunkSize // chunk is large enough
{code}
it should be something like:
{code}
(it.chunkDocs >= MAX_DOCUMENTS_PER_CHUNK ||
 startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] >= chunkSize) // chunk is large enough
{code}
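To make the mismatch concrete, here is a tiny standalone demo of the two checks on a chunk
that was flushed because it hit the doc-count limit rather than the byte-size limit (the
16384-byte chunkSize, the 128-doc limit, and the ~100-byte docs are illustrative numbers
picked for this demo, not necessarily the shipped defaults):
{code}
// Standalone illustration, not Lucene code: a chunk flushed on the doc-count
// limit can be well under chunkSize bytes, so the byte-size-only test never passes.
public class ChunkLargeEnoughDemo {
  public static void main(String[] args) {
    final int chunkSize = 16384;       // illustrative byte threshold
    final int maxDocsPerChunk = 128;   // illustrative doc-count threshold
    final int chunkDocs = 128;         // this chunk was flushed on the doc-count limit
    final int chunkBytes = 128 * 100;  // 128 small docs of ~100 bytes = 12800 bytes

    final boolean currentCheck = chunkBytes >= chunkSize;               // false
    final boolean proposedCheck = chunkDocs >= maxDocsPerChunk
        || chunkBytes >= chunkSize;                                     // true
    System.out.println("current  'chunk is large enough': " + currentCheck);
    System.out.println("proposed 'chunk is large enough': " + proposedCheck);
  }
}
{code}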
But this only works "at first" and then falls out of sync in my tests. Once that
happens, it never reverts to the #1 algorithm and sticks with #2, so it's still
not quite right.
Maybe [~jpountz] knows off the top of his head...