[ https://issues.apache.org/jira/browse/LUCENE-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless updated LUCENE-4398:
---------------------------------------

    Attachment: LUCENE-4398.patch

Patch w/ test case and fix.

The issue doesn't affect 4.x/5.x because on flush we completely clear the slate 
/ allocate a new DWPT.  I'll commit the test case to be sure...

I added an InvertedDocConsumerPerField.close() method and implemented it in 
TermsHashPerField to account for the RAM that gets freed.
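
Roughly, the shape of the change is a close() hook on the per-field consumer 
that hands any still-charged bytes back to the RAM accounting. The sketch below 
is illustrative only; RamTracker, the class names, and the sizing are 
placeholders, not the actual patch:

    /** Sketch only: new hook so a per-field consumer can release RAM it still charges. */
    abstract class InvertedDocConsumerPerFieldSketch {
      abstract void close();
    }

    /** Sketch only: illustrative stand-in for DocumentsWriter's byte accounting. */
    interface RamTracker {
      void bytesUsed(long delta);
    }

    /** Sketch only: TermsHashPerField-style consumer that gives back its hash bytes on close. */
    class TermsHashPerFieldSketch extends InvertedDocConsumerPerFieldSketch {
      private int[] postingsHash = new int[4];          // illustrative initial size
      private final RamTracker ramTracker;

      TermsHashPerFieldSketch(RamTracker ramTracker) {
        this.ramTracker = ramTracker;
        ramTracker.bytesUsed(postingsHash.length * 4L); // charge the initial allocation
      }

      @Override
      void close() {
        // Hand back whatever the hash table is still being charged for, so the
        // writer's memUsage does not drift upward across flushes.
        ramTracker.bytesUsed(-(postingsHash.length * 4L));
        postingsHash = null;
      }
    }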
                
> "Memory Leak" in TermsHashPerField memory tracking
> --------------------------------------------------
>
>                 Key: LUCENE-4398
>                 URL: https://issues.apache.org/jira/browse/LUCENE-4398
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 3.4
>            Reporter: Tim Smith
>            Assignee: Michael McCandless
>         Attachments: LUCENE-4398.patch
>
>
> I am witnessing an apparent leak in the memory tracking used to determine 
> when a flush is necessary.
> Over time, this will result in every single document being flushed into its 
> own segment as the memUsage will remain above the configured buffer size, 
> causing a flush to be triggered after every add/update.
> Best I can figure, this is being caused by TermsHashPerField's tracking of 
> memory usage for postingsHash and/or postingsArray combined with 
> multi-threaded feeding.
> I suspect that the TermsHashPerField's postingsHash is growing in one thread, 
> then, when a segment is flushed, a single, different thread will merge all 
> TermsHashPerFields in FreqProxTermsWriter and then call shrinkHash(). I 
> suspect this call of shrinkHash() is seeing an old postingsHash array, and 
> subsequently not releasing all the memory that was allocated.
> If this is the case, I am also concerned that FreqProxTermsWriter will not 
> write the correct terms into the index, although I have not yet confirmed any 
> actual indexing problem.
> NOTE: I am witnessing this growth in a test by subtracting the amount of 
> memory allocated (but in a "free" state) by 
> perDocAllocator/byteBlockAllocator/charBlocks/intBlocks from 
> DocumentsWriter.memUsage.get() in IndexWriter.doAfterFlush().
> I will see this stay at a stable point for a while, then on some flushes I 
> will see it grow by a couple of bytes, and subsequent flushes never go back 
> down to the previous level.
> I will continue to investigate and post any additional findings.
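
For reference, a minimal sketch of the drift check described in the NOTE above; 
the names and plumbing are placeholders, not the actual Lucene 3.4 internals. 
After each flush, subtract the bytes that are allocated but sitting in the free 
pools from the writer's total accounted bytes; that net figure should return to 
a stable baseline, and if it only ever ratchets upward the per-field accounting 
is leaking:

    final class FlushRamDriftCheck {
      private long baselineNetBytes = -1;

      // Call from the same place doAfterFlush() runs, with the two totals in hand.
      void afterFlush(long totalAccountedBytes, long freePooledBytes) {
        long netBytes = totalAccountedBytes - freePooledBytes;
        if (baselineNetBytes < 0) {
          baselineNetBytes = netBytes;       // first flush sets the baseline
        } else if (netBytes > baselineNetBytes) {
          System.out.println("accounting drift: +" + (netBytes - baselineNetBytes) + " bytes");
          baselineNetBytes = netBytes;       // track the new high-water mark
        }
      }
    }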
