[
https://issues.apache.org/jira/browse/LUCENE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962951#comment-14962951
]
Uwe Schindler commented on LUCENE-6842:
---------------------------------------
Please also send the stack traces showing the OOMs. It is impossible to
diagnose the problem without more information (code, number of fields, total
index size, whether it happens during indexing or querying, ...).
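A common way to capture that information at the moment of failure, assuming a
HotSpot/OpenJDK JVM (the dump path and jar name below are placeholders):

  java -XX:+HeapDumpOnOutOfMemoryError \
       -XX:HeapDumpPath=/path/to/dumps \
       -jar your-app.jar

  jmap -histo:live <pid>    # class histogram of live objects

The heap dump can then be opened in a tool such as Eclipse MAT to see what is
actually holding the memory.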
The histogram you sent looks fine, although that many HashMap$Entry or
TreeMap$Entry instances cannot come from Lucene unless you have *millions*
of fields.
Please note that if the OOMs happen during merging of index segments, you
should upgrade Lucene, because before 5.0 merging used much more heap,
especially if you have many fields.
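For context, a minimal sketch of the kind of schema that multiplies per-field
state (class name, path, and field count are illustrative, not taken from the
report); every distinct field name adds FieldInfo and other per-field
structures that are carried through indexing and merging:

  import java.io.File;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.document.StringField;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Version;

  public class ManyFieldsSketch {
    public static void main(String[] args) throws Exception {
      FSDirectory dir = FSDirectory.open(new File("/tmp/many-fields-index"));
      IndexWriterConfig cfg = new IndexWriterConfig(
          Version.LUCENE_46, new StandardAnalyzer(Version.LUCENE_46));
      IndexWriter writer = new IndexWriter(dir, cfg);
      Document doc = new Document();
      // Each distinct field name creates its own FieldInfo and per-field
      // index structures; with thousands of names this state adds up in
      // every segment and again at merge time.
      for (int i = 0; i < 10000; i++) {
        doc.add(new StringField("field_" + i, "value", Field.Store.NO));
      }
      writer.addDocument(doc);
      writer.close();
      dir.close();
    }
  }

Collapsing such dynamic names into a smaller, fixed set of fields (for
example by encoding the key into the term value) is the usual way to keep
the per-field overhead bounded.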
> No way to limit the fields cached in memory, leading to OOM when there are
> thousands of fields
> ----------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-6842
> URL: https://issues.apache.org/jira/browse/LUCENE-6842
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/search
> Affects Versions: 4.6.1
> Environment: Linux, openjdk 1.6.x
> Reporter: Bala Kolla
> Attachments: HistogramOfHeapUsage.png
>
>
> I am opening this defect to get some guidance on how to handle a case of a
> server running out of memory; it seems to be related to how we index. We
> want to know if there is any way to reduce the memory impact before we look
> into reducing the number of fields. Basically, we have many thousands of
> fields being indexed, which causes a large amount of memory to be used
> (25GB) and eventually leads the application to hang, forcing us to restart
> every few minutes.