[
https://issues.apache.org/jira/browse/LUCENE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963098#comment-14963098
]
Michael McCandless commented on LUCENE-6842:
--------------------------------------------
Given how much heap is used by FST and FST$Arc I think you likely do have many,
many unique indexed fields?
Each unique indexed field in Lucene requires a few hundred bytes of heap, and
while there's been some improvement recently to lower that, it's only by a bit.
You'll have to re-work how you use Lucene to not require so many unique fields
...
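A common rework for this situation (an assumption on my part, not something prescribed in this issue) is to collapse many sparse logical fields into a single physical indexed field by encoding the logical field name into each term. Lucene then builds one terms-index FST for the shared field instead of one per unique field. A minimal sketch of just the term-encoding scheme, with a hypothetical shared field name `attrs` and a `'\u0000'` separator (both assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class FieldCollapse {
    // Hypothetical name of the single shared physical field (assumption).
    static final String SHARED_FIELD = "attrs";

    // Encode a (logicalField, value) pair as one term string, so all
    // logical fields share a single indexed field and a single FST.
    // '\u0000' is chosen as a separator unlikely to occur in values.
    static String encodeTerm(String logicalField, String value) {
        return logicalField + '\u0000' + value;
    }

    public static void main(String[] args) {
        // Instead of indexing separate fields "color" and "size",
        // index both as terms of the one shared field:
        List<String> terms = new ArrayList<>();
        terms.add(encodeTerm("color", "red"));
        terms.add(encodeTerm("size", "xl"));
        System.out.println(terms.size() + " terms in field " + SHARED_FIELD);
    }
}
```

A query against a logical field then becomes a term query on the shared field with the encoded term (e.g. `color` + `'\u0000'` + `red`), which keeps per-field heap overhead bounded regardless of how many logical fields exist.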
> No way to limit the fields cached in memory, leading to OOM when there are
> thousands of fields
> ----------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-6842
> URL: https://issues.apache.org/jira/browse/LUCENE-6842
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/search
> Affects Versions: 4.6.1
> Environment: Linux, openjdk 1.6.x
> Reporter: Bala Kolla
> Attachments: HistogramOfHeapUsage.png
>
>
> I am opening this defect to get some guidance on how to handle a case of the
> server running out of memory; it seems to be related to how we index. But we
> want to know if there is any way to reduce the impact of this on memory usage
> before we look into reducing the number of fields.
> Basically we have many thousands of fields being indexed, and this causes a
> large amount of memory to be used (25GB), eventually leading the application
> to hang and forcing us to restart every few minutes.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)