[ https://issues.apache.org/jira/browse/LUCENE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969567#comment-14969567 ]

David Smiley commented on LUCENE-6842:
--------------------------------------

FWIW, one of my clients has ~94 thousand fields and it wasn't a problem. That
was on some mid-to-late Solr 4.x version. Solr's schema browser became
unusable, though ;-)

> No way to limit the fields cached in memory, leading to OOM when there are
> thousands of fields
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-6842
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6842
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: core/search
>    Affects Versions: 4.6.1
>         Environment: Linux, openjdk 1.6.x
>            Reporter: Bala Kolla
>         Attachments: HistogramOfHeapUsage.png
>
>
> I am opening this defect to get some guidance on how to handle a case of a
> server running out of memory; it seems to be related to how we index. We
> want to know if there is any way to reduce the memory impact before we look
> into reducing the number of fields. Basically, we have many thousands of
> fields being indexed, which causes a large amount of memory to be used
> (25 GB), eventually causing the application to hang and forcing us to
> restart every few minutes.
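
A minimal sketch of how an index can end up with thousands of distinct field
names (the "attr_" prefix and the record map below are hypothetical, not from
the report); Lucene keeps per-field metadata for every unique field name, so
memory grows with the number of distinct fields rather than with documents
alone:

    import java.io.IOException;
    import java.util.Map;

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.index.IndexWriter;

    /** Sketch of an indexing pattern that creates one Lucene field per
     *  user-defined key. Lucene tracks per-field metadata (FieldInfos,
     *  norms, etc.) for each distinct field name, so a corpus with
     *  thousands of unique keys yields thousands of in-memory entries. */
    public class DynamicFieldIndexer {
        static void indexRecord(IndexWriter writer, Map<String, String> record)
                throws IOException {
            Document doc = new Document();
            for (Map.Entry<String, String> e : record.entrySet()) {
                // Each distinct key becomes a distinct field name, e.g.
                // "attr_color", "attr_size", ... across the whole index.
                doc.add(new StringField("attr_" + e.getKey(), e.getValue(),
                                        Field.Store.NO));
            }
            writer.addDocument(doc);
        }
    }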


