[ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16536456#comment-16536456
 ] 

changchun huang edited comment on LUCENE-8389 at 7/8/18 11:46 PM:
------------------------------------------------------------------

Thanks for the quick reply.

To be clear, I am not talking about the Java heap.

When we trigger a background re-index from Jira, we can see physical memory 
being reserved during the re-indexing, which I believe is caused by Lucene. The 
server has a 16 GB heap and 64 GB of physical memory, and all of the free 
physical memory gets reserved during the re-index (a Jira background re-index, 
single thread).

The problem is that we cannot set a memory limit for Lucene alone. In the 
typical setup, Lucene is not a standalone application; it is embedded in a Java 
application. So on a heavily loaded Java application server where performance 
and downtime really matter, a re-index with only a single thread still reserves 
all of the remaining free physical memory, and this conflicts with the Java 
application even when we configure the same Xms and Xmx.
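
For illustration, this is roughly what the embedding looks like from Lucene's 
side (a minimal sketch against the Lucene 3.x API; the index path and the 
direct use of MMapDirectory are my assumptions, Jira's own code may select the 
directory implementation differently):

    import java.io.File;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.MMapDirectory;

    public class MMapExample {
        public static void main(String[] args) throws Exception {
            // MMapDirectory maps the index files into virtual memory; the pages
            // that get touched show up as resident/cached physical memory at the
            // OS level, outside the Java heap, so -Xmx does not bound them.
            Directory dir = new MMapDirectory(new File("/path/to/jira/index")); // hypothetical path
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            // ... run searches here ...
            searcher.close();
            reader.close();
            dir.close();
        }
    }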

So I am asking for help, such as a workaround or a suggestion. We run Java 1.8 
with G1GC; there is no OOME, but during the re-index the frequency of "GC pause 
(G1 Evacuation Pause) (young) (to-space exhausted)" increased a lot, and during 
that time we had performance issues.
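
For reference, the JVM is started roughly as follows (the exact flags are an 
assumption based on the 16 GB heap, Java 1.8 and G1GC mentioned above; the GC 
logging flags are simply how we capture the pauses quoted above):

    -Xms16g -Xmx16g -XX:+UseG1GC -Xloggc:/path/to/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps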


> Could not limit Lucene's memory consumption
> -------------------------------------------
>
>                 Key: LUCENE-8389
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8389
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: core/index
>    Affects Versions: 3.3
>         Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |Database JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true&characterEncoding=UTF8&sessionVariables=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |Database driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> [email protected] )|
> |Version|7.6.1|
>            Reporter: changchun huang
>            Assignee: Uwe Schindler
>            Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1.
> We configured a 16 GB Jira heap on a 64 GB server.
> However, every time we run a background re-index, memory gets exhausted by 
> Lucene and we cannot limit its memory consumption.
> This will definitely cause an overall performance issue on a system under 
> heavy load.
> We have around 500 concurrent users and 400K issues.
> Could you please advise whether there is a workaround or fix for this?
> Thanks.
>  
> BTW: I looked around quite a bit and found a blog post describing the new 
> behavior of Lucene 3.3:
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  


