Hi Adrien,


I don't open readers directly; they are managed by a SearcherManager.
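
For reference, the pattern we follow, as a minimal sketch (the IndexWriter and the wrapper class are hypothetical stand-ins for our real code):

    import java.io.IOException;

    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.SearcherFactory;
    import org.apache.lucene.search.SearcherManager;

    class LuceneIndexSketch {
        // One SearcherManager per index; it owns the underlying readers.
        private final SearcherManager manager;

        LuceneIndexSketch(IndexWriter writer) throws IOException {
            this.manager = new SearcherManager(writer, new SearcherFactory());
        }

        void search() throws IOException {
            IndexSearcher searcher = manager.acquire();
            try {
                // ... run queries against "searcher" ...
            } finally {
                // Pair every acquire() with a release(); the manager
                // closes readers once all references are released.
                manager.release(searcher);
            }
        }
    }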

---- On Mon, 03 Jun 2019 16:32:06 +0800 Adrien Grand <jpou...@gmail.com> wrote ----

It looks like you are leaking readers. 
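
Every acquire() needs a matching release(). A hypothetical sketch of the leak (manager and query stand in for your code):

    IndexSearcher searcher = manager.acquire();
    TopDocs hits = searcher.search(query, 10);
    // ... use hits ...
    // BUG: manager.release(searcher) is never called. The reader stays
    // referenced, its segments never close, the listeners registered in
    // coreClosedListeners never fire, and whatever they retain (cached
    // query results, for instance) accumulates on the heap.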
 
On Mon, Jun 3, 2019 at 9:46 AM alex stark <alex.st...@zoho.com.invalid> wrote:
> 
> Hi experts, 
> 
> 
> 
> I've recently been running into memory problems with Lucene. Checking a heap
> dump, most of the heap is occupied by SegmentCoreReaders.coreClosedListeners,
> which accounts for nearly half of the total.
> 
> ========Dominator Tree====================
> 
> num  retain size (bytes)  percent  percent (live)  class name
> ----------------------------------------
> 0  14,024,859,136  21.76%  28.23%  com.elesearch.activity.core.engine.lucene.LuceneIndex
>    |
>    10,259,490,504  15.92%  20.65%  --org.apache.lucene.index.SegmentCoreReaders
>    |
>    10,258,783,280  15.92%  20.65%    --[field coreClosedListeners] java.util.Collections$SynchronizedSet
>    |
>    10,258,783,248  15.92%  20.65%      --[field c] java.util.LinkedHashSet
>    |
>    10,258,783,224  15.92%  20.65%        --[field map] java.util.LinkedHashMap
> ----------------------------------------
> 1  11,865,993,448  18.41%  23.89%  com.elesearch.activity.core.engine.lucene.LuceneIndex
> ----------------------------------------
> 2  11,815,171,240  18.33%  23.79%  com.elesearch.activity.core.engine.lucene.LuceneIndex
> ----------------------------------------
> 3  6,504,382,648  10.09%  13.09%  com.elesearch.activity.core.engine.lucene.LuceneIndex
>    |
>    5,050,933,760  7.84%  10.17%  --org.apache.lucene.index.SegmentCoreReaders
>    |
>    5,050,256,008  7.84%  10.17%    --[field coreClosedListeners] java.util.Collections$SynchronizedSet
>    |
>    5,050,255,976  7.84%  10.17%      --[field c] java.util.LinkedHashSet
>    |
>    5,050,255,952  7.84%  10.17%        --[field map] java.util.LinkedHashMap
> ----------------------------------------
> 4  2,798,684,240  4.34%  5.63%  com.elesearch.activity.core.engine.lucene.LuceneIndex
> 
> 
> 
> ========thread stack==================== 
> 
> 
> 
> ========histogram====================
> 
> num  instances   #bytes          percent  class name
> ----------------------------------------
> 0    497,527     38,955,989,888  60.44%   long[]
> 1    18,489,470  7,355,741,784   11.41%   short[]
> 2    18,680,799  3,903,937,088   6.06%    byte[]
> 3    35,643,993  3,775,822,640   5.86%    char[]
> 4    4,017,462   1,851,518,792   2.87%    int[]
> 5    7,788,280   962,103,784     1.49%    java.lang.Object[]
> 6    5,256,391   618,467,640     0.96%    java.lang.String[]
> 7    14,974,224  479,175,168     0.74%    java.lang.String
> 8    9,585,494   460,103,712     0.71%    java.util.HashMap$Node
> 9    18,133,885  435,213,240     0.68%    org.apache.lucene.util.RoaringDocIdSet$ShortArrayDocIdSet
> 10   1,559,661   351,465,624     0.55%    java.util.HashMap$Node[]
> 11   4,132,738   264,495,232     0.41%    java.util.HashMap
> 12   1,519,178   243,068,480     0.38%    java.lang.reflect.Method
> 13   4,068,400   195,283,200     0.30%    com.sun.org.apache.xerces.internal.xni.QName
> 14   1,181,106   183,932,704     0.29%    org.apache.lucene.search.DocIdSet[]
> 15   5,721,339   183,082,848     0.28%    java.lang.StringBuilder
> 16   1,515,804   181,896,480     0.28%    java.lang.reflect.Field
> 17   348,720     134,652,416     0.21%    com.sun.org.apache.xerces.internal.xni.QName[]
> 18   3,358,251   134,330,040     0.21%    java.util.ArrayList
> 19   2,775,517   88,816,544      0.14%    org.apache.lucene.util.BytesRef
> 
> total  232,140,701  64,452,630,104
> 
> 
> We used LRUQueryCache with maxSize = 1000 and maxRamBytesUsed = 64 MB.
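> 
> For reference, a minimal sketch of how it is wired up (standard Lucene API; the variable name is hypothetical):
> 
>     import org.apache.lucene.search.IndexSearcher;
>     import org.apache.lucene.search.LRUQueryCache;
> 
>     // Cache up to 1000 queries, bounded to 64 MB of heap.
>     LRUQueryCache queryCache = new LRUQueryCache(1000, 64 * 1024 * 1024);
>     // Install it as the default cache for all IndexSearcher instances.
>     IndexSearcher.setDefaultQueryCache(queryCache);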
> 
> 
> 
> The coreClosedListeners set occupies far more heap than I expected. Is there
> any reason for that?
 
 
 
-- 
Adrien
