On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
<bernd.fehl...@uni-bielefeld.de> wrote:
> I have triggered GC in different situations and tried back and forth.
> Yes, it reduces the used heap memory, but not by 5GB.
> Even though the GC triggered from jconsole (or jvisualvm) is a "Full GC".

Whatever "Full GC" means ;-)
In the past at least, I've found that I had to hit "Full GC" from
jconsole many times in a row until heap usage stabilized at its
lowest point.
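
If you'd rather force a collection from the command line than click
through jconsole, jmap's live-histogram option triggers a full GC as a
side effect (substitute your Solr process id for <pid>):

    jmap -histo:live <pid>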

You could check the fieldCache and fieldValueCache stats to see how
many entries there are before and after the memory bump.
If that doesn't show any difference, I guess you may need to
resort to a heap dump before and after.
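
A quick way to eyeball those cache entry counts is the admin stats
page, something like this (host/port are just examples, adjust to
your setup):

    curl "http://localhost:8983/solr/admin/stats.jsp" | grep -i -A3 fieldcache

And for the before/after heap dumps (note that the :live option itself
forces a full GC first):

    jmap -dump:live,format=b,file=before.hprof <pid>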

> But while you bring GC into this, there is another interesting thing.
> - I have one slave that, after running for a week, ends up at around 18 to
> 20GB of heap memory.
> - the slave goes offline for replication (no user queries on this slave)
> - the slave gets replicated and starts a new searcher
> - the heap memory of the slave is still around 11 to 12GB
> - then I initiate a Full GC from jconsole, which brings it down to about 8GB
> - then I call optimize (on an already optimized index) and it drops to
> 6.5GB, like a freshly started system
>
>
> I have already looked through Uwe's blog, but he says "...As a rule of
> thumb: Don’t use more than 1/4 of your physical memory as heap space for
> Java running Lucene/Solr,..."
> On my server that would be 8GB for the JVM heap; I can't believe that the
> system will run for longer than 10 minutes with an 8GB heap.

As you probably know, it depends hugely on the use cases/queries: some
configurations would be fine with a small amount of heap, while
configurations that facet and sort on tons of different fields would
not be.
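
As a made-up example (field names invented), a query like

    http://localhost:8983/solr/select?q=*:*&sort=price+desc&facet=true&facet.field=category&facet.field=author

populates a FieldCache entry per sort field (and per single-valued
facet field), each sized roughly with the number of documents;
multi-valued facet fields go through the fieldValueCache instead. So
heap use grows with every additional field you sort or facet on.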


-Yonik
http://lucidworks.com
