Same config? Do a diff against the new example config and see which settings
are different; some defaults may have changed. Read the comments in the new
config.
If you have just taken or merged the new config, then I would suggest making
sure that the update log is not enabled (or, if it is, make sure you do hard
commits relatively frequently rather than only soft commits).
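For reference, the relevant piece of solrconfig.xml looks roughly like this
(element names as in the stock 4.x example config; the values are
illustrative):

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- The transaction log; remove or comment out this element to disable it. -->
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <!-- A periodic hard commit truncates the log so it cannot grow without
       bound; openSearcher=false keeps the commit cheap. -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>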
-- Jack Krupansky
-----Original Message-----
From: Marc Des Garets
Sent: Thursday, April 11, 2013 3:07 AM
To: solr-user@lucene.apache.org
Subject: Re: migration solr 3.5 to 4.1 - JVM GC problems
Big heap because of a very large number of requests across more than 60
indexes and hundreds of millions of documents (all indexes together). My
problem is with Solr 4.1; everything is fine with 3.5, where I get 0.05-second
GCs every 1 or 2 minutes and 20GB of the heap is used.
With the 4.1 indexes it uses 30-33GB, the survivor space behaves strangely
(its capacity dropped to 6MB at some point), and I get 2-second GCs every
minute.
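To pin down what the collector is actually doing, it may help to enable GC
logging; these are standard HotSpot flags on Java 6/7, and the log path is
just an example:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/solr/gc.log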
There must be something that changed between 3.5 and 4.1 to cause this
behavior. It's the same requests, same schemas (except that 4 fields changed
from sint to tint) and same config.
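For context, sint is the old solr.SortableIntField and tint is its Trie-based
replacement in 4.x; the schema.xml change looks roughly like this (attribute
values taken from the example schema, so treat them as illustrative):

<!-- 3.5 schema: sortable int, deprecated in 4.x -->
<fieldType name="sint" class="solr.SortableIntField"
           sortMissingLast="true" omitNorms="true"/>

<!-- 4.1 schema: trie int; precisionStep="8" indexes extra terms
     to speed up range queries -->
<fieldType name="tint" class="solr.TrieIntField"
           precisionStep="8" positionIncrementGap="0"/>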
On 04/10/2013 07:38 PM, Shawn Heisey wrote:
On 4/10/2013 9:48 AM, Marc Des Garets wrote:
The JVM behavior is now radically different and doesn't seem to make
sense. I was using ConcMarkSweepGC. I am now trying the G1 collector.
The perm gen went from 410MB to 600MB.
The eden space usage is a lot bigger and the survivor space usage is
100% all the time.
I don't really understand what is happening. GC behavior really doesn't
seem right.
My jvm settings:
-d64 -server -Xms40g -Xmx40g -XX:+UseG1GC -XX:NewRatio=1
-XX:SurvivorRatio=3 -XX:PermSize=728m -XX:MaxPermSize=728m
As Otis has already asked, why do you have a 40GB heap? The only way I
can imagine that you would actually NEED a heap that big is if your
index size is measured in hundreds of gigabytes. If you really do need
a heap that big, you will probably need to go with a JVM like Zing. I
don't know how much Zing costs, but they claim to be able to make any
heap size perform well under any load. It is Linux-only.
I was running into extreme problems with GC pauses with my own setup,
and that was only with an 8GB heap. I was using the CMS collector and
NewRatio=1. Switching to G1 didn't help at all - it might have even
made the problem worse. I never did try the Zing JVM.
After a lot of experimentation (which I will admit was not done very
methodically) I found JVM options that have reduced the GC pause problem
greatly. Below is what I am using now on Solr 4.2.1 with a total
per-server index size of about 45GB. This works properly on CentOS 6
with Oracle Java 7u17; UseLargePages may require special kernel tuning
on other operating systems:
-Xmx6144M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75
-XX:NewRatio=3 -XX:MaxTenuringThreshold=8 -XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled -XX:+UseLargePages -XX:+AggressiveOpts
These options could probably use further tuning, but I haven't had time
for the kind of testing that will be required.
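If you want to try these, they go straight on the java command line used to
start Solr; with the stock example Jetty that would be something like this
(the install path is just an example):

cd /opt/solr/example
java -Xmx6144M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
     -XX:NewRatio=3 -XX:MaxTenuringThreshold=8 -XX:+CMSParallelRemarkEnabled \
     -XX:+ParallelRefProcEnabled -XX:+UseLargePages -XX:+AggressiveOpts \
     -jar start.jar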
If you decide to pay someone to make the problem go away instead:
http://www.azulsystems.com/products/zing/whatisit
Thanks,
Shawn