Helpful new JVM parameters

2011-03-17 Thread Dyer, James
We're on the final stretch in getting our product database into Production with 
Solr.  We have 13m wide-ish records with quite a few stored fields in a 
single index (no shards).  We sort on at least a dozen fields and facet on 
20-30.  One thing that came up in QA testing is that we were getting full GCs due 
to "promotion failed" conditions.  This led us to believe we were dealing with 
large objects being created and a fragmented old generation.  After improving, 
but not solving, the problem by tweaking conventional JVM parameters, our JVM 
expert learned about some newer tuning params included in Sun/Oracle's JDK 
1.6.0_24 (we're running RHEL x64, but I think these are available on other 
platforms too):

These 3 options dramatically reduced the number of objects getting promoted into the 
Old Gen, reducing fragmentation and CMS frequency and time:
-XX:+UseStringCache
-XX:+OptimizeStringConcat
-XX:+UseCompressedStrings
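None of these require code changes; they just change how the JVM handles 
ordinary String work.  As a rough illustration (the class and names here are 
hypothetical, not from our codebase), this is the kind of String-building that 
happens constantly when returning stored fields and facet counts:

// Hypothetical illustration only -- no code changes are needed for
// these flags to apply.
public class FacetLabel {
    // -XX:+OptimizeStringConcat lets the JIT collapse the StringBuilder
    // chain javac generates for the '+' operators below, skipping
    // intermediate char[] copies that add to allocation pressure.
    // -XX:+UseCompressedStrings backs ASCII-only Strings with byte[]
    // instead of char[], roughly halving their heap footprint.
    static String label(String field, String value, int count) {
        return field + ":" + value + " (" + count + ")";
    }

    public static void main(String[] args) {
        System.out.println(label("format", "Paperback", 12345));
    }
}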

This uses compressed pointers on a 64-bit JVM, significantly reducing the 
memory and performance penalty of using a 64-bit JVM over 32-bit.  This reduced 
our new GC (ParNew) time significantly:
-XX:+UseCompressedOops
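If you want to confirm the flag actually took effect on your build, something 
like this should work (as far as we know, PrintFlagsFinal is available as a 
product flag from around 1.6.0_21):
java -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version | grep UseCompressedOops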

The default for this was sometimes causing CMS to begin too late.  (The 
documented 68% proved false in our case; we figured it was defaulting closer 
to 90%.)  Much lower than 75%, though, and CMS ran far too often:
-XX:CMSInitiatingOccupancyFraction=75
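One related note: as we understand it, HotSpot only treats this fraction as a 
firm trigger for the first cycle or two and then falls back to its own adaptive 
estimate, which may be why the observed trigger drifted toward 90% for us.  If 
you want the fraction honored on every cycle, pairing it with the flag below 
should do that -- we mention it as an option, not something we've benchmarked:
-XX:+UseCMSInitiatingOccupancyOnly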

This made the stop-the-world pauses during CMS much shorter:
-XX:+CMSParallelRemarkEnabled

We use these in conjunction with CMS/ParNew and a 22GB heap (64GB total on the 
box), with a 1.2GB NewSize/MaxNewSize.
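
Put together, a startup line along these lines gives the general shape of it 
(this is a sketch, not our literal command -- the start jar, paths, and exact 
sizes will vary with your install):

java -server -Xms22g -Xmx22g \
     -XX:NewSize=1200m -XX:MaxNewSize=1200m \
     -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -XX:+UseCompressedOops \
     -XX:+UseStringCache -XX:+OptimizeStringConcat -XX:+UseCompressedStrings \
     -XX:CMSInitiatingOccupancyFraction=75 \
     -XX:+CMSParallelRemarkEnabled \
     -jar start.jar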

In case anyone else is having similar issues, we thought we would share our 
experience with these newer options.
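
A footnote on spotting this in the first place: standard HotSpot GC logging 
will show the "promotion failed" events mentioned above in the log.  The flags 
below are a typical 1.6 setup rather than necessarily our exact one, and the 
log path is just an example:
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Xloggc:/var/log/solr/gc.log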

James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311



Re: Helpful new JVM parameters

2011-03-17 Thread Jonathan Rochkind
Awesome, very helpful. Do you maybe want to add this to the Solr wiki 
somewhere?  Finding some advice for JVM tuning for Solr can be 
challenging, and you've explained what you did and why very well.






Re: Helpful new JVM parameters

2011-03-17 Thread Li Li
Will UseCompressedOops be useful?  For an application using less than 4GB of
memory it will be better than 64-bit references, but for an application using
more memory it will not be cache friendly.
"JRockit: The Definitive Guide" says: "Naturally, 64 GB isn't a
theoretical limit but just an example. It was mentioned because
compressed references on 64-GB heaps have proven beneficial compared
to full 64-bit pointers in some benchmarks and applications. What
really matters is how many bits can be spared and the performance
benefit of this approach. In some cases, it might just be easier to
use full length 64-bit pointers."
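For HotSpot specifically, my understanding is that a compressed oop is a
32-bit offset scaled by the 8-byte object alignment, so it can only address
about 2^32 * 8 bytes = 32 GB of heap in any case; above that the JVM has to
fall back to full 64-bit pointers anyway.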





RE: Helpful new JVM parameters

2011-03-17 Thread Dyer, James
Our tests showed that, in our situation, the compressed oops flag caused our minor 
(ParNew) generation time to decrease significantly.  We're using a larger heap 
(22GB) and our index size is somewhere in the 40s of GB total.  I guess with any 
of these JVM parameters, it all depends on your situation and you need to test. 
In our case, this flag solved a real problem we were having.  Whoever wrote 
the JRockit book you refer to no doubt had other scenarios in mind...

James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311

