Several of our search engines use pretty large heaps (12-24GB). That means
that if they *ever* do a full collection, disaster ensues because it can
take so long.
As a result, we have to rely on the concurrent collectors as much as
possible and make sure that they collect all of the ephemeral garbage. One
server, for instance, uses the following Java options:
These options give us lots of detail about what is happening in the
collections. Most importantly, we need to see that the tenuring
distribution never has a significant tail of objects surviving into the
tenured space, since filling that space is what would eventually force a
full collection. This is pretty safe in general because our servers either
create objects to respond to a single request or create cached items that
survive essentially forever.
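The exact option list from the original message did not survive, but the
kind of tenuring-distribution visibility described above comes from the
standard HotSpot GC-logging flags of that era. A sketch only (the log path
is a placeholder, not the author's actual configuration):

```shell
# Illustrative GC-logging flags for a pre-Java-9 HotSpot JVM.
# Not the original option list, which was elided from this message.
java -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -XX:+PrintTenuringDistribution \
     -Xloggc:/var/log/searcher-gc.log \
     -jar server.jar
```

PrintTenuringDistribution is the flag that shows, at each young
collection, how many bytes of objects have survived 1, 2, 3, ... copies.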
Concurrent collectors are critical. We use the HBase recommendations here.
The max tenuring threshold is related to what we saw in the tenuring
distribution. We very rarely see any objects survive 4 collections, so we
set the threshold so that an object would have to survive two more
collections than that in order to become tenured. The survivor ratio is
related to this and is set based on recommendations for non-stop,
low-latency servers.
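Since the message's flag list was lost, here is a hedged sketch of flags
matching that description. The threshold of 6 follows from the arithmetic
above (4 observed + 2 margin); the survivor ratio value is my illustrative
assumption, not a confirmed setting:

```shell
# Sketch only: CMS old generation with the parallel young collector,
# tenuring threshold of 4+2=6 collections, and an assumed survivor ratio.
java -XX:+UseConcMarkSweepGC \
     -XX:+UseParNewGC \
     -XX:MaxTenuringThreshold=6 \
     -XX:SurvivorRatio=8 \
     -jar server.jar
```

With MaxTenuringThreshold=6, an object is copied between survivor spaces
up to six times before it is promoted, which matches "two more collections
than we ever see objects survive."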
CMS collections can be triggered in a couple of different ways. We limit
it to a single trigger to make the world simpler. Again, this is taken
from outside recommendations from the HBase folks and other commenters on
the web.
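The standard way to pin CMS to a single trigger is a fixed occupancy
threshold. Again a sketch, and the percentage here is an assumed example
value rather than the one from the original message:

```shell
# Start a CMS cycle only when the old generation reaches a fixed
# occupancy, and disable the JVM's adaptive triggering heuristics so
# that this is the single trigger.
java -XX:CMSInitiatingOccupancyFraction=70 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar server.jar
```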
I doubt that these are important. It is always nice to get more
information, and I want to avoid any possibility of some library
triggering a huge full collection.
If the parallel GC needs horsepower, I want it to get it.
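The elided flags behind those two remarks are unknown; the following are
my guesses at standard HotSpot options that fit them, with the thread
count as an assumed example value:

```shell
# Guard against a library-initiated full GC (e.g. a stray System.gc())
# and let the parallel collector use plenty of threads when it runs.
# Both guesses at what the original flags were, not confirmed settings.
java -XX:+DisableExplicitGC \
     -XX:ParallelGCThreads=8 \
     -jar server.jar
```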
Very rarely useful, but a royal pain if not installed. I don't know if it
has a performance impact (I think not).
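That description fits something like a heap-dump-on-OOM setting, though
the actual flag was elided from the message; purely as a guess:

```shell
# Hypothetical match for "rarely useful, but a royal pain if not
# installed": write a heap dump if the JVM ever runs out of memory.
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/heapdump.hprof \
     -jar server.jar
```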
Setting the minimum heap helps avoid full GC's during the early life of
the server.
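In practice that usually means setting the minimum heap equal to the
maximum so the heap never has to grow after startup. The sizes here are
illustrative, picked from the 12-24GB range mentioned at the top of this
message:

```shell
# Fix the heap size at startup so no collections are forced while the
# heap grows toward its maximum. Sizes are illustrative.
java -Xms12g -Xmx12g -jar server.jar
```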
On Tue, Nov 10, 2009 at 11:27 AM, Patrick Hunt <ph...@apache.org> wrote:
> Can you elaborate on "gc tuning" - you are using the incremental collector?
> Ted Dunning wrote:
>> The server side is a fairly standard (but old) config:
>> Most of our clients now use 5 seconds as the timeout, but I think that we
>> went to longer timeouts in the past. Without digging in to determine the
>> truth of the matter, my guess is that we needed the longer timeouts before
>> we tuned the GC parameters, and that after tuning GC we were able to move
>> to a more reasonable timeout. In retrospect, I think that we blamed EC2
>> for some of our own GC misconfiguration.
>> I would not use our configuration here as canonical since we didn't apply
>> a whole lot of brainpower to this problem.
>> On Tue, Nov 10, 2009 at 9:29 AM, Patrick Hunt <ph...@apache.org> wrote:
>>> Ted, could you provide your configuration information for the cluster
>>> (also the client timeout you use)? If you're willing, I'd be happy to put
>>> this on the wiki for others interested in running in EC2.
Ted Dunning, CTO