I think you might be talking about generational garbage collection
(-Xgcpolicy:gencon).
It's not the default in WAS 6 (not sure about 7).
Our biggest app sees much better throughput and less CPU usage using it.
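
For anyone who wants to try it, it goes in the generic JVM arguments for the
server. A minimal example (the heap and nursery sizes below are made up;
size them to your own workload):

    -Xgcpolicy:gencon -Xms512m -Xmx512m -Xmn128m

-Xmn sets the nursery size; the rest of the heap becomes the tenured area.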



Marcy 

marcy.d.cor...@wellsfargo.com
"This message may contain confidential and/or privileged information. If you 
are not the addressee or authorized to receive this for the addressee, you must 
not use, copy, disclose, or take any action based on this message or any 
information herein. If you have received this message in error, please advise 
the sender immediately by reply e-mail and delete this message. Thank you for 
your cooperation."


-----Original Message-----
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of Rob van 
der Heij
Sent: Thursday, June 24, 2010 1:44 AM
To: LINUX-390@vm.marist.edu
Subject: Re: [LINUX-390] Memory use question

On Thu, Jun 24, 2010 at 9:59 AM, Rodger Donaldson
<rodg...@diaspora.gen.nz> wrote:

> Well, bearing in mind both the Sun and IBM JVMs default to memory
> settings that both IBM and Sun say are crap for app servers (e.g.
> neither using their 1.4 or later GC algorithms by default), I'm not sure
> I'd place too much stock in their defaults.

It's my understanding that the WAS properties (or such) override the
setting. I guess I'll have to look into the typical settings then...
One of the design choices with the JVM is that memory management data
(counters, pointers, etc.) lives within the objects being managed. This
is not entirely fortunate in a virtualized environment, because it means
that pages holding old allocated objects are referenced during a scan and
thus paged back in; with small enough objects, a GC scan needs the entire
heap resident. I thought modern GC strategies identified generations of
data in the heap, which would allow old data to be skipped in most
scans.
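
A minimal Java sketch of the allocation pattern in question (the names are
hypothetical): with a generational collector, the short-lived per-request
objects die in the nursery, and minor collections never touch the pages
holding the old data, so z/VM does not have to page them back in on every
scan.

    import java.util.ArrayList;
    import java.util.List;

    public class GenerationDemo {
        // Allocated once at startup; promoted to the old generation after
        // surviving a few collections.
        static final List<byte[]> oldData = new ArrayList<byte[]>();

        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                oldData.add(new byte[4096]);    // long-lived: tenured, rarely scanned
            }
            for (int i = 0; i < 1000000; i++) {
                byte[] scratch = new byte[256]; // short-lived: dies in the nursery
                scratch[0] = 1;                 // keep the allocation live briefly
            }
        }
    }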

> Our experience, both on standalone Solaris boxes using the Sun JVM, and
> on our zLinux guests using the IBM JVM, has been that large min/max
> separation has produced poor results, as the JVM tends to consistently
> try to push memory use back down to the min setting, often invoking
> unnecessary GCs to do so.

Memory has to come from somewhere. Whether a GC was unnecessary is
hard to tell, especially in a virtualized environment. While it is
true that GC takes valuable resources, so does any (memory) management.
The mere fact that you can't identify the cost does not mean it comes
for free...
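
One way to at least put a number on the GC side of it, using the standard
java.lang.management API (available since Java 5):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcCost {
        public static void main(String[] args) {
            // Cumulative count and wall-clock time spent in each collector
            for (GarbageCollectorMXBean gc :
                    ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + ": "
                        + gc.getCollectionCount() + " collections, "
                        + gc.getCollectionTime() + " ms total");
            }
        }
    }

It won't show the virtualization side (paging delays), but it makes the
otherwise invisible GC cost visible enough to compare settings.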

With an application server that does not hold data (data resides in
the database elsewhere) we could expect memory requirements to consist
of a base level plus some amount for each active connection. I've
talked to application developers who had a very good understanding of
the requirements per transaction. Depending on the typical volume,
duration, and frequency of such active connections, you get a typical
heap size. If you set the minimum a bit above that, malloc() / free()
calls should happen only around periods with significantly more active
connections. I recall you can also change the chunk size to create
some hysteresis.
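
To make that concrete with made-up numbers: a 256 MB base footprint plus
200 active connections at 1 MB each gives a typical heap around 456 MB,
so something like

    -Xms512m -Xmx768m

keeps the minimum a bit above the typical size and leaves headroom for
peaks. (The numbers are illustrative, not a recommendation.)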

I don't know whether this model still holds now that developers are
starting to implement caches with in-memory databases on top of
GC-managed objects...
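
What such a cache tends to look like (a hypothetical sketch using
SoftReference, which lets the collector evict entries under memory
pressure): once the heap holds application data like this, the
base-plus-connections model above no longer bounds it.

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map =
                new ConcurrentHashMap<K, SoftReference<V>>();

        public void put(K key, V value) {
            // The collector may clear this entry when memory gets tight
            map.put(key, new SoftReference<V>(value));
        }

        public V get(K key) {
            SoftReference<V> ref = map.get(key);
            return ref == null ? null : ref.get(); // null if evicted by GC
        }
    }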

> Using our actual applications with our stress test suites (which tend to
> have pretty good predictive power) we tend to see worse application
> behaviour (small stalls, higher average response times) and higher CPU
> use when giving a big spread for min/max; on an older Sun JVM/Solaris
> combo we also saw failure in the application to ramp up quickly enough
> to deal with load spikes, with big stalls as the JVM tried to alloc memory.

I don't know about yours, but stress tests often involve periods of
high utilization. While you need that stake in the ground, it's not
the best measure of scalability: if you need the resource 90% of
the time, you worry less about how to share it during the remaining
10%. But if you have an average utilization of 5% and a duty cycle of
an hour, then it makes sense to try to share resources even if it
costs you something extra (during the 5% that your application is
active).
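
To put numbers on it: at 90% utilization there are only 6 minutes per
hour to give away, but at 5% utilization with an hour-long duty cycle
the resource sits idle for 57 of every 60 minutes, so sharing pays for
itself even if it adds some overhead during the active 3 minutes.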

> (That last behaviour was common to connection pooling - if we didn't
> open enough connections to Oracle at startup time, the app could have
> bad performance or even fail as load ramped up due to an inability to
> open connections quickly enough.)

Though it might sound nice to have a large connection pool, if that
means each connection's data is referenced very rarely (I would not be
surprised to see round-robin rather than a push-down stack), then you
may get paged out at the z/VM level and held back even longer.
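
A sketch of the two checkout orders (the pool itself is hypothetical):
a push-down stack keeps reusing the same few connections, so their pages
stay resident; round-robin touches every pooled connection in turn, so
in a large, mostly idle pool each connection's pages go cold between
uses and have to be paged back in.

    import java.sql.Connection;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class PoolOrder {
        private final Deque<Connection> idle = new ArrayDeque<Connection>();

        // Push-down stack: most recently used connection first (pages stay hot)
        Connection checkoutStack() { return idle.pollLast(); }

        // Round-robin: oldest connection first (touches cold pages each time)
        Connection checkoutRoundRobin() { return idle.pollFirst(); }

        void release(Connection c) { idle.addLast(c); }
    }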

Rob
--
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/

