On Fri, Feb 13, 2009 at 3:22 PM, Johansen.Klaus KYJ <[email protected]> wrote:

> If you want to drive cmm_pages according to the current memory/storage
> constraints then yes, VM is the only place with relevant metrics. VM,
> on the other hand, does not know much about what happens inside the
> guest (cmm2 not considered). Why shouldn't the guest voluntarily give
> up most free memory and file cache above e.g. 300 MB (of course
> depending on the workload)?

The idea is to give Linux more resources when more is available, so the
server can take advantage of that. Without such a mechanism, you can't
do much better than making each Linux guest fairly small (so effectively
doing what you suggest: leaving only limited space to cache data). This
works best when you run one application per server.

For some applications, driving cmm_pages on a schedule (e.g. squeeze
during the day shift and let the air out for nightly backups) may work
well as a manual compromise.
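A scheduled squeeze like that could be sketched with cron and the cmm driver's /proc interface. This is a minimal sketch: it assumes the cmm module is loaded so /proc/sys/vm/cmm_pages exists, and the page counts and times are illustrative placeholders, not recommendations.

```shell
# Hypothetical crontab entries for a scheduled CMM squeeze.
# Writing N to /proc/sys/vm/cmm_pages asks the guest to give up N 4K pages.

# 08:00: squeeze the guest -- give up 128 MB (32768 pages) for the day shift
0 8 * * *   echo 32768 > /proc/sys/vm/cmm_pages

# 22:00: let the air out again before the nightly backup window
0 22 * * *  echo 0 > /proc/sys/vm/cmm_pages
```

The current number of pages held can be checked at any time with `cat /proc/sys/vm/cmm_pages`.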

The problem imho is that we don't have metrics in Linux to decide which
data in the page cache is "luxury" storage. The fact that a page is also
on disk does not imply that you don't need it in memory as well (e.g.
program code, data swapped in again, shared libraries, mmapped files,
shared process memory). So some workloads will not show high swap rates
even when you squeeze them too hard.
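One rough signal for squeezing too hard is the swap rate, which can be derived from the pswpin/pswpout counters in /proc/vmstat. A minimal sketch, with the two samples inlined for illustration; on a live system you would read /proc/vmstat twice with a sleep in between and divide the deltas by the interval:

```shell
#!/bin/sh
# Compute swap-in/out page deltas between two /proc/vmstat samples.
# The sample text is inlined here so the sketch is self-contained;
# the counter values are made up for illustration.
sample1="pswpin 1000
pswpout 2400"
sample2="pswpin 1010
pswpout 2500"

# delta KEY: difference of counter KEY between the two samples
delta() {
    v1=$(printf '%s\n' "$sample1" | awk -v k="$1" '$1==k {print $2}')
    v2=$(printf '%s\n' "$sample2" | awk -v k="$1" '$1==k {print $2}')
    echo $((v2 - v1))
}

echo "pages swapped in:  $(delta pswpin)"
echo "pages swapped out: $(delta pswpout)"
```

But as noted above, a low swap rate alone does not prove the guest is comfortable.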

When you combine the Linux metrics and VM metrics in a single place, it
is often easier to see what is happening and where you have excess
memory, especially when you retain performance history and can see the
growth over time.

Rob
--
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
