On Wed, Aug 6, 2008 at 8:06 AM, Alan Ackerman
<[EMAIL PROTECTED]> wrote:

> Deleting memory is a lot harder.

Many delicate CP areas are now also in virtual memory as a result of
the 2G relief, even though not all of them can actually be paged out.
It depends a lot on the granularity of the next level of mapping. When
90% of your in-use pages are virtual machine pages, you can probably
find 16 MB worth of individual pages to give up. But evacuating pages
at random in the hope of freeing an entire contiguous 16 MB chunk is
less attractive (I think some time ago pages under the bar would be
freed like that).
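To put a rough number on why random evacuation is unattractive, here is
a back-of-the-envelope sketch. It is only my illustration, not CP's
frame management: it assumes 4 KB frames (so a 16 MB chunk is 4096
frames), an 8 GB machine, and a made-up 10% chance that any given frame
is cheap to give up, treated as independent per frame.

# Back-of-the-envelope only: made-up numbers, independence assumed.
def chunk_fully_free_probability(p, frames_per_chunk=4096):
    """Chance that every frame in one contiguous 16 MB chunk is cheap to free."""
    return p ** frames_per_chunk

def expected_loose_frames(p, total_frames):
    """Expected number of individually freeable frames machine-wide."""
    return p * total_frames

p = 0.10                               # assumed fraction of easy-to-free frames
total = (8 * 1024**3) // 4096          # frames in an assumed 8 GB machine

print(chunk_fully_free_probability(p)) # underflows to 0.0: a random 16 MB
                                       # chunk is essentially never all-free
print(expected_loose_frames(p, total)) # ~209715 loose frames, far more than
                                       # the 4096 needed for "16 MB worth"

So finding 16 MB worth of scattered pages is easy, while waiting for a
whole chunk to empty itself by chance practically never happens; you
would have to evacuate the chunk deliberately.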

> Do people really have Linux systems that run 7 x 24?

There are a lot of installations that expect their Linux guests on z/VM
to be "always there". The applications have been spoiled by hardware
that was "almost always there", tend to take that as their SLA, and so
skip redundancy and fail-over at the application level.

You're very right that traditional installations approach this from
another direction, understanding that (planned) outages for each
component add up, or rather multiply (see the quick sketch below).
They have applications designed to move and shift to achieve higher
availability than any single operating system can offer. For example,
running the back-end with DB2 on z/OS and the front-end with Linux on
z/VM (where the front-end does not hold application data and is
volatile).
Once your applications can do that, the investment to take a single
system from "almost 7x24" to "full 7x24" is no longer justified.
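As a quick illustration of how those planned outages multiply (the
availability figures below are made up for the example, not taken from
any real installation):

def serial_availability(*components):
    """Availability of a chain where every component must be up."""
    result = 1.0
    for a in components:
        result *= a
    return result

# Say the Linux guest, z/VM and the z/OS back-end are each "almost
# always there" on their own:
print(serial_availability(0.995, 0.998, 0.999))  # ~0.992, noticeably
                                                 # worse than any one piece

An application that can shift its front-end elsewhere takes the guest's
planned outages out of that chain, which is where the real availability
gain comes from.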

Rob
