>>> On 2/11/2009 at  8:47 AM, Erling Ringen Elvsrud <[email protected]> wrote: 
-snip-
> First, thanks for your informative reply. One argument against linux
> on z/VM I have heard a lot is that memory on a mainframe is expensive
> compared to memory for vmWare ESX-hosts.
> Do you know any rough estimates for how much physical memory that is
> needed for each linux host on z/VM that is running WAS? I know z/VM
> implements various techniques to reduce overall physical memory
> requirements among a set of hosts, but how effective is it?

Yes, it's something that has been a point of contention for quite some time.  
Even with the price reduction that came with the z10, it's still more expensive 
than a lot of people like.

The effectiveness is going to vary quite a bit with the particular workload and 
application.  Tuning application code for any particular performance problem 
will usually provide 80% of any relief from that problem.  But, from what I've 
been told by people doing the work, an over-commit ratio of 1.5 is fairly 
typical, and some people get as much as 2.0.  Just to be clear, the 1.5 means 
that if you have 100GB on the box, you can have 150GB of virtual storage in use 
by the guests.
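To make that arithmetic concrete, here is a trivial sketch in Python.  The 
function name and the 100GB/1.5 figures are just the example from the paragraph 
above, not fixed z/VM limits:

```python
def virtual_storage_capacity(real_gb, overcommit_ratio):
    """Total guest virtual storage (GB) supportable at a given
    over-commit ratio, per the rule of thumb in the text."""
    return real_gb * overcommit_ratio

# 100GB of real storage at the "fairly typical" 1.5 ratio
print(virtual_storage_capacity(100, 1.5))  # 150.0 GB of guest virtual storage

# ...and at the 2.0 some sites reportedly reach
print(virtual_storage_capacity(100, 2.0))  # 200.0 GB
```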

Mitigating all of this is that z/VM can page much more heavily than 
distributed systems without a noticeable performance impact, because the I/O 
subsystem is 
so much better.  The other factor is that Linux for System z and z/VM can 
cooperate in memory management to reduce the amount of "double paging" that you 
run into on any virtualization platform.  There are other facilities in z/VM 
that Linux has been "taught" to use, such as Discontiguous Saved Segments 
(DCSS), which can reduce disk I/O as well as real memory consumption.

All in all, the big key is to not give in to developers' perceptions that they 
need as much memory for their virtualized guests as they (think) they do on 
discrete hardware.  We've seen any number of cases where Oracle databases run 
quite nicely on 4-8GB of virtual memory when the developers and DBAs thought 
they needed 32-64GB because that's what they had on Intel/AMD.  Each case is 
different, of course, and may require some application rework to get SGA sizes 
down to a more reasonable level without impacting performance of the 
application.  But, with a little time and effort, it can _usually_ be done.

All that said, even with the price of real storage for the z10 being as high as 
it is, it almost always turns out cheaper to run the kind of workload you say 
you have on Linux for System z than it is on VMware.  If you talk to your IBM 
rep, or one of IBM's Business Partners, they have sizing tools that can help 
you figure out just how much cheaper that might be before you make any 
commitment to buy a thing.  They've been refining those tools for a number of 
years now, and given good numbers to put into them, they can get fairly close.


Mark Post

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390