2009/3/23 van Sleeuwen, Berry <[email protected]>:

> Perhaps I didn't put the question as I should have. I do not want to
> share minidisks, so RO or RW is not the issue here. These are all
> private RW minidisks. It is the sharing of DASD volumes with multiple
> minidisks that is of concern here. For instance, we have 10 guests,
> each with a 300-cylinder minidisk, all together on one DASD volume. So
> what would happen when these 10 guests are spread across our 4 VMs? Or
> a different guest that has just one minidisk on a volume with another
> guest that lives elsewhere. Would that hurt our guests in any way? The
> idea behind this is that when a guest is built, the minidisk can be on
> the same VM, but the guest could be started in a different VM for
> several reasons. But would that in turn hurt the guest that doesn't
> move over at the same time?

Right, so if you had everything in place to prevent data corruption,
your benefit would be that each z/VM system could have portions of
that 300-cyl disk in MDC for the virtual servers that run on that z/VM
system today. The other z/VM systems would not touch those minidisks
and thus not waste MDC on them. The question is whether the MDC
benefits justify the required infrastructure.
In Linux terms these are tiny disks. It depends on what is on those
disks. If it is, for example, the boot partition, you probably don't
care at all whether the data is cached. I'm not a fan of scattering
Linux data over the real volumes.
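For illustration, a hedged sketch of what that shared volume might
look like in the CP user directory (the volume label LXU001, device
numbers, and cylinder ranges are all made up for the example):

```
* In LINUX01's directory entry: a private 300-cyl minidisk
* carved out of the shared volume LXU001.
MDISK 0191 3390 0001 0300 LXU001 MR
* In LINUX02's directory entry: the next guest starts where
* the previous extent ended.
MDISK 0191 3390 0301 0300 LXU001 MR
```

On a z/VM system that should not cache a shared volume at all,
something along the lines of SET MDCACHE RDEV for that device can
switch MDC off for the whole volume; check the CP command syntax on
your level of z/VM.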

> Granted, I know you can't tell Linux not to cache data. But in
> arguments with the Linux gurus, they want as much memory as they can
> get to provide (massive amounts of) memory for cache. In that case I
> tell them they do not need to cache as much because we do MDC in VM.

MDC is probably the wrong ammunition in your fight...  With Linux
servers that are only used part of the time, we want to share real
memory resources rather than dedicate them. When Linux uses its memory
for page cache, the LRU scheme interferes with z/VM's ability to share
memory resources in a transparent way.
Unlike some other platforms, we can do I/O in parallel with CPU
processing so we are less afraid of doing I/O. Some folks get confused
because they look at single-server maximum peak performance, but that
is rarely a factor in your business justification. When you look at
total achieved workload throughput within SLA, then you will find that
avoiding I/O is not the best way to use your resources.

Even when the virtual machine is completely resident and does not
suffer page faults on z/VM, there are still workloads that actually
perform *worse* when you give them more memory than they need.
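As a quick sanity check on the Linux side, the guest's page-cache
footprint is visible in /proc/meminfo; a minimal sketch using standard
procfs fields (nothing z/VM-specific assumed):

```shell
# Print how much of the guest's memory Linux is currently
# holding as free memory, buffers, and page cache (values in kB).
awk '/^(MemTotal|MemFree|Buffers|Cached):/ {printf "%-9s %10d kB\n", $1, $2}' /proc/meminfo
```

A guest that shows most of its memory under Cached is a candidate for
a smaller virtual machine size rather than for more memory.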

> Your comments on MDC are clear. I haven't yet tried to measure the
> MDC benefits. I have seen cache hit ratios below 10% but also over
> 80%. The first would indicate that the minidisk isn't a good
> candidate for MDC; the latter would be a perfect candidate. I don't
> yet know if we should disable MDC. Currently we set MINIOPT NOMDC for
> all minidisk swap disks, but perhaps there are other minidisks that
> could also be set to NOMDC. But in any case, if we have to set DASD
> to SHARED then we will not use MDC for sure, even for disks that
> might do well for MDC.

It's not just cache hit ratio. It's also I/O rates. An 80% hit ratio
for 1 I/O per minute does not impress the chicks. ;-)  You probably
find better candidates in your CMS workload where I/O density is much
higher.
My swap disks are VDISK, so MDC does not apply. Yours should probably be too.
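For reference, a hedged sketch of the two swap-disk approaches in the
directory (device numbers, the block count, and the volume label
LXSWP1 are made up):

```
* Swap on VDISK: backed by z/VM memory, so MDC never applies.
MDISK 0201 FB-512 V-DISK 512000 MR
* Swap on a real minidisk, explicitly excluded from MDC:
MDISK 0202 3390 0401 0100 LXSWP1 MR
MINIOPT NOMDC
```

Note that MINIOPT applies to the MDISK statement immediately
preceding it.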

Rob
--
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390