You seemed to make my point...

Once you give access to memory (guest size plus vdisk), the System
Programmer doesn't have any control over the use or abuse of that
resource.  If one image abuses memory and drives the CP paging system
into actively paging things out, it isn't the offending user that gets
paged out.  It is all the innocent bystanders that get paged out.  Then
their performance goes down while their pages get paged back in.

The only control the System Programmer has is:  you have it, or you
don't.  With CPU and I/O, I can throttle usage to keep a wayward
machine from impacting everyone else.
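As a sketch of the CPU and I/O throttling I mean (the user ID, device
number, and values here are invented, and exact operands vary by z/VM
level):

```
* Cap LINUX01's CPU consumption at its relative share,
* even when spare CPU is available
CP SET SHARE LINUX01 RELATIVE 100 LIMITHARD

* Limit the I/O rate to real device 1234
CP SET THROTTLE 1234 100
```

There is no comparable CP knob that caps a running guest's demand on
central storage after you have defined it.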

And, right.  It is the responsibility of the VM Systems Programmer to
hand out resources and make sure the system isn't overloaded.

Going back to one of the original tangents to this thread and what I
have been talking about:

"A little caveat here. We originally used vdisk, but I was asked to
stop using it by our Operating System person. The reason being at the
time we were
memory constrained."

And one of the responses:

"You are joking, right?"

And I started to explain why there is a reason for not having vdisk
(or minimizing its use).

OOM is bad.  You want to avoid it.
The only way to avoid OOM (besides having good code, which I can't
control) is to have sufficient swap space.
How much swap space is sufficient?  You estimate, then add more.
Vdisk is great, as long as its usage doesn't swamp your paging
subsystem.
If you swamp your paging subsystem, performance suffers on all the
other machines.
How do you stop your paging subsystem from being swamped?  Take away
all, or most, of the vdisks and let swap go to real disk.  (Or get
management to buy more real memory, or beef up the paging subsystem.)
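One compromise, sketched below with invented device numbers (assume
both devices have already been formatted with mkswap in the guest): a
small, high-priority vdisk swap absorbs bursts, while a larger,
lower-priority swap on real DASD carries any sustained swapping, so
heavy swappers lean on real disk instead of CP's paging subsystem.

```
# 0201 is a small vdisk, 0202 a minidisk on real DASD (both assumed)
swapon -p 10 /dev/disk/by-path/ccw-0.0.0201-part1
swapon -p 5  /dev/disk/by-path/ccw-0.0.0202-part1
```

Linux drains the higher-priority device first, so the vdisk only has
to be big enough for transient spikes.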

I'm glad I'm not in that realm anymore, at least for a while.

Having enough resources is only a temporary thing.

Tom Duerbusch
THD Consulting


>>> [EMAIL PROTECTED] 2/24/2007 11:50 AM >>>
>>> On Sat, Feb 24, 2007 at 12:25 PM, in message
<[EMAIL PROTECTED]>, Tom Duerbusch
<[EMAIL PROTECTED]> wrote:
> VDISK is different.  Actually, it isn't that vdisk is different, it
> is memory is different.

Actually, it's not.  It's a shared resource, and the systems programmer
has complete control over it.

> CP lets me throttle every other shared resource.
> I can SET SHARE to throttle or cap CPU.
> I can throttle or cap I/O rates for a particular user.

> But once I give a user access to virtual storage (max guest machine
> size plus vdisk(s)),  I can't throttle his usage.

That's like saying "once I turn quickdsp on for a guest, I can't
control it."  You control it by controlling access to it in the first
place, not by handing it out and praying the guest doesn't actually use
it.

For Linux guests in particular, you had better assume that a guest
will use all the virtual storage you define for it in the CP
directory, because it will.  The CP directory is where you control
usage of the resource, not after the fact when things go haywire.  You
do the same for VDISK: you control it at the point of allocation, not
at the point of use.  If the systems programmer is handing out more
resources than the system can handle properly, that's not the fault of
the guests that are running.  The onus rests on the systems
programmer, and on management if they haven't provided the tools to do
the job right.
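As an illustration of controlling VDISK at the point of allocation
(the user ID, device number, and sizes here are all invented, and
operand details vary by z/VM level): the directory entry is where the
vdisk gets defined, and CP's system-wide VDISK limits cap what can be
created at all.

```
* In the CP directory: define a vdisk for the guest
USER LINUX01 ...
MDISK 0201 FB-512 V-DISK 512000 MR

* From a privileged user: cap vdisk creation per user and system-wide
CP SET VDISK USERLIM 500000 BLK
CP SET VDISK SYSLIM 2000000 BLK
```

With limits like these in place, a guest simply cannot allocate enough
vdisk to swamp the paging subsystem, no matter what it does at run
time.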

The bottom line is, the smooth running of the system is the
responsibility of the systems programmer.  If they're not doing the job
right, bad things will happen, regardless of the resource being
discussed.


Mark Post

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

