You can use SET SHARE to stop the CPU runaways.
We have one bad app server we decided to cap permanently.
The application code problems have actually been on a Windows server, but
they impact the Oracle DB on the mainframe, so we just capped the test DB
server so it doesn't impact the other servers.
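For anyone following along, capping a guest that way is a one-line CP command, and the same SHARE statement in the guest's directory entry makes the cap permanent across logoffs. (The user ID and percentage below are made up for illustration; pick values from your own measurements.)

```
CP SET SHARE LINTEST1 ABSOLUTE 5% LIMITHARD

* In the guest's USER DIRECT entry, to make the cap permanent:
SHARE ABSOLUTE 5% LIMITHARD
```

LIMITHARD is what actually stops the runaway: without it, the share value only governs contention, and an idle system will still let the guest soak up every spare cycle.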

-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of RPN01
Sent: Wednesday, March 19, 2008 3:56 PM
To: [email protected]
Subject: Re: Number of IFLs to zLinux guests

We're running about 40 Linux guests on a total of four IFLs, spread
across two LPARs and CECs. Roughly 20 guests in each LPAR.

We're about to add an additional IFL to each LPAR, not for capacity,
since we normally run at about 10 - 20 percent, but to handle the
case where a single rogue guest consumes everything it can get for
a while. We'll continue to build one- and two-virtual-CPU guests, but have
the third engine to keep a single guest with dual virtual CPUs
from eating the entire system.

-- 
Robert P. Nix          Mayo Foundation        .~.
RO-OE-5-55             200 First Street SW    /V\
507-284-0844           Rochester, MN 55905   /( )\
-----                                        ^^-^^
"In theory, theory and practice are the same, but in practice, theory
and practice are different."




On 3/19/08 2:36 PM, "Marcy Cortes" <[EMAIL PROTECTED]>
wrote:

> z9 ECs here and a z10, but I'm not counting him yet...
> 
> In prod, we have 45 or so guests on 18 IFLs. In test, we have had 100
> or so guests on 2 IFLs.
>  
> So, the ratio is either 50:1 or 5:2 :)
>  
> You can see why everyone says "it depends".
>  
> If you can get the testers to play well together (wiki page
> coordination is what we do), I think one could have a ratio to work
> with in test.
> Prod is going to require measurement and planning :) Memory is usually
> a bigger issue than CPU.  At least with CPU you can prioritize things
> so not all feel it.  If you are heavily paging, everyone feels it.
> (i.e. 100 on 24G = pain, 100 on 48G = happiness)
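Spelling out the memory arithmetic above (a quick sketch; the guest counts and storage sizes are the round numbers from the thread, and real guests of course aren't sized identically):

```python
# Rough average real storage per guest for the two test configurations
# mentioned above (100 guests on 24G vs. 48G), ignoring CP overhead.
def gib_per_guest(total_gib: float, guests: int) -> float:
    """Average real storage available per guest."""
    return total_gib / guests

print(gib_per_guest(24, 100))  # 0.24 G per guest -> heavy paging, "pain"
print(gib_per_guest(48, 100))  # 0.48 G per guest -> "happiness"
```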
>  
> (PS.  We started with 2 :)
> 
> 
> Marcy Cortes
> 
> "This message may contain confidential and/or privileged information.
> If you are not the addressee or authorized to receive this for the
> addressee, you must not use, copy, disclose, or take any action based
> on this message or any information herein. If you have received this
> message in error, please advise the sender immediately by reply e-mail
> and delete this message. Thank you for your cooperation."
> 
>  
> 
> ________________________________
> 
> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] 
> On Behalf Of Harris, Nick J.
> Sent: Wednesday, March 19, 2008 12:20 PM
> To: [email protected]
> Subject: [IBMVM] Number of IFLs to zLinux guests
> 
> 
> 
> Hello All,
> 
> Is there a golden rule about the number of IFLs to support production
> Linux guests?  I understand all shops are different and 'it depends'
> applies... but we are curious as to the ratio of IFLs to Linux guests
> in other shops.
> 
>  
> 
> Are there any shops out there running zLinux guests in production 
> under z/VM with two or fewer IFLs besides us?
> 
>  
> 
> We have four production Linux guests and twenty-one test Linux guests 
> supported by two IFLs on a z9BC.
> 
>  
> 
> TIA
> 
>  
> 
>  
> 
> Thanks,
> 
> Nick Harris
> 
> Lead Systems Programmer
> 
> Texas Farm Bureau Mutual Insurance Company
> 
> 7420 Fish Pond Rd.
> 
> Waco, TX 76710
> 
> 254.751.2259
> 
> [EMAIL PROTECTED]
> 
>  


*************************************************************************
This communication, including attachments, is
for the exclusive use of addressee and may contain proprietary,
confidential and/or privileged information.  If you are not the intended
recipient, any use, copying, disclosure, dissemination or distribution is
strictly prohibited.  If you are not the intended recipient, please notify
the sender immediately by return e-mail, delete this communication and
destroy all copies.
*************************************************************************
