Performance measurement tools also show their worth as a defensive tool.  Here's 
just one of many examples I can cite from hard-won experience.  A highly 
integrated application environment (PCs, AIX, Linux on the mainframe). 
Application reports "Transaction timed out". z/VM and Linux on the mainframe get 
blamed - why? Because of the lack of understanding, new kid on the block, they 
don't own it, etc.

OK, fair enough. If I'm the problem, I'll say so and then fix it. Since I am 
gloriously reporting on z/VM data - in this case with Velocity's products - I 
can see what the Linux virtual machines were doing. And it wasn't much - under 
1% busy during the problem time.
No queuing, no waiting on resources, no nothing. OK, fine, whatever; I *STILL* 
do some scheduler slamming, knowing it won't do much. The transaction still 
times out. Now I say, "OK, I *REALLY* don't think I'm contributing to this 
problem."

The app guys come back a while later to report the problem was a missing fix or 
three in *ANOTHER ENVIRONMENT*. Once they repair that, all is good.  Still 
can't determine whether they were sheepish.

Point being that a strong defense can help win the game. Absent performance 
data, who knows how long the dance would have dragged on?

Collect and report on that data.
David



-----Original Message-----
From: Linux on 390 Port on behalf of Alan Altmark
Sent: Sun 2/25/2007 10:33 AM
To: [email protected]
Subject: Re: Perceptions of  zLinux
 
On Sunday, 02/25/2007 at 08:33 ZE9, John Summerfield
<[EMAIL PROTECTED]> wrote:
> btw The rate of paging is fairly unimportant: in the 80s, we had an
> Amdahl 4360 paging at 600 pages/sec. We looked at the numbers, we looked
> at each other, we shrugged our shoulders: "TSO seems okay."
>
> Responsiveness and throughput are important. In our case, paging was
> "fairly brisk" but overall balance seemed good (CPU was suitably busy
too).

We've seen paging rates in the tens of thousands of pages per second, so it
doesn't bother CP to page as long as he has the resources to do it: enough
dedicated paging-only volumes, fast drives, fat pipes, multiple paths,
balanced I/O.
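Those rates are just deltas over an interval - the same arithmetic any monitor 
performs when it turns cumulative paging counters into pages/sec. A minimal 
sketch of that calculation (a generic illustration, not CP monitor data or any 
vendor's product; the counter values are invented):

```python
# Toy sketch: convert two samples of a cumulative paging counter into a
# pages-per-second rate. The sample values below are invented for illustration.

def paging_rate(count_before, count_after, interval_seconds):
    """Pages per second over the sample interval."""
    if interval_seconds <= 0:
        raise ValueError("interval must be positive")
    return (count_after - count_before) / interval_seconds

# Example: 60,000 pages moved during a 60-second interval.
rate = paging_rate(1_200_000, 1_260_000, 60)
print(rate)  # 1000.0 pages/sec
```

The rate alone tells you little, as the thread points out; what matters is 
whether anything is *waiting* on that paging I/O.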

The performance measurement tools will tell you whether and how much
you're waiting on the paging subsystem.  So if it ain't broke, don't fix
it.  (Except to ensure multiple paths for availability.)  Point:  These
tools can tell you what NOT to change, too, saving you time (and money).

Except when you are just getting started, the "Crystal Ball Method of
Performance Management" is not really dependable.  And even then, it's
only for those who have a real understanding of How Things Work.  MY
production system will have performance management software (whether from
IBM or Velocity).  MY PoC system will have the same thing, but with some
PoC-level of support from the vendor.  There's nothing like having your
PoC go down the drain because there is a Performance Problem, and no one
can figure it out.  Or you get a CritSit going to get a SME in to solve
the problem, but you still take it on the chin.  "My <vendor> servers
don't have that problem!  Nyah!"

Alan Altmark
z/VM Development
IBM Endicott

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


