The best way to understand is to take measurements of the running
production systems.  There are many tools for doing this, and you may
already be gathering at least some of the data that you would need.  The
way to look at utilization is to plot it on intervals for a peak period,
day, or week, depending on the data you get.  You should be looking for
intervals around 15 minutes in length.  (Over 30 minutes smooths things
too much, and under 5 minutes tends to visually and mathematically hide
the troughs.  This matters because we use the peak to do our sizings, and
as intervals get shorter the peak approaches 100% regardless of the
average utilization.  Utilization is a statistic: at the cycle level the
machine is either busy or it's not, so for short enough intervals the
utilization data will be either zero or 100%.)
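
If it helps, here is a rough sketch (in Python) of the kind of roll-up I
mean.  The input file name and the two-column layout are made up for
illustration; adjust the parsing to whatever your collector (sar, vmstat
logs, etc.) actually produces.

# Sketch: roll fine-grained CPU-busy samples up into 15-minute buckets
# and report the peak bucket.  Assumes a file of
# "epoch_seconds cpu_busy_percent" lines (hypothetical format).
INTERVAL = 15 * 60  # 15-minute buckets

buckets = {}
with open("cpu_samples.txt") as f:
    for line in f:
        ts, busy = line.split()
        bucket = int(float(ts)) // INTERVAL
        buckets.setdefault(bucket, []).append(float(busy))

# Average the samples within each bucket, then take the peak bucket.
averages = {b: sum(v) / len(v) for b, v in buckets.items()}
peak_bucket, peak_util = max(averages.items(), key=lambda kv: kv[1])
print("peak 15-minute utilization: %.1f%%" % peak_util)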

Anyway, you then stack these graphs in a stacked bar or area chart and
note the composite peak.  This will tell you what your aggregate
utilization is.  If all the servers peak at the same time, then the peak
utilization on each one has to be low to get favorable consolidation
effects.
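
A minimal sketch of that composite-peak step, assuming you already have
each server's 15-minute averages keyed by interval number (the server
names and values below are placeholders):

# Sketch: combine per-interval averages from several servers and find
# the composite peak.
servers = {
    "web1": {0: 35.0, 1: 60.0, 2: 20.0},
    "web2": {0: 30.0, 1: 55.0, 2: 25.0},
    "db1":  {0: 10.0, 1: 15.0, 2: 70.0},
}

composite = {}
for utils in servers.values():
    for bucket, util in utils.items():
        composite[bucket] = composite.get(bucket, 0.0) + util

peak_bucket = max(composite, key=composite.get)
print("composite peak in interval %d: %.1f%% (summed utilization)"
      % (peak_bucket, composite[peak_bucket]))
# If every server peaks in the same interval as the composite, each
# server's own peak has to be low for consolidation to pay off.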

The second thing to do is to look at the saturation curve for the servers
in question.  Gather throughput data on the same intervals as your
utilization data.  (This can be network data or packet rates if you don't
have anything else.)  Plot the throughput vs. utilization for each server.
You are looking to see if the curve bends over or "saturates" at higher
utilization.  I like to plot linear, "power" and logarithmic trends through
the data.  Usually the power curve has the best fit, but the linear and
logarithmic curves provide bounds with which to compare it.  The more
linear the data is, the more CPU intense the application is, and therefore
the lower the utilization has to be to get a good conversion ratio.  If the
curve is bent over, the average utilization is low, and the workload peaks
at an off time, you have an ideal candidate.
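
Here is a small sketch of that curve fitting, assuming numpy is
available; the utilization and throughput samples below are placeholders
for your own interval data.

# Sketch: fit linear, power, and logarithmic trends to throughput vs.
# utilization and compare the fits.
import numpy as np

util = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 90.0])
tput = np.array([120., 230., 420., 700., 880., 980., 1010.])

# Linear: tput = a*util + b
a_lin, b_lin = np.polyfit(util, tput, 1)

# Power: tput = c * util**p  (fit as a straight line in log-log space)
p, logc = np.polyfit(np.log(util), np.log(tput), 1)

# Logarithmic: tput = a*ln(util) + b
a_log, b_log = np.polyfit(np.log(util), tput, 1)

def r_squared(pred):
    ss_res = np.sum((tput - pred) ** 2)
    ss_tot = np.sum((tput - tput.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("linear      R^2 = %.3f" % r_squared(a_lin * util + b_lin))
print("power       R^2 = %.3f" % r_squared(np.exp(logc) * util ** p))
print("logarithmic R^2 = %.3f" % r_squared(a_log * np.log(util) + b_log))
# A near-linear best fit suggests a CPU-intense workload; a curve that
# bends over (better power/log fit) suggests more headroom for stacking.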

You can also start gathering I/O and context switch rates.  High rates
here usually indicate non-CPU-intense or "mixed" applications.
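
If nothing is collecting those yet, a quick sketch of sampling the
context switch rate on a Linux guest (this reads the system-wide counter
in /proc/stat; on the Solaris side you would pull the equivalent numbers
from sar or vmstat instead):

# Sketch: sample the context switch counter twice and report the rate.
import time

def read_ctxt():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

before = read_ctxt()
time.sleep(5)
after = read_ctxt()
print("context switches/sec: %.0f" % ((after - before) / 5.0))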

IBM has people who can help you get the data and analyze it.




Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Eric Sammons <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
10/31/2003 07:55 AM
Please respond to Linux on 390 Port

        To:     [EMAIL PROTECTED]
        cc:
        Subject:        Re: Perpetuating Myths about the zSeries





What about memory intensive?  And how do you gauge the CPU intensive
applications?  For example, we are planning to migrate some of our Solaris
(SPARC) applications off of SPARC and into the z/VM Linux world.  If I am
looking at candidates for this migration I see systems (SPARC) with 10 -
30 percent utilization.  What happens when I decide these workloads are
good candidates with their low cpu usage on the SPARC platform, but then
install them into the Z environment and find out that they now have a cpu
usage of 80 - 90 percent?  Is this possible?  Is there a good way to judge
what applications on a given platform might be best suited for migration?
Right now I am recommending that any candidate first do a QA of their
application in the Z environment prior to doing the full and final
migration.

thanks!
Eric Sammons
(804)697-3925
FRIT - Infrastructure Engineering





"Post, Mark K" <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
10/30/2003 04:49 PM
Please respond to Linux on 390 Port

        To:     [EMAIL PROTECTED]
        cc:
        Subject:        Re: Perpetuating Myths about the zSeries

My answer was, and still is (and likely always will be), avoid any
application that is CPU intensive.  Yes, the zSeries has gotten faster,
but so has Intel.  The price-performance curve for CPU intensive work
still favors Intel.  I've seen nothing in the IBM announcements that would
lead me to change any of the recommendations I've been making for the last
3 years.  Unless and until the price-performance curve for zSeries matches
that of Intel (or comes a couple of orders of magnitude closer), I will
continue to make the same recommendations.


Mark Post

-----Original Message-----
From: Jim Sibley [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 29, 2003 7:31 PM
To: [EMAIL PROTECTED]
Subject: Re: Perpetuating Myths about the zSeries


-snip-
Linux on all sorts of platforms was just a gleam in
someone's eye 5 years ago.  It started getting pushed
on the zSeries 3 years ago and the software and
hardware have made great strides in the last 3 years.

So CGI may not be appropriate today.  So what is there that we said was
not appropriate 2 or 3 years ago that may be appropriate today on Linux
on zSeries?



=====
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso
