They don't really want to know MIPS; they want to know money. That is, how much 
does it cost to run transaction x?

Perhaps expressing the CPU consumption as a percentage of the box's capacity 
would answer the question. Each engine has a fixed capacity of one CPU second 
per clock second, but the MIPS will vary depending on the speed of the engine 
(among other things). If, for example, you run a four-engine, 100 MIPS box, 
then one CPU second = 25% of the box, or 25 MIPS. On a four-engine, 1000 MIPS 
box, one CPU second is still 25% of the box, but now 250 MIPS.
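
Purely as a sketch of that arithmetic (plain Python, using the example engine 
counts and box ratings from the paragraph above):

```python
# One engine delivers at most one CPU second per elapsed second,
# so one CPU second consumed = 1/engines of the box's capacity.
def cpu_second_share(engines, box_mips):
    pct_of_box = 100.0 / engines          # percent of the box
    mips_equivalent = box_mips / engines  # MIPS "used" by one CPU second
    return pct_of_box, mips_equivalent

print(cpu_second_share(4, 100))   # 4-engine, 100 MIPS box -> (25.0, 25.0)
print(cpu_second_share(4, 1000))  # 4-engine, 1000 MIPS box -> (25.0, 250.0)
```

Note the same CPU second maps to a very different MIPS number depending on the 
box, which is part of the problem with the conversion.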

Another way to express the capacity of a box is the theoretical maximum number 
of CPU seconds you can use in a day. Again assuming a four-engine box, the 
maximum number you can use is 345,600 (24 hours x 60 min x 60 sec x 4 engines). 
If this is a 100 MIPS box, then each MIPS theoretically corresponds to 3,456 
CPU seconds per day.
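
The same numbers fall out of a couple of lines of Python (again just a sketch, 
with the four-engine, 100 MIPS example):

```python
# Theoretical maximum CPU seconds available in one day, and the
# CPU seconds that correspond to one MIPS of the box's rating.
def daily_capacity(engines, box_mips):
    max_cpu_seconds = 24 * 60 * 60 * engines
    cpu_seconds_per_mips = max_cpu_seconds / box_mips
    return max_cpu_seconds, cpu_seconds_per_mips

print(daily_capacity(4, 100))  # -> (345600, 3456.0)
```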

You should be able to see the problems that arise trying to express CPU seconds 
in MIPS.    

It gets worse. We also have to consider the time value of that transaction. It 
may take more than one MIPS to run 3,456 one-second transactions with an 
acceptable response time.

Remember what we are trying to do and who is asking. I'd offer the second 
calculation and see what they say. Remember, no jargon and KISS when framing 
your reply. 

HTH and good luck.     

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of 
Ramiro Camposagrado
Sent: Thursday, July 02, 2009 10:26 AM
To: [email protected]
Subject: Converting CPU Time to MIPS

Management is trying to cut and save CPU cost. So they came up with 
this "Cost Avoidance Committee" (consisting primarily of upper management). 
Now they want us to somehow convert the CPU time consumed (primarily by a 
CICS transaction) into MIPS. I know that folks have come up with formulas 
for doing this.

We use BEST/1 (Visualizer) for our performance reporting, and it works off 
15-minute intervals. Within a 15-minute interval, we can run hundreds of CICS 
transactions. If we take one CICS transaction and convert its CPU time into 
MIPS, the result is much higher than what is reported for the 15-minute 
interval in BEST/1. So there is really no way for us to correlate this number 
with our BEST/1 data.

The performance data comes from MXG and gets fed into BEST/1.
As far as I know, MXG does not report MIPS utilization per CICS transaction.
It does report the elapsed time and CPU time for individual CICS transactions.
And it would be just too much of an effort for us to report on "MIPS used" for 
hundreds of CICS transactions.

Any suggestions or comments ??

Regards,
Ramiro

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
