On Fri, Jun 20, 2014 at 8:14 AM, Vernooij, CP (SPLXM) - KLM <
[email protected]> wrote:

> John,
>
> I usually hate replies, that don't answer the question, but instead state:
> why don't you try it this way. However, this time I would like to ask some
> 'what are you doing' questions, in spite of your last remark.
>

In this case, I can use "why don't you do it this way?" questions. I'm a
bit at sea at present (in a row boat, no less).


>
> 1. The product that produces the CMFCPU13/14/15 messages also produces
> your RMF 72 records. From those I produce all my statistics on quarterly or
> hourly intervals, be it through CA MICS, but you can do it also via SAS /
> MXG (or some RMF reporting tool I believe). You can even download the SMF
> records to your Linux or Windows system and process them with SAS or a
> similar product. Did you try this?
>

No. We have no software on z/OS that can _easily_ do anything. We had
SAS/MXG long ago. "Too expensive!". All that I have in my quiver right now
on z/OS are COBOL and HLASM. My heavy artillery is on my Linux desktop:
Perl, awk, the PostgreSQL relational database, and the R language
(conceptually similar to SAS, but not the same language).


>
> 2. What conclusions do you want to pull out of the figures? You know, that
> these 'LPAR is capped' figures have only a slight relation with the
> performance of those LPARs. If you have a road sign stating there is a
> speed limit of 100 m/h, that road is 'capped', but the capping won't hardly
> create performance problems.
>


This is a bit more difficult. The original spark came from one of our
Production Control people. She basically wanted to know whether it would
be "helpful" in any way to reduce our active initiator count during certain
time frames. So I was going to use LPAR capping as an indicator that the
CEC is "overloaded". If the CEC is overloaded, then it wouldn't hurt to
reduce the number of active initiators. And it might actually help, because
individual jobs would probably finish a bit sooner, there being less
multi-job overhead. Again, we are sometimes grasping at straws for CPU.
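As a rough sketch of that idea (not my actual code; it assumes the
RMF/CMF data has already been reduced to one "was the LPAR capped?"
flag per interval, and the function name and noise threshold are my own
invention), something like this on the Linux side could pick out the
"overloaded" windows worth trimming initiators in:

```python
def capped_windows(flags, min_len=3):
    """Group consecutive capped intervals into (start, end) index windows.

    flags: one boolean per RMF interval, True when the LPAR was capped
    during that interval (hypothetical input shape). Windows shorter
    than min_len intervals are ignored as noise.
    """
    windows, start = [], None
    for i, capped in enumerate(flags):
        if capped and start is None:
            start = i                      # a capped window opens here
        elif not capped and start is not None:
            if i - start >= min_len:       # long enough to matter
                windows.append((start, i - 1))
            start = None
    if start is not None and len(flags) - start >= min_len:
        windows.append((start, len(flags) - 1))  # window ran to the end
    return windows
```

With 15-minute intervals, min_len=3 would mean "capped for at least 45
minutes straight" before anyone touches the initiators.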

Another "biggie" is the 4 hour rolling average MSU. Why? Because if we are
running below our cap, then we are "saving up" MSUs. That means we can
spend some of those "saved" MSUs to exceed our Group Capacity for a short
time, to cover the moments when we get a "CPU spike". Having "MSUs in the
bank" gives our managers & Production Control people "warm fuzzies" and
good feelings. Like I feel when I have a full tank of gas in the car (or my
tummy <grin/>).
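For what it's worth, the rolling average itself is simple to sketch:
average the per-interval MSU consumption over the trailing four hours
(48 five-minute buckets, which I believe is how WLM tracks it, though
this is not IBM's exact algorithm and the sample numbers below are made
up). The point of the sketch is that a short spike well above the cap
does not trip the 4HRA as long as enough "banked" quiet intervals
precede it:

```python
from collections import deque

WINDOW = 48  # 48 five-minute intervals = 4 hours

def rolling_4hra(samples, window=WINDOW):
    """Yield the rolling-average MSU after each interval.

    Until a full window accumulates, average over the samples seen so
    far (roughly analogous to the 4HRA building up after an IPL).
    """
    buf = deque(maxlen=window)
    for msu in samples:
        buf.append(msu)
        yield sum(buf) / len(buf)

# Hypothetical: 100-MSU group capacity, quiet running at 60 MSU,
# then a 40-minute spike at 140 MSU.
cap = 100
samples = [60] * 40 + [140] * 8
peak_avg = max(rolling_4hra(samples))
# peak_avg stays below cap: the spike is covered by "MSUs in the bank"
```

So the spike never pushes the average over the cap, which is exactly why
running below the cap earlier buys breathing room later.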

Also, for whatever reason, IT management (my boss, his colleagues, and
higher managers) seem to have a "thing" about the machine being "capped".
So this is just one way to try to present information to them that they
seem to want. IOW, I get an "attaboy" award for a pretty graph. For me,
personally, I don't care about performance until we start missing SLAs.
Production Control wants jobs finished as fast as possible so that problems
are detected earlier and there is more time to fix them before we get
"dinged" for missing an SLA. In the main, this means that we end up
finishing early.

We are doing a lot of "strange", and perhaps even foolish, things. The
reason is that IT management wants to decrease our MSU max, because each
reduction of 1 MSU gets us a price break of $12,000 / year. That seems like
little, but we are scrounging pennies. And this despite the fact that our
z/OS system is scheduled to die in Dec 2015.


>
> Kees.
>
>
-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
