Re: Calculating SIIS% as MSUs or MIPS

2023-05-05 Thread Andreas von Imhof
Dump SMF types 70, 72 and 113 from every system on the z server.

Run the following build (MYIN is FB 80):

%LET MACKEEP=  MACRO _XLA113  _XLA113F  % ;
%UTILBLDP(BUILDPDB=NO,USERADD=7072 113,
  OUTFILE=MYIN,
  WANTSMF=70.1 113.1,  
  INCLAFTR=ASUM70PR ASUM113);  
  %INCLUDE MYIN; RUN;  

set dagje.ASUM113 (obs=100);

For a z15 the calculation is:
z15: E164 / B2 * 100% = I-writes sourced with L3 intervention / I-writes

In SAS, against the ASUM113 data, that becomes:

siis = (extnd164 / basic02) * 100 ;

SMF70CIN gives you the engine type (CP or IIP)
CECSER identifies the CEC if you have more than one
SMF70CPA gives the SU/sec (see, I like MSU)
MIPSEXEC gives the executed MIPS (only if you are old like me)
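
If you want an MSU figure from that, the conversion is simply service units per hour divided by 1,000,000, with SMF70CPA supplying the SU/sec rating. A minimal SAS sketch of the idea; CPUSECS and INTSECS are placeholder names for the CPU seconds used and the interval length, so substitute whatever your MXG type 70 summary actually calls them (dagje is just the libref used below):

* Sketch only - CPUSECS and INTSECS are placeholders for the real ;
* MXG variable names for CPU seconds used and interval length.    ;
data msu_rate;
  set dagje.asum70pr;                     /* per-system, per-interval rows  */
  su_used  = cpusecs * smf70cpa;          /* service units in the interval  */
  msu_rate = su_used * (3600 / intsecs)   /* normalise to SU per hour,      */
             / 1000000;                   /* 1 MSU = 1,000,000 SU per hour  */
run;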

Barry makes it all very easy for us.

PS: if SAS / MXG is not running on your client's system but on your own system, 
then please check for SW license issues first.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Calculating SIIS% as MSUs or MIPS

2023-05-05 Thread Andreas von Imhof
Using type 30's is not my first choice. You need to calculate capture ratios, 
preferably by workload type. I cannot remember whether DB2-type enclaves are 
captured there or whether it is all lumped into the DB2 address spaces.

I think that Shivang wants a total view. The document from John Burg spells it 
out quite clearly, e.g. page 4:
z16: E170 / B2 * 100% = I-writes sourced with L2 intervention / I-writes

I would take the type 70's, match the intervals with the 113's and calculate the 
percentage.
Never use MIPS; use HW or SW MSU (and clearly state which one you are using). 
Why MSU? All contracts / pricing are based on it.
MSU is also very easy to match to the type 72's, and type 72's make reporting 
easy peasy.
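
A minimal SAS sketch of that interval match, assuming the usual MXG SYSTEM / STARTIME keys and the EXTND164 / BASIC02 counter names mentioned in the earlier post (adjust the dataset and BY-variable names to whatever your build produces; dagje is the libref from the earlier snippet):

* Sketch only - dataset and BY-variable names may differ per MXG setup ;
proc sort data=dagje.asum70pr out=t70;  by system startime; run;
proc sort data=dagje.asum113  out=t113; by system startime; run;

data siis_by_interval;
  merge t70 (in=a) t113 (in=b);
  by system startime;                     /* one row per system and interval */
  if a and b;                             /* keep only matched intervals     */
  siispct = (extnd164 / basic02) * 100;   /* z15 SIIS% = E164 / B2 * 100     */
run;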

Shivang, what tooling are you going to use to do this?

Andreas von Imhof
Capacity & Performance z/OS - zCAP 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFSORT / ICETOOL SPLICE function

2022-09-30 Thread Andreas von Imhof
Hi Sri,
It works! Thank you so much!

You made the statements very simple (eazy peazy). Any fool (just like me) can 
use it.

The saving:
2.09 million input records

REXX  - 9 minutes clock time, 6 minutes CPU (IBM Z15 8561-723)
DFSORT - 9 seconds clock time, 0.78 seconds CPU

Unbelievable!

Kind regards, Andreas

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFSORT / ICETOOL SPLICE function

2022-09-30 Thread Andreas von Imhof
Hi Sri
This is great! Thank you!
It's almost there. It only processes the first occurrence of "CURRENT PLAN 
OPERATION", and outputs one record correctly. 
There are many occurrences of "CURRENT PLAN OPERATION", and I need a record for 
each. Could you please help with the last bit?
Kind regards, Andreas

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFSORT / ICETOOL SPLICE function

2022-09-30 Thread Andreas von Imhof
Hi Max

This was my miserable attempt (I tried various variations). I just keep getting 
the very first and very last record merged in both the //SPLCE and //OUT1 
datasets.

********************************* Top of Data **********************************
----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
 REC RD VERSI N NUMBER  CURRENT PLAN OPERATION  0001
******************************** Bottom of Data ********************************

//S1       EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN1      DD DISP=SHR,DSN=IMHOFAV.B.SRT3
//OUT1     DD DISP=(NEW,CATLG),DSN=IMHOFAV.B.SRT3.FORMAT,
//            SPACE=(CYL,(10,10),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//SPLCE    DD DISP=(NEW,CATLG),DSN=IMHOFAV.B.SRT3.SPLICE,
//            SPACE=(CYL,(10,10),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//TOOLIN   DD *
* REFORMAT THE IN1 RECORDS FOR SPLICING  
COPY FROM(IN1) TO(SPLCE) USING(CTL1) 
SPLICE FROM(SPLCE) TO(OUT1) ON(72,8,ZD) KEEPNODUPS WITHANY - 
  WITH(1,8) WITH(10,8) WITH(19,8) USING(CTL1)
/*   
//CTL1CNTL DD *  
  OPTION COPY
  INREC IFOUTLEN=80, 
IFTHEN=(WHEN=GROUP,BEGIN=(33,22,CH,EQ,C'CURRENT PLAN OPERATION'),
RECORDS=3,PUSH=(73:ID=8)),   
IFTHEN=(WHEN=(6,14,CH,EQ,C'APPLICATION ID'), 
   BUILD=(1:8,8)),   
IFTHEN=(WHEN=(6,14,CH,EQ,C'JOB NAME  '), 
   BUILD=(10:17,8)), 
IFTHEN=(WHEN=(6,17,CH,EQ,C'APPLIED RUN CYCLE'),  
   BUILD=(19:25,8))  
  OUTFIL FNAMES=SPLCE,BUILD=(1,80)   
/*

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


DFSORT / ICETOOL SPLICE function

2022-09-30 Thread Andreas von Imhof
Hi 

I am trying to create 1 output record from multiple input records. Also the 
output record will be reformatted.
I have been trying to get the ICETOOL SPLICE function to work, but alas to no 
avail.
Please can someone help?

The reason for wanting to use SORT/ICETOOL is to reduce CPU consumption (the 
current REXX runs several times per day against input files with millions of 
records). The REXX is also difficult to maintain.
If SORT can do this then it will be easy for the users to maintain.

From the input records I want to scan for "CURRENT PLAN OPERATION" and select 
the first occurrence of APPLICATION ID / JOB NAME / WORKSTATION NAME. 
The rest of the records are discarded until the next occurrence of "CURRENT PLAN 
OPERATION".

Input data (output from TWS, the old OPC). Copying and pasting it into a dataset 
(LRECL 80) will restore the alignment:

----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
1   CURRENT PLAN OPERATION  
___ 
 APPLICATION ID   :ASC4D99A 
 INPUT ARRIVAL DATE   :220926   
 INPUT ARRIVAL TIME   :0645 
 OPERATION NUMBER :  10 
 AUTHORITY GROUP  :TEST 
 DESCRIPTIVE TEXT :Unl tablespacestats DBTZ 
 JOB NAME :C4D9910S 
 JOB ID   :JOB27971 
 WORKSTATION NAME :SONS 
 FORM NUMBER  : 
 PLANNED START DATE   :220926   
 PLANNED START TIME   :00064501 
 PLANNED END DATE :220926   
 PLANNED END TIME :00064502 
 OPERATION INPUT ARRIVAL DATE : 
 OPERATION INPUT ARRIVAL TIME : 
 OPERATION DEADLINE DATE/TIME :2209270644   


Desired output data:
Built from APPLICATION ID / JOB NAME / WORKSTATION NAME

ASC4D99A C4D9910S SONS
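
For reference, one technique that gets close without SPLICE is to let WHEN=GROUP carry the three wanted fields forward with PUSH, then keep only the record on which all three have arrived. This is only a rough sketch: all column positions (the 6,... label positions and the 24,8 value positions) are placeholders loosely based on the sample above and must be adjusted to the real report layout, and it assumes each label appears once per "CURRENT PLAN OPERATION" block with WORKSTATION NAME last.

//S2       EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=SHR,DSN=IMHOFAV.B.SRT3
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
* SKETCH ONLY - ADJUST ALL COLUMN POSITIONS TO THE REAL REPORT LAYOUT
  OPTION COPY
* EACH WANTED LABEL STARTS A GROUP AND ITS VALUE IS PUSHED FORWARD
  INREC IFTHEN=(WHEN=GROUP,
          BEGIN=(6,14,CH,EQ,C'APPLICATION ID'),
          PUSH=(81:24,8)),
        IFTHEN=(WHEN=GROUP,
          BEGIN=(6,8,CH,EQ,C'JOB NAME'),
          PUSH=(90:24,8)),
        IFTHEN=(WHEN=GROUP,
          BEGIN=(6,16,CH,EQ,C'WORKSTATION NAME'),
          PUSH=(99:24,8))
* THE WORKSTATION NAME LINE IS THE LAST OF THE THREE, SO BY THEN ALL
* THREE VALUES ARE PRESENT - KEEP ONLY THAT LINE AND REFORMAT IT
  OUTFIL INCLUDE=(6,16,CH,EQ,C'WORKSTATION NAME'),
         BUILD=(81,8,X,90,8,X,99,8,80:X)
/*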


If needed I can upload a larger version of the input file to Google drive (not 
sure I can add an attachment here).

Thanks in advance and kind regards
Andreas (aka the now useless z performance specialist)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: R: zEDC compression on z14 and z15 by using ADRDSSU

2022-03-22 Thread Andreas von Imhof
Despite the fact that we have thousands of volumes, we don't take full volume 
dumps anywhere, so I cannot give you a comparison. We also don't have TAPE 
anymore. :-)

Perhaps open a ticket with IBM and let us know what IBM says?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: zEDC compression on z14 and z15 by using ADRDSSU

2022-03-21 Thread Andreas von Imhof
Have you looked at the amount of compression that you are getting when 
comparing the z14 to the z15?
In our shop SMF data went from about 6:1 compression on the z14 to 10:1 on the 
z15, which is a massive improvement.
I do not have the CPU stats, and I really could not care less: the dumps run at 
night, when we have a massive amount of spare MSU available.

You wrote that you have 4000 volumes. Most of this is surely DB2 data (or 
another DBMS)? Don't you use image copies and let the applications determine 
when and where the copy should be made? Depending on the nature of the data, 
legal requirements may force you to manage the backups of different data 
differently.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: TADz and SMF

2022-02-14 Thread Andreas von Imhof
Turning it off in SMFPRMxx is like putting your head in the sand. Turn it off 
in the application; just take a look at the install manual. These kinds of "new" 
applications from IBM can burn a lot of CPU generating those SMF records.
For example, we have IBM's Information Broker. Someone turned on SMF logging for 
it on our dev LPARs (they call it tracing). The SMF data was only 30 GB per day, 
but the CPU overhead was a massive 350 MSU per hour!
As far as record type 131 goes, I think that is the number your colleague 
assigned during the install.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: WLM Guidance/Suggestions ! ! !

2020-02-03 Thread Andreas von Imhof
What Kees has written is correct. My rule of thumb is: the faster the CP, the 
lower the velocity. With CPU upgrades I typically revise the velocity goals down.
It would help if you posted a snapshot of the SUM report in RMF III; then we 
would have all the info, not just a tiny bit, and could help you more 
accurately.
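
A rough worked example of why, using the standard formula (execution velocity = 
using samples / (using + delay samples) * 100) with made-up numbers: a service 
class collecting 60 using and 40 delay samples per interval achieves velocity 60. 
On fewer, faster CPs the same work might collect only 35 using samples against 
roughly the same delay samples, i.e. 35 / (35 + 40) * 100, which is about 47, so 
a goal of 60 that used to be met comfortably is suddenly missed unless it is 
revised down.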

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN