CICS by default produces giant volumes of SMF data, and much of it is
probably never even looked at. I've seen sites where CICS was
responsible for well over 80% of the SMF data, and all they extracted
from it was transaction CPU and response time: a few bytes' worth of
data per kilobyte produced.

As Roland pointed out, this can be cut down enormously by coding the
CICS MCT (Monitoring Control Table) to record only the data required.
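For anyone who hasn't done this before, the MCT source is just a set of
DFHMCT macro statements assembled into the table. A rough sketch is
below; the field/group identifiers on INCLUDE are placeholders only, so
check the DFHMCT macro documentation for your CICS release for the real
performance-class field IDs:

```
* Sketch of an MCT that suppresses all performance-class fields
* and then adds back only the handful actually reported on.
* (INCLUDE field IDs here are illustrative, not real ones.)
         DFHMCT TYPE=INITIAL
         DFHMCT TYPE=RECORD,CLASS=PERFORMANCE,EXCLUDE=ALL
         DFHMCT TYPE=RECORD,CLASS=PERFORMANCE,INCLUDE=(1,2)
         DFHMCT TYPE=FINAL
         END
```

The EXCLUDE-everything-then-INCLUDE approach tends to be the safer way
round: new fields added in a CICS upgrade stay excluded by default
instead of silently inflating the records again.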
 

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Duncan Walker
Sent: Friday, March 02, 2007 7:18 AM
To: [email protected]
Subject: Re: How are you handling high SMF record volume?


We were also having a lot of problems with the large volumes of SMF
data we produce (though in our case it was CICS data) and the
associated contention issues. What we did was increase the size of the
MANx datasets so they could never fill in 30 minutes, and have our
DUMPXY jobs run every 30 minutes to clear them out. The DUMPXYs write
to system-specific GDGs, creating a +1 generation every half hour. At
end of day, we run a job that MODs all the GDGs to a single cartridge
as one large system-specific file, and then another job reads in the
cartridge datasets for all our LPARs and writes them back to DASD,
splitting them into CICS, DB2, etc. GDGs for further processing...
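For reference, a half-hourly dump step along these lines is basically
just the SMF dump program IFASMFDP pointed at a GDG. A sketch follows;
the dataset names are invented, and OPTIONS(ALL) on the INDD statement
is what both dumps and clears the MANx dataset:

```jcl
//SMFDUMP  EXEC PGM=IFASMFDP
//DUMPIN   DD  DISP=SHR,DSN=SYS1.MAN1
//DUMPOUT  DD  DSN=OUR.SMF.SYSA.HALFHR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  INDD(DUMPIN,OPTIONS(ALL))
  OUTDD(DUMPOUT,TYPE(000:255))
/*
```

In practice you dump the MANx dataset that SMF has just switched away
from (e.g. after an I SMF or from the IEFU29 exit), not the one
currently being written.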

The benefits are...

1) We don't have to process those large volumes of CICS records just to
look at RMF records etc.
2) The GDGs created every half hour mean the data can be read on the
day it was created, without impacting SMF collection, by referencing
the absolute generation. Very handy for problem determination.
3) The half-hourly GDGs also make it very easy to resolve any invalid
records caused by space problems etc. (not that anyone should get space
issues in this day and age! but...). Our daily file used to be
somewhere in the region of 15,000 cylinders, and running a sort to pick
out one dodgy record was a long-winded operation, to say the least!
4) Using REXX to find the absolute GDG names and feeding that info into
our nightly dump job means we can back up yesterday's data without
impacting the collection process...
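The REXX in point 4 can be as simple as trapping LISTCAT output and
picking the absolute generation names off the NONVSAM lines. A rough
sketch, with an invented GDG base name:

```rexx
/* REXX - list cataloged generations of an SMF GDG base so a     */
/* nightly job can be built against the absolute dataset names.  */
base = 'OUR.SMF.SYSA.HALFHR'          /* invented GDG base name  */
x = OUTTRAP('line.')                  /* capture TSO cmd output  */
"LISTCAT ENTRIES('"base"') GDG ALL"
x = OUTTRAP('OFF')
do i = 1 to line.0
  if word(line.i,1) = 'NONVSAM' then  /* one line per generation */
    say 'Found generation:' word(line.i, words(line.i))
end
```

The absolute names (the GnnnnVnn datasets) can then be written to a
control dataset or substituted straight into the nightly backup JCL.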

Cheers, Duncan

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
