Hi Mike,

>> Mike Baker <[EMAIL PROTECTED]> wrote:
>> If we have lots of redundant datasets on the machine, and many HLQs
>> (high-level qualifiers) which could (also) be completely removed, but
>> have not been removed / cleaned up, is this likely to have much of a
>> performance degradation effect on the Catalog / CAS?

Mike, I'm not certain I fully understand the description of your
situation.  For example, how many is "lots", and what exactly is a
"redundant dataset"?

Nevertheless, whichever way I interpret it, my opinion is that it does
not affect the performance of either the catalog or CAS.

The catalog is physically a VSAM KSDS, accessed directly by key, and
even if it contains thousands (even tens of thousands) of useless
records, the speed of access to any record in the catalog will not be
affected by the "redundant" records.  Most (possibly all) of the
catalog's index will/should be in CAS buffers, and therefore, accessing
any record in the catalog's data component will require just a single
I/O (at most).
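To illustrate the point with a toy sketch (Python, purely illustrative -- this is not actual catalog internals): with keyed access through an in-memory index, locating any one record costs the same single I/O no matter how many dead records also sit in the data component.

```python
# Toy model of keyed (KSDS-style) access: the "index" lives in memory
# (as the catalog index does in CAS buffers), so locating any record
# costs one simulated I/O, regardless of how many useless records
# the data component also contains.

class ToyKsds:
    def __init__(self):
        self.index = {}       # key -> "CI number" (in-memory index)
        self.data = {}        # CI number -> record (on "disk")
        self.io_count = 0

    def add(self, key, record):
        ci = len(self.data)
        self.index[key] = ci
        self.data[ci] = record

    def read(self, key):
        ci = self.index[key]  # index lookup: no I/O, it's cached
        self.io_count += 1    # one I/O to fetch the data record
        return self.data[ci]

cat = ToyKsds()
cat.add("PROD.PAYROLL.DATA", "useful entry")
for n in range(10_000):       # ten thousand "redundant" entries
    cat.add(f"DEAD.HLQ.DS{n}", "useless entry")

cat.read("PROD.PAYROLL.DATA")
print(cat.io_count)           # still 1: the dead records cost nothing
```

The data set names here are made up; the only point is that lookup cost is driven by the index, not by the record count.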

The same answer applies to CAS performance  --  there's no way that
redundant catalog records can affect CAS performance.  Assuming by
"redundant" data set you mean a cataloged data set that doesn't actually
exist, this redundant record would never be read in the first place, and
would never find its way into CAS.  Since it would never be read, it
wouldn't "take up space" in CAS buffers, as records are brought into CAS
only by specific request when a task attempts to locate a data set.  By
your question, I'm guessing that you possibly believe all of a catalog's
records are somehow buffered in CAS, regardless of being specifically
requested, and that's not true.  
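The demand-only buffering described above can be sketched the same way (again a hypothetical Python model, not CAS itself): a record enters the buffer pool only when a locate-style request names it, so records nobody ever asks for occupy no buffer space at all.

```python
# Toy model of demand-driven buffering: records enter the cache only
# when explicitly requested, so "redundant" records that no task ever
# locates never occupy a buffer.

class DemandCache:
    def __init__(self, backing):
        self.backing = backing    # the full catalog on "disk"
        self.buffers = {}         # only records actually requested

    def locate(self, dsname):
        if dsname not in self.buffers:
            self.buffers[dsname] = self.backing[dsname]
        return self.buffers[dsname]

catalog = {f"DEAD.HLQ.DS{n}": "useless" for n in range(10_000)}
catalog["PROD.PAYROLL.DATA"] = "useful"

cache = DemandCache(catalog)
cache.locate("PROD.PAYROLL.DATA")
print(len(cache.buffers))   # 1 -- the 10,000 dead entries use no buffer
```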

In my opinion, the single biggest performance benefit you can give your
catalog(s) is to turn on VLF (the Virtual Lookaside Facility) caching
for them within CAS (which is specified in the COFVLFxx member of
SYS1.PARMLIB, and can be checked by a MODIFY CATALOG,REPORT,CACHE
operator command).  This topic has been discussed many times on this
Listserv, and can be found in the archives.  There's also very good and
extensive information on this in the z/OS DFSMS: Managing Catalogs
manual, SC26-7409.
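For reference, enabling catalog caching through VLF amounts to defining the catalog class in COFVLFxx and naming the eligible catalogs.  A sketch (IGGCAS is the class name CAS uses; the catalog names and MAXVIRT value below are made-up examples -- substitute your own):

```
CLASS NAME(IGGCAS)               /* catalog class used by CAS       */
      EMAJ(CATALOG.USERCAT.ONE)  /* each catalog to cache via VLF   */
      EMAJ(CATALOG.USERCAT.TWO)
      MAXVIRT(4096)              /* cache size, in 4K blocks        */
```

Note that changes to COFVLFxx take effect only when VLF is restarted.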

Having said that, leaving "redundant" data set records in place  --  for
example, entire HLQ levels of useless data set entries that have never
been cleaned up  --  puts your catalog at risk of significantly greater
problems when/if you have a catalog problem which requires diagnostic
analysis.  Any attempt to run diagnostics on this catalog will likely
identify "lots" (your word) of useless records that just clutter up the
true status of the catalog.  By not cleaning up these entries, you're
potentially creating a bigger problem for yourself at some later time
--  and if that time is when you have an outage on the catalog and it
results in longer recovery time, you may have critical applications
delayed while you struggle with a "dirty" catalog.

I hope I've shed some light on your question.  If I'm on a tangent and
don't understand what you're asking, drop me a note (on the Listserv, or
privately).

Ron Ferguson
President and CEO
Mainstar Software Corporation
www.mainstar.com
[EMAIL PROTECTED]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
