Hi,
I have used SAR at many sites and still do at several. If they are all
writing to the same SAR database, then you are not really going to get
better throughput from 10 or 20 SARSTC (archive task writer) instances than
you will from a single one. They are all writing to the same database, which
is essentially a big sequential file with an index file that points into that
"database".
What you can do is have multiple SAR databases and write the data to them
individually. I'm surprised that CA didn't tell you this stuff already, but
you can have each system write to its own database. You can also create
multiple extents for the index and database components; the first extent of
each is what the ENQs are issued against, so it can be small and should sit
on its own volume (two volumes, one per component). The databases are
available cross-system to the viewer(s).
Basically, you are just trying to keep the archive tasks from competing with
each other for access to the database portion. The index is only needed at
the beginning and end of each extract and is held briefly both times.
If there is very high activity, and you have a way to separate the data
between them, then completely separate databases and indexes would be a
great solution, i.e. have the payroll stuff go to one and accounts payable
go elsewhere.
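To illustrate that kind of split (just a sketch, not CA code; the database
names and job-name prefixes here are invented), the routing decision is
nothing more than a prefix lookup:

```python
# Hypothetical routing of reports to separate CA View databases by
# job-name prefix. All names here are invented for illustration.
ROUTES = {
    "PAY": "SAR.PAYROLL.SARDBASE",   # payroll output
    "AP":  "SAR.ACCTPAY.SARDBASE",   # accounts-payable output
}
DEFAULT_DB = "SAR.GENERAL.SARDBASE"  # everything else

def route_report(jobname: str) -> str:
    """Pick the target database for a job's SYSOUT."""
    for prefix, database in ROUTES.items():
        if jobname.startswith(prefix):
            return database
    return DEFAULT_DB
```

Each database then gets its own archival task, so serialization on one
never delays the other.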
There are several ways to mitigate the issue, but in the end, the database is
still a sequential file with an index into it.
FSS collectors may not be an option for you, but if they are, you would get a
small amount of relief.
"The SARSTC task must complete archival of a SYSOUT before starting archival
of the next SYSOUT. The single threading can lead to a backlog of reports in
the JES spool. To improve this situation, additional FSS/VIEW archiver
collectors can be defined to JES to allow multiple SYSOUTs to be archived
concurrently."
The install manual also describes this problem:
RESERVEs are issued against both the first index extent and the first data
extent; therefore, a review of local configurations is required.
The product issues ENQs and RESERVEs as necessary to maintain the integrity
of its data sets. The primary ENQ (QNAME=SARSTC) is used by the archival
task to ensure that only one archival task starts using a specific database.
The ENQ is defined as SYSTEMS, which will be propagated to all LPARs in a
PLEX. This queue name need not be defined to a system integrity product. A
secondary ENQ (QNAME=SARPAC) is used by the tape consolidation utility
SARPAC. This is also defined as SYSTEMS and need not be defined to a system
integrity product.
Convert all RESERVEs issued by CA View to global enqueues.
Using RESERVE and ENQ
The following table shows how CA View uses ENQ and RESERVE:

  QNAME   Type     Description                                            Integrity Product Control
  SARSTC  ENQ      Restricts the CA View database to one archival task    NO
  SARPAC  ENQ      Restricts the SARPAC utility to one task at a time     NO
  SARACT  RESERVE  Serializes updating of the CA View accounting file     YES
  SARUPD  RESERVE  Serializes updating of the CA View database and index  YES
Any time you serialize something, only one task will be able to update at a
time. To take GRS Star out of the equation, you could have a single system do
all of the updating, but having multiple database/index setups is a far
better option.
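A back-of-the-envelope model shows why the collector count stops mattering
once everything funnels through one database's serialization (the numbers
below are invented): with one database the serialized update times simply
add up, while k databases divide the work.

```python
# Rough model: reports spread round-robin over n databases, and each
# database applies its updates serially (one update holder at a time).
def makespan(update_secs, n_databases):
    """Wall time to drain the backlog, counting only the serialized
    database updates."""
    buckets = [0.0] * n_databases
    for i, secs in enumerate(update_secs):
        buckets[i % n_databases] += secs
    return max(buckets)

backlog = [2.0] * 100        # 100 reports, 2s of database update each
print(makespan(backlog, 1))  # one database: 200.0s, with 1 or 23 collectors
print(makespan(backlog, 4))  # four databases: 50.0s
```

Adding collectors only lengthens the queue behind the serialized update;
adding databases actually divides it.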
CA even makes a blanket statement on this in the reference guide:
Run Multiple Archival Tasks
Multiple archival tasks can be run at the same time; however, each task must
use a different database.
The archival task's ENQ on the high-level qualifier of the database name
ensures that a different database is used.
Important! For sites that have more than one processor, ensure that multiple
archival tasks with the same database do not run at the same time on different
processors. Otherwise, you can do permanent damage to the database. CA View
prompts the operator for verification if another task with the same database is
already executing on another processor.
Meet the following requirements when running multiple archival tasks:
• A different database is defined and used for each task
• A different recovery file, if used, is defined and used for each task
You don't have to worry about damage because you have GRS up, but it still
won't be fast to have them all go to the same place.
Let me know if you have any more questions.
Brian
On Thu, 8 Apr 2021 02:10:25 +0000, A T & T Management <[email protected]>
wrote:
>
>Just some thoughts: you have 23 collectors and all are going to the same
>dataset? How busy is the dataset? Also, how busy is the JES2 spool? Just
>thinking: 23 collectors, X number of jobs dumping into the spool. Wondering
>if JES2 fencing would be beneficial? I.e. creating a few more JES2 spool
>datasets on different volumes, so jobs would spread out across the various
>new spool datasets. I also wonder whether CA is using the newer method of
>extracting spool datasets, doing their own, or perhaps using external
>writers to get the spool datasets?
>Scott
> On Wednesday, April 7, 2021, 8:36:50 PM EDT, Glenn Miller
> <[email protected]> wrote:
>
> I have a few questions that I wanted to ask the group regarding their
> throughput expectations and experiences with the CA-View(SAR) software
> product.
>
>First, I should say that my customer has been using the CA-View(SAR)
>software product for a number of years and, until recently, has basically
>not had any complaints/issues. However, within the past few months, we have
>consistently received complaints basically saying "CA-View(SAR) is slow" or
>"CA-View(SAR) is not working as fast now as it used to". These complaints
>indicate that the CA-View(SAR) SYSOUT Collectors are not able to immediately
>"process" all of the SYSOUT from the JES2 spool and store the output on the
>CA-View(SAR) disk database at "certain" timeframes. For example, last night
>they had a high-water mark of over 14,000 jobs in the JES2 SYSOUT class that
>is used by the CA-View(SAR) SYSOUT Collectors. We have been working with
>CA/Broadcom CA-View(SAR) support and, as of this time, based on the data we
>have provided, they do not see any obvious areas of concern.
>
>Is it reasonable to expect the CA-View(SAR) SYSOUT Collectors to be able to
>"process" the SYSOUT from multiple hundreds of jobs per minute? Or is this an
>example of trying to shovel 10 pounds of "stuff" into a 1 pound can?
>
>After doing some investigation ourselves, we found that during those peak
>timeframes the CA-View(SAR) SYSOUT Collectors ( we have 23 of them running,
>all selecting SYSOUT from the same JES2 SYSOUT class ) spend nearly the
>entire time ( in the very high 90% range ) waiting for the global ENQ ( we
>are running GDPS HyperSwap, using GRS Star mode ) on the resource "SARUPD" /
>"hlq.SARDBASE.I0000001". We are not experiencing other issues that would
>indicate that GRS itself is having any problems.
>
>A little information about the configuration of this environment.
>
>Three z13 machines, one Model 725, one Model 726, one Model 727
>Three zEC12 external coupling facility machines
>One DS8886 (no flash storage) "GDPS Site 1" / One DS8886 (no flash storage)
>"GDPS Site 2"
>10 way Sysplex / all z/OS V2R3 systems / 8 customer application z/OS systems /
>2 GDPS z/OS systems
>The 23 CA-View(SAR) SYSOUT Collectors have always run on only 1 of the 8
>customer application z/OS systems
>
>
>Any thoughts/ideas/experiences would be appreciated.
>
>Thank you in advance,
>Glenn Miller
>
>----------------------------------------------------------------------
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to [email protected] with the message: INFO IBM-MAIN