Actually, if the jobs are running concurrently, at pretty much the same
time, PAV will not be much help.  PAV is designed for multiple
accesses to different extents.  Since all the jobs will be accessing the same
extent range, only one UCB will probably be used.

I like the idea of using Hiperbatch the best.  Work up a job to load the
dataset into Hiperbatch buffers, then let the 35 jobs go at it.  When done,
just unload the Hiperbatch buffers.
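A minimal "priming" job might look something like the sketch below. This is an assumption-laden example, not a tested procedure: the dataset name MY.SHARED.DSN is a placeholder, and it assumes DLF (the Data Lookaside Facility behind Hiperbatch) is active and the installation's DLF exit qualifies this dataset for Hiperbatch. The idea is just to read the dataset through once so its blocks land in the Hiperbatch buffers before the 35 jobs start.

```jcl
//LOADHB   JOB (ACCT),'PRIME HIPERBATCH',CLASS=A,MSGCLASS=X
//* Read the dataset end-to-end once; with DLF active and the dataset
//* qualified, the data is captured into Hiperbatch buffers as a side
//* effect of the read.  MY.SHARED.DSN is a placeholder name.
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.SHARED.DSN,DISP=SHR
//SYSUT2   DD DUMMY
```

The 35 reader jobs then open the same dataset with DISP=SHR as usual; no JCL change is needed on their side.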

On Mon, 1 May 2006 17:01:19 +0200, Vernooy, C.P. - SPLXM
<[EMAIL PROTECTED]> wrote:

>"Bob Shannon" <[EMAIL PROTECTED]> wrote in message
news:<[EMAIL PROTECTED]>...
>> >"How many jobs can access a DISK DSN with a DISP=SHR before any
>> >performance degradation occurs due to access contention?
>>
>> >"Meaning, We have 1 dataset sitting on disk, we have 35 jobs that need
>> >to access this one dataset, how many jobs can run at one time accessing
>> >this one dataset before the access creates contention and stops jobs
>> >from running?"
>>
>> >All of the access would be read-only, and AFAIK the dataset is
>> non-VSAM.
>>
>> Sounds like an ideal environment for Hiperbatch. It's still supported,
>> and this sounds like the exact scenario it was created to address.
>>
>
>Sounds also like an ideal environment for PAVs.
>
>Kees.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
