In a recent note, Bruce Black said:

> Date:         Tue, 3 Oct 2006 09:56:14 -0400
> 
> >
> > Forget it.  PDSEs don't have blocks.
> Sure they do, they are always 4K.  The blocksize of the PDSE is emulated
> when it is read with a standard access method
> 
Are they called "blocks" or "pages" nowadays?

OK.  That was hyperbole on my part.  A fairer statement is that
the specified BLKSIZE has no effect on the representation of data
on backend storage, thus none on the utilization of space, and
probably little on channel bandwidth or EXCP overhead.  This
frees developer resources for more productive uses than matching
BLKSIZE specification to device geometry.  BLKSIZE is retained in
large part for compatibility with Classic programs that expect
OPEN to put something plausible in DCBBLKSI.
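As an illustration (data set name hypothetical), a DD statement like the following still accepts a BLKSIZE for a PDSE, even though it governs only the simulated blocking seen by the access method, not the 4K pages on disk:

   //SYSUT2   DD  DSN=MY.TEST.PDSE,DISP=(NEW,CATLG),
   //             DSNTYPE=LIBRARY,
   //             SPACE=(CYL,(5,5,10)),
   //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)

DSNTYPE=LIBRARY is what makes it a PDSE; the BLKSIZE shown is the familiar half-track value for FB/80 on 3390, chosen here only to make the point that the number is plausible rather than meaningful for space.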

As Terry Sambrooks and others have pointed out, a larger BLKSIZE
specification decreases the overhead for simulated end-of-block
logic.  But I must wonder whether larger buffers lead to a larger
WSS and increased paging I/O.  The access method designers would
likely point out that that's not Shmuel's dog.

Is there any benefit in specifying anything other than BUFNO=1
for a PDSE?  Why?
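For concreteness (names hypothetical), BUFNO can be coded on the
DD like any other DCB override:

   //INDD     DD  DSN=MY.TEST.PDSE(MEMBER1),DISP=SHR,
   //             DCB=BUFNO=5

The question is whether those access-method buffers buy anything
when the real I/O is done in 4K pages underneath.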

Deep within the PDSE logic, there must be a number of 4Ki page
buffers for conversion to logical blocks.  Is the number of page
buffers used affected by the specification of BLKSIZE?

If multiple jobs open the same PDSE member for READ, are those
page buffers shared?

4Ki seems surprisingly small compared to current recommendations
for BLKSIZE for other data set types.

(Shame on me!  Now I'm fretting about implementation details.)

-- gil
-- 
StorageTek
INFORMATION made POWERFUL

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html