As with almost everything else, the answer is it depends.  For "large"
sequential datasets as you described, I agree.

But consider a card-image PDS (JCL, source, etc.) where most members average
around 400 records.  A half-track block on a 3390 holds 349 records (27,920
bytes at LRECL=80).  The first member occupies 2 blocks on the first track,
and the second member's first block will not fit on that track.  The net
result is that roughly 300 records' worth of space on track 1 is wasted.
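The arithmetic above can be sketched as follows (a rough illustration using the numbers from this post - LRECL=80, a 349-record half-track block, 2 blocks per 3390 track, and a 400-record member - not a derivation from 3390 device geometry):

```python
LRECL = 80
RECS_PER_BLOCK = 27920 // LRECL    # 349 records in a 27,920-byte half-track block
BLOCKS_PER_TRACK = 2               # two half-track blocks fit on a 3390 track
MEMBER_RECS = 400

# The member needs two blocks (349 + 51 records); the next member's
# half-track block cannot start on the same track, so the remainder
# of the track is wasted.
blocks_used = -(-MEMBER_RECS // RECS_PER_BLOCK)      # ceiling division -> 2
track_capacity = BLOCKS_PER_TRACK * RECS_PER_BLOCK   # 698 records per track
wasted = track_capacity - MEMBER_RECS                # 298, i.e. ~300 records
```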

With a blocksize of 3120, 15 blocks (39 records each) fit on a track, for a
total of 585 records - almost a 50% improvement.  My preference is 6160
(reasonably efficient on both 3390 and 3380 - yes, we have very old archived
datasets which occasionally must be restored), which allows 616 records per
track.
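For comparison, the records-per-track figures for the two blocksizes work out like this (blocks-per-track counts taken from this post, not computed from the 3390 capacity formula):

```python
LRECL = 80
# blocksize -> blocks per 3390 track, as stated above
blocks_per_track = {3120: 15, 6160: 8}

recs_per_track = {
    bs: (bs // LRECL) * blocks           # records per block * blocks per track
    for bs, blocks in blocks_per_track.items()
}
# 3120 -> 39 * 15 = 585 records; 6160 -> 77 * 8 = 616 records
```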

We previously had a similar discussion for load modules where 32760 is
optimal for the Binder but not necessarily for IEBCOPY.



:>: -----Original Message-----
:>: From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
:>: Behalf Of Eric Bielefeld
:>: Sent: Tuesday, July 23, 2013 10:21 AM
:>: To: IBM-MAIN@LISTSERV.UA.EDU
:>: Subject: Re: BLKSIZE=3120
:>:
:>: I believe that the net result of coding smaller blocksizes does result
:>: in
:>: being able to store less data.  If you had 1,000 volumes all defined as
:>: 3390-9s, and each volume had 100 datasets that filled the volume blocked
:>: at
:>: 512 bytes, you would store a fraction of the data if you blocked each of
:>: those datasets at 1/2 track blocking.  That is a function of the z/OS
:>: architecture.
:>:
:>: I don't know exactly how the data is stored on the tracks, but I believe
:>: that the result of smaller blocksizes means that you will store a lot
:>: less
:>: data.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN