On 02/13/2012 09:19 AM, Chris Craddock wrote:
On Mon, Feb 13, 2012 at 8:56 AM, Paul Gilmartin <[email protected]> wrote:

On Mon, 13 Feb 2012 07:21:11 -0600, McKown, John wrote:

Or, as the programmers at our shop would do:


SPACE=EAT-EVERYTHING-IN-SIGHT-AND-CAUSE-OTHER-JOBS-TO-ABEND-BECAUSE-MY-STUFF-IS-IMPORTANT-AND-YOUR-STUFF-ISNT.

In many other systems, such as Winblows, everybody gets their own
personal "space". And if it is "used up", it doesn't impact others. z/OS
shares DASD space.  ...

The z/OS cultural norm for HFS and zFS is to give each user a
dedicated filesystem for HOME.  This is similar to the behavior
of personal instances of those "other systems".



I think it is fair to say that JCL and space management are areas where
z/OS truly is archaic. The "other" world manages to get by just fine
without having to figure out how much resource to give. There's no reason
z/OS couldn't do the same other than slavish adherence to legacy. IMHO it
is about time the system itself took care of answering its own incessant
"how big?", "how many?", "how often?" questions. It's 2012 ferpetesakes.
I'm all in favor of making sure that existing applications continue to
work. I am far less impressed with continuing to impose 1960s thinking on
new ones.


Requiring application programmers to think in terms of tracks and cylinders, and to understand the interaction between physical block size and track capacity, is indeed archaic, as are artificial restrictions on the number of extents or volumes. Before emulated DASD and large DASD cache sizes, sensitivity of space allocation to track and cylinder boundaries was frequently necessary for performance reasons, but that is no longer the case. It should be possible to specify data set limits simply in terms of expected data bytes, or expected record count and average record length, without regard for tracks, cylinders, extents, or volumes. And given some simple mechanism for specifying such limits, z/OS should also provide support for monitoring whether application growth is putting data sets at risk of exceeding their limits. Restricting sequential allocation to record-based (AVGREC) allocation of SMS extended-format sequential data sets with space constraint relief and system-determined block size comes close, but it is an incomplete solution and works only for sequential files.
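The record-oriented style alluded to above can be sketched in JCL (the data set name and the DATACLAS name EXTSEQ are illustrative; the data class stands in for whatever installation-defined SMS class supplies extended format and space constraint relief):

```jcl
//* Allocate in terms of records, not tracks or cylinders.
//* With AVGREC coded, the first SPACE subparameter is the
//* average record length in bytes, and AVGREC=K scales the
//* primary/secondary quantities by 1024 -- so this requests
//* room for roughly 500K records averaging 800 bytes each,
//* with no BLKSIZE coded (system-determined block size).
//OUTFILE  DD DSN=PROD.APPL.MASTER,DISP=(NEW,CATLG,DELETE),
//            RECFM=VB,LRECL=804,
//            SPACE=(800,(500,100),RLSE),AVGREC=K,
//            DATACLAS=EXTSEQ
```

The point is that nothing in this DD statement mentions device geometry; the system translates records into tracks behind the scenes.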

The MVS allocation strategy, which generally requires dynamic secondary extensions to data sets when the size exceeds what can reliably be obtained on a single volume, has always been flawed. Even when the exact size of a large data set was known in advance, there was never a guarantee that space for the required secondary extensions would be available on the selected volumes. In effect, there was no easy way to convey to z/OS, via primary/secondary specifications, what the true limit of the data set should be, because the actual maximum number of secondary allocations was always an unknown, with no guarantee at the beginning of step execution that even one dynamic secondary could be allocated on any of the chosen volumes.
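A minimal sketch of that uncertainty: for a classic (non-extended-format) sequential data set limited to 16 extents per volume, the primary/secondary pair bounds the size only loosely, and nothing guarantees the secondaries can actually be obtained:

```jcl
//* Guaranteed at step start: only the 100-cylinder primary.
//* Best case on one volume: 1 primary + 15 secondary extents,
//*   100 + 15*50 = 850 cylinders.
//* Worst case: any secondary extension can fail at run time
//* (Sx37) if the volume has filled in the meantime.
//BIGDS    DD DSN=PROD.APPL.BIG,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50))
```

So the coded parameters express neither a floor (beyond the primary) nor a reliable ceiling on the data set's eventual size.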

Perhaps an awareness of total data bytes is nonessential for data sets below some (installation-dependent) total-byte threshold. But at some point, for larger data sets, those developing a batch application should have an approximate awareness of the data set sizes and record counts involved, so that the concurrent space requirements of a job step can at least be estimated up front, and so that application programmers don't choose an approach appropriate for a toy application of 1,000 records but totally inappropriate for a production application with a million. If a batch application is going to consume a significant percentage of the total DASD farm, there also needs to be some means of knowing that, since it will affect job scheduling and capacity planning.

The z/OS fixation on requiring a data set SPACE specification for allocation, rather than using some totally dynamic approach, is no doubt an outgrowth of the desire for MVS to reliably support unattended batch and, as others have mentioned, to prevent one looping batch job from exhausting available DASD space and causing termination and denial of service for unrelated jobs. Properly designed JCL SPACE parameters (which admittedly take some effort) can also fail a batch job step up front if sufficient DASD space does not exist for successful completion -- much more desirable than allowing a batch job step to run for hours consuming valuable resources, only to blow up because space for a further secondary allocation is unavailable. Operating systems that don't require space estimates for large file allocation are implicitly saying that reliable running of unattended "batch" processes is of lesser importance.
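One way to get that up-front failure (a sketch; the data set name and the 900-cylinder figure are illustrative) is to request the entire anticipated size as primary with no secondary, so allocation fails immediately if the space does not exist rather than hours into the run:

```jcl
//* Demand all 900 cylinders at allocation time; CONTIG asks
//* for a single contiguous extent.  If the space is not
//* available, the step fails before execution instead of
//* abending mid-run with an Sx37 on a secondary extension.
//FULLSZ   DD DSN=PROD.APPL.OUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(900,0),,CONTIG)
```

The trade-off, of course, is that the estimate must be honest: over-asking wastes space for the life of the step (unless RLSE is added), and under-asking reintroduces the mid-run failure.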

--
Joel C. Ewing,    Bentonville, AR       [email protected] 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN