On 05/23/2013 09:44 AM, Thomas Berg wrote:
To clarify.

I wrote:

"What is REALLY needed is to get rid of the absurd requirement to specify the 
amount of storage to allocate for datasets!

The system should allocate/reallocate according to what is needed in the 
actual/immediate need for the dataset without dumping the problem to the user!  (Of 
course limited by appropriate resource constraints.)"


With that I meant:

1. That there are many cases where an unspecified or undeterminable amount of 
space is needed for an allocation.

2. That because of that "undeterminable" status (whether the amount is truly 
undeterminable, or it is simply not feasible to spend the time needed to 
determine it), you cannot make any rule for how much space will be requested at 
run time.

3. That any ACS rules or other techniques (other than products that catch an 
x37 abend at run time) are therefore of limited help.

4. That if IBM had a function that caught an out-of-space condition and 
extended the allocated space to the current need, it would save an enormous 
amount of time now spent correcting the error and rerunning the jobs.

5. That *limiting* the use of space should not depend on what is written in the 
SPACE parm.  Rather, it is something that quotas of some sort (ACS, perhaps) 
tied to the userid and data set name should handle.

6. This is - more or less - how it works in, e.g., the Unix world.  And they 
are maybe not insane?
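For comparison, point 6 can be illustrated in a few lines: in a Unix-style 
filesystem a program simply writes, and the file grows on demand; no SPACE-style 
preallocation is declared anywhere. A minimal Python sketch (the file is a 
throwaway temporary, purely for illustration):

```python
# In the Unix model, a file grows incrementally as data is written;
# the program never declares how much space it will eventually need.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    for _ in range(1000):
        f.write(b"x" * 1024)   # each write extends the file as needed

size = os.path.getsize(path)   # 1000 * 1024 bytes, allocated on demand
os.remove(path)
```

The only limit the writer ever hits is an external one (filesystem full, or a 
quota), which is exactly the separation of concerns argued for in point 5.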



Regards
Thomas Berg
____________________________________________________________________
Thomas Berg   Specialist   z/OS\RQM\IT Delivery   SWEDBANK AB (Publ)



The problem is in the definition of "appropriate resource constraints". It needs to be something more complex than just putting a total space constraint on an individual user, since even individual users engage in actions of different priority and different degrees of certainty. To me it would be inappropriate if a typical debugging run caught in a loop could exhaust a user's space allotment and then cause that user's TSO session, or his batch jobs doing standard compiles, to fail. I would also find it inappropriate if the defaults did not provide a reasonable way to tell a user that he has greatly exceeded "usual" DASD space usage patterns, because more often than not that is an indication of a bug, or of some error in judgement that needs to be addressed, rather than a reason to throw more resources at the job.

There needs to be some kind of multi-tiered approach, where the type of job step and/or a user-specified categorization of data sets enters into the limit determination for individual data sets. While allowing mechanisms for "unusually large" data sets, it should perhaps at some point also require manual intervention of some kind before a process is allowed to allocate more space and continue, when the size of such data sets is seen as a potential threat to future allocations by that user or by other users.
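The multi-tiered idea above can be sketched concretely. All of the categories, 
limits, and the function name below are illustrative assumptions, not any real 
z/OS or SMS interface; the point is only the shape of the policy: small requests 
extend silently, unusual ones warn, and very large ones are held for manual 
approval instead of abending.

```python
# Hypothetical tiered allocation policy: the allowed size depends on both the
# job category and the data-set category, with an escalation path rather than
# a single hard limit. Names and numbers are invented for illustration.

LIMITS_MB = {
    # (job category, data-set category): (auto-extend up to, review above)
    ("debug", "temp"):       (100,    500),
    ("batch", "production"): (10000, 50000),
}

def check_allocation(job_cat, ds_cat, requested_mb):
    auto_limit, review_limit = LIMITS_MB[(job_cat, ds_cat)]
    if requested_mb <= auto_limit:
        return "allow"   # extend silently, no user action needed
    if requested_mb <= review_limit:
        return "warn"    # allow, but flag unusual space usage to the user
    return "hold"        # require manual intervention before extending

# A looping debug job is stopped long before it threatens shared space:
print(check_allocation("debug", "temp", 2000))
```

Under such a policy, the debugging loop in the example is held at a modest 
threshold, while a categorized production batch job can grow far larger before 
anyone is bothered.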

Users should not have to worry about how many volumes are required, about limitations on the number of extents per volume, or about volume fragmentation; but data set allocation limits must still protect users and jobs from each other, and must not needlessly waste resources through over-allocation. In a z/OS environment that may service many different groups of "loved ones", overall availability of the system is more important than the convenience of any individual user.

--
Joel C. Ewing,    Bentonville, AR       [email protected] 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN