You say that the problem happens when all the tasks terminate. Your problem is 
not enough LSQA for termination. During termination a number of RBs are 
GETMAINed by RTM to handle it - like the RB that your ESTAE gets control 
under (a PRB, IIRC), or a PURGEDQ SVRB. Depending on what your ESTAE does, 
you'll need still more LSQA on top of that. 

I don't have a rule of thumb for how much LSQA is needed per TCB. Given that 
you say you create 1000 TCBs, and each TCB creates at least one subtask, we're 
talking at least 2000 TCBs, plus their associated RBs and CDEs. I'd guess that 
you need at least 6 MB of below-the-line storage reserved for LSQA, possibly 
more. The only way to get that is a custom IEFUSI exit that really reserves 
that much LSQA specifically for your job, or the equivalent parmlib member.

Remember that LSQA 'grows' downwards from the top of the below-the-line 
region, while private storage 'grows' upwards from the bottom of the region. 
So conditional getmains don't help here IMHO. You would have to determine the 
current top of the region programmatically, subtract 1-2 MB for termination, 
and then check whether you've still got enough room left to do your getmain.
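
Just to illustrate what I mean by a conditional getmain - a minimal sketch 
using the STORAGE macro (the 4K length and the labels are made up for 
illustration, not a recommendation):

*  Conditional below-the-line request. COND=YES only keeps you from
*  abending when private storage runs out; it tells you nothing about
*  how much room is left at the top of the region for the LSQA that
*  termination will need.
         STORAGE OBTAIN,LENGTH=4096,LOC=24,COND=YES
         LTR   15,15               RC zero = storage obtained
         BNZ   NOSTOR              nonzero = request failed
         LR    2,1                 address of the storage comes back in R1
         ...
NOSTOR   DS    0H                  handle the shortage yourself here

A successful return here only proves that the bottom of the region could 
still grow - and that growth is exactly what eats the space LSQA needs from 
the top.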

Anecdote: Before IBM introduced command classes and all the messages that go 
with too many commands issued at the same time, there used to be regular wait 
states (wait state 07E, IIRC) due to LSQA being exhausted in *MASTER*. 
Commands execute in *MASTER* (for the most part). Too many commands at the 
same time generated the exact same situation you currently have - not enough 
LSQA left. Which is really deadly when it happens in ASID 1. IBM only allows 
100 commands per class these days; if more are issued, they get held back 
until there's 'room' again to have them execute.

Why do you need 1000 TCBs?

Regards, Barbara
