Some of you history buffs with access to really old manuals can correct me if I 
am wrong, but didn't HPO 4 or HPO 5 handle the spool filling up by putting 
output devices in NOTREADY? If that is correct, it may even have been a VM/SP 
thing.

Alan has pointed out the major headache you can have. CP doesn't crash, but 
some service machines will probably crash, others will not do as they are 
supposed to, and many users will be completely confused if it happens. We have 
had it happen even though we have nine full-pack 3390-03s plus a separate DUMP 
volume supporting only about 270 concurrent users. It was easiest to simply 
restart most service machines, since all of them have their consoles spooled. 
Over 100 TPF tests in progress had to be restarted, as the NOTREADY of the 
spooled devices caused TPF to start taking dumps, exacerbating the problem.

Regards,
Richard Schuh


> -----Original Message-----
> From: VM/ESA and z/VM Discussions [mailto:[EMAIL PROTECTED]] On
> Behalf Of Alan Ackerman
> Sent: Monday, November 07, 2005 9:29 PM
> To: [email protected]
> Subject: Re: Q alloc SPool Help BINGO!!!!!!! - now we are on to something!
> 
> 
> On Mon, 7 Nov 2005 17:49:28 -0600, Stephen Frazier
> <[EMAIL PROTECTED]> wrote:
> 
> > On z/VM 3.1, Q PRT ALL shows open spool files. I don't remember when
> > that was added (sometime in VM/ESA, I think), but it was a lot of
> > help diagnosing spool-filling problems. Now VM doesn't crash when the
> > spool gets full. It stops users that are adding to the spool.
> 
> Careful, though -- when the spool fills up, user programs get a NOTREADY
> from the virtual printer or punch. Very few programs are equipped to
> handle this. The usual result is the famous "unpredictable results".
> This also happens when a user exceeds the limit of 9900 spool files for
> any one user. (Why 9900?)
> 
