Back in the "olden days" of HPO, when we ran a very large set of PROFS
users, we ran into occasional SPOOL-full conditions.
At that point I created "READYUR EXEC Y2":
/* READYUR EXEC */
'CP READY 00C'
'CP READY 00D'
'CP READY 00E'
'CP READY 009'
EXIT 0
It was easy to tell users to execute that (easier than having them issue
the commands themselves, and faster than having them log off and log back
on - especially given the time that possibly thousands of logoffs/logons
would have cost back then).
Certainly now it would look more like:
/* READYUR EXEC V2.0 */
'PIPE (NAME READYUR)' ,
'| CP QUERY VIRTUAL UR' ,
'| SPECS 6.4 1' ,
'| UNIQUE WORD 1' ,
'| SPECS /CP READY/ 1 WORD 1 NW' ,
'| CP' ,
'| CONSOLE'
Exit 0
And one could, with a little work, either 'CP SEND' that command to
everyone logged on, or determine which virtual output devices each
logged-on user has "NOTREADY" at the moment and issue the specific 'CP
SEND CP some_ID READY vdev' commands as needed.
Mike Walter
Hewitt Associates
The opinions expressed herein are mine alone, not my employer's.
"VM/ESA and z/VM Discussions" <[email protected]> wrote on
11/08/2005 11:30:38 AM:
> Some of you history buffs with access to really old manuals can
> correct me if I am wrong, but didn't HPO4 or HPO5 handle spool
> filling up by putting output devices in NOTREADY? If that is
> correct, it may even have been a VM/SP thing.
>
> Alan has pointed out the major headache you can have. CP doesn't
> crash, but some service machines will probably crash, others will
> not do as they are supposed to, and many users will be completely
> confused if it happens. We have had it happen in spite of the fact
> that we have 9 full-pack 3390-03s plus a separate DUMP volume
> supporting only about 270 concurrent users. It was easiest to simply
> restart most service machines as all have their consoles spooled.
> Over 100 TPF tests in progress had to be restarted as the NOTREADY
> of spooled devices caused TPF to start taking dumps, exacerbating
> the problem.
>
> Regards,
> Richard Schuh
>
>
> > -----Original Message-----
> > From: VM/ESA and z/VM Discussions [mailto:[EMAIL PROTECTED]
> > Behalf Of Alan Ackerman
> > Sent: Monday, November 07, 2005 9:29 PM
> > To: [email protected]
> > Subject: Re: Q alloc SPool Help BINGO!!!!!!! - now we are on to
> > something!
> >
> >
> > On Mon, 7 Nov 2005 17:49:28 -0600, Stephen Frazier
> > <[EMAIL PROTECTED]> wrote:
> >
> > >On z/VM 3.1 Q PRT ALL shows open spool files. I don't
> > remember when that was
> > >added (sometime in VM/ESA I think) but it was a lot of help
> > diagnosing spool
> > >filling problems. Now VM doesn't crash when the spool gets
> > full. It stops users
> > >that are adding to the spool.
> >
> > Careful, though -- when the spool fills up, user programs get
> > a NOTREADY from the virtual printer or
> > punch. Very few programs are equipped to handle this. The
> > usual result is the famous "unpredictable
> > results". This also happens when a user exceeds the limit of
> > 9900 spool files for any one user. (Why
> > 9900?)
> >
>