Yes, that was and still is true.

So a full spool file doesn't directly cause a CP abend, but it can
indirectly....

The size of your dump area is determined by the memory locations that
CP has "ever" used.  For example, when CP needs 10 pages from the
Dynamic Paging Area, if it has never touched those pages before, your
dump file requirement expands by those 10 pages.  When CP releases
those pages, they are not, repeat "NOT", released from your dump file
requirements.  So if, the next time CP needs a few pages, it reuses
pages it has used before, those pages have already been marked as pages
to be written to the dump file in case of an abend.  However, if they
are pages never before allocated to CP, then even if other pages have
been freed in the meantime, your dump area will expand to include those
pages in the dump in case of a CP abend.
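That high-water-mark behavior can be sketched as a toy model.  This is
just an illustration of the bookkeeping described above, not actual CP
code; the class and page numbers are made up:

```python
# Toy model of CP dump-area sizing: the dump requirement covers every
# page CP has EVER allocated, and never shrinks when pages are freed.

class DumpAreaModel:
    def __init__(self):
        self.ever_used = set()   # pages CP has ever touched
        self.in_use = set()      # pages CP currently holds

    def allocate(self, pages):
        for p in pages:
            self.in_use.add(p)
            self.ever_used.add(p)  # dump file must now cover this page

    def release(self, pages):
        for p in pages:
            self.in_use.discard(p)
        # ever_used is deliberately NOT reduced: the dump
        # requirement stays at its high-water mark

    def dump_pages_needed(self):
        return len(self.ever_used)

m = DumpAreaModel()
m.allocate(range(0, 10))      # 10 never-before-used pages
m.release(range(0, 10))       # freeing them changes nothing for the dump
m.allocate(range(0, 5))       # reuse of old pages: requirement unchanged
m.allocate(range(10, 12))     # 2 brand-new pages: requirement grows
print(m.dump_pages_needed())  # -> 12
```

Run long enough and ever_used approaches all of real memory, which is
the point of the next paragraph.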

So, if you run long enough, all 2 GB of real memory will have been
allocated to CP (and freed) at some point.  That means your dump file
would grow to 2 GB (minus any V=R area or any pages locked by guest
machines).

What happens if your dump file is allocated from spool, spool is full,
and CP tries to allocate a "never before allocated to CP" page?  It
can't expand the dump dataset, and CP abends.

So, indirectly, if you don't have a "dump" area equal in size to your
main memory (up to 2 GB) in an area of type "dump" (not spool), you are
subject to a CP abend when the spool fills.

Apparently the 2 GB max will still be the case with z/VM 5.2.  CP will
still allocate below the line.  It is I/O that can now be moved above
the line with z/VM 5.2.


Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 11/08/05 11:30 AM >>>
Some of you history buffs with access to really old manuals can correct
me if I am wrong, but didn't HPO4 or HPO5 handle spool filling up by
putting output devices in NOTREADY? If that is correct, it may even have
been a VM/SP thing.

Alan has pointed out the major headache you can have. CP doesn't crash,
but some service machines will probably crash, others will not do as
they are supposed to, and many users will be completely confused if it
happens. We have had it happen in spite of the fact that we have 9
full-pack 3390-03s plus a separate DUMP volume supporting only about 270
concurrent users. It was easiest to simply restart most service machines
as all have their consoles spooled. Over 100 TPF tests in progress had
to be restarted as the NOTREADY of spooled devices caused TPF to start
taking dumps, exacerbating the problem.

Regards,
Richard Schuh


> -----Original Message-----
> From: VM/ESA and z/VM Discussions
[mailto:[EMAIL PROTECTED] 
> Behalf Of Alan Ackerman
> Sent: Monday, November 07, 2005 9:29 PM
> To: [email protected] 
> Subject: Re: Q alloc SPool Help BINGO!!!!!!! - now we are on to
> something!
> 
> 
> On Mon, 7 Nov 2005 17:49:28 -0600, Stephen Frazier 
> <[EMAIL PROTECTED]> wrote:
> 
> >On z/VM 3.1 Q PRT ALL shows open spool files. I don't 
> remember when that was
> >added (sometime in VM/ESA I think) but it was a lot of help 
> diagnosing spool
> >filling problems. Now VM doesn't crash when the spool gets 
> full. It stops users
> >that are adding to the spool.
> 
> Careful, though -- when the spool fills up, user programs get 
> a NOTREADY from the virtual printer or 
> punch. Very few programs are equipped to handle this. The 
> usual result is the famous "unpredictable 
> results". This also happens when a user exceeds the limit of 
> 9900 spool files for any one user. (Why 
> 9900?)
> 
