For every byte of real memory, you need 1 byte of paging disk to back it up.
For a system dump, you also need room to copy both the real memory and the
paging disk backup, so figure roughly 3X real memory in total.
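
A rough back-of-the-envelope sketch of that arithmetic (Python, purely
illustrative; the 1:1 backing, the ~30% headroom, and the ~54 GB usable
on a 3390-54 are the rules of thumb discussed below, not hard limits):

    # Illustrative aux-storage sizing, not an IBM formula.
    real_gb = 512                  # installed real storage (Cheryl's example below)
    backing_gb = real_gb * 1.0     # 1:1 backing of real on local page datasets
    headroom_gb = real_gb * 0.30   # ~30% extra, per the old ROT
    steady_state_gb = backing_gb + headroom_gb

    mod54_gb = 54                  # roughly the usable GB on one 3390-54
    volumes = -(-steady_state_gb // mod54_gb)   # ceiling division

    # A stand-alone dump also needs room to copy real memory plus its
    # paging backup, which is where the "3X real memory" figure comes from.
    dump_footprint_gb = backing_gb + 2 * real_gb

    print(f"steady-state aux ~{steady_state_gb:.0f} GB -> {volumes:.0f} x 3390-54")
    print(f"worst-case dump footprint ~{dump_footprint_gb:.0f} GB")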

On Tue, Apr 18, 2017 at 2:18 AM, Vernooij, Kees (ITOPT1) - KLM
<[email protected]> wrote:
> I wonder what the magic 1:1, or in fact any hard relation with real memory is.
> Basically, in virtual memory systems, what does not fit in real must be paged 
> out. So in principle you need the potential total virtual memory minus the 
> real memory as aux storage.
>
> There are a number of ROTs to predict your needed aux, but I don't see the 
> hard 1:1 relation with real memory.
> I usually check the AUX utilization over the last year or so and see if we 
> structurally over- or underallocated AUX.
>
> Do take a large period, because you can have months of moderate AUX 
> utilization and at some point in time, you get a problem which dumps a large 
> DB2 system together with some address spaces and that is the moment that you 
> must have your spike AUX available.
>
> Kees.
>
>> -----Original Message-----
>> From: IBM Mainframe Discussion List [mailto:[email protected]] On
>> Behalf Of Turner Cheryl L
>> Sent: 13 April, 2017 22:41
>> To: [email protected]
>> Subject: Re: Paging subsystems in the era of big&^% memory
>>
>> We have a totally different issue, but I feel it's still related. I even
>> opened a service request with IBM, who sent us off to the manuals, and it
>> is as clear as mud to read the fine manual, techdocs, etc. and get a
>> precise answer. Our problem is constrained ESQA/ECSA on one of our LPARs.
>> We are a large DB2 shop.  Recent retirements and lost years of knowledge
>> in this arena have left those of us who remain scratching our heads.
>>
>> Years ago Sam Knutson said something to the list to the effect of "DASD
>> is cheap, so back up real 1:1", if I'm remembering it right.  And we have
>> been doing that for many, many years, in addition to following the IBM
>> recommendations of adding 30% and having at least 3 local datasets (one
>> techdoc mentions a minimum of 5), whether needed or not.  Every time we
>> have added real storage, we upped the number of locals.
>>
>> If we were to follow the 1:1 ratio now, we would be increasing ESQA even
>> more. For example: our MAXSPACE=16000M and we have 512 GB of real memory
>> allocated. We currently have 9 3390-27 local page datasets defined for
>> that system.  Our past rule of thumb would suggest that 11 entire 3390-54
>> local page datasets are actually required.
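>>
>> (Rough math, assuming roughly 27 GB usable on a 3390-27 and 54 GB on a
>> 3390-54: the 9 mod-27 locals give us around 250 GB of local page space
>> today, while backing 512 GB of real 1:1 works out to about 10 mod-54
>> volumes, so 11 leaves a little headroom.)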
>>
>> What we aren't sure of is whether we are extremely oversized and can, and
>> should, back our current paging subsystem allocation down some to recover
>> some of the ESQA currently in use. Did I mention we hardly, if ever,
>> page?   I know that's a good thing, and if it ain't broke don't fix it,
>> but the ESQA issue really has us looking at every little savings. How do I
>> get a handle on what really is needed when we increase real storage, to
>> ensure we: 1) don't constrain our virtual storage any further, 2) continue
>> successfully not paging, and 3) understand why and when we will need to
>> increase the sizing of our subsystem?
>>
>>  I'm glad I too was following this thread so that I was able to send our
>> DB2 sysprogs the recommendation that Art Gutowski (THANK YOU!!)
>> forwarded to the group below to possibly help with our problem.
>>
>> I'd appreciate any assistance, or hearing whether anyone has run into
>> this situation in the past. From reading and internet trolling, it looks
>> like not many have a handle on what it takes to get this right, and it
>> is, well, downright confusing.
>>
>> Cheryl
>>
>>
>>
>> -----Original Message-----
>> From: IBM Mainframe Discussion List [mailto:[email protected]] On
>> Behalf Of Jesse 1 Robinson
>> Sent: Thursday, April 13, 2017 3:27 PM
>> To: [email protected]
>> Subject: Re: Paging subsystems in the era of bigass memory
>>
>> This thread did seem to morph into a focus on DB2, but the paging
>> problem for us is not confined to DB2. We have one utility system that
>> was set up years ago to be a 'CMC'. It's still dedicated to 'network
>> stuff', which for some time has narrowed down to CA-TPX, the SNA session
>> manager. Very little else runs there. Certainly no DB2 or CICS.
>> Absolutely no end-user apps. We've sort of ignored this system recently
>> as we turned attention elsewhere. It was last IPLed in January 2016,
>> well over a year ago! It runs great except for this burr under the
>> saddle. The local volumes are all Mod-3. Whatever we decide to do about
>> DB2 will not help here.
>>
>> -  IEE200I 11.29.28 DISPLAY ASM
>> -  TYPE    FULL  STAT
>> -  PLPA    100%  FULL
>> -  COMMON   36%  OK
>> -  LOCAL    53%  OK
>> -  LOCAL    49%  OK
>> -  LOCAL    43%  OK
>>
>> .
>> .
>> J.O.Skip Robinson
>> Southern California Edison Company
>> Electric Dragon Team Paddler
>> SHARE MVS Program Co-Manager
>> 323-715-0595 Mobile
>> 626-543-6132 Office ⇐=== NEW
>> [email protected]
>>
>>
>> -----Original Message-----
>> From: IBM Mainframe Discussion List [mailto:[email protected]] On
>> Behalf Of Mike Schwab
>> Sent: Wednesday, April 12, 2017 8:18 PM
>> To: [email protected]
>> Subject: (External):Re: Paging subsystems in the era of bigass memory
>>
>> Here is an IBM presentation on how to tune z/OS and DB2 memory,
>> including some parameters to set.
>> http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf
>>
>> On Wed, Apr 12, 2017 at 2:55 PM, Art Gutowski <[email protected]>
>> wrote:
>> > Did someone on this thread say DB2??
>> >
>> > We have been experiencing similar AUX storage creep on our DB2
>> systems, particularly during LARGE reorgs (more of a gallop than a
>> creep).  Our DB2 guys did some research, opened an ETR with IBM, and
>> found this relic:
>> >
>> > Q:
>> > "[Why was] set realstorage_management to OFF when that zPARM was
>> introduced in DB2 version 10?
>> >
>> > "Details
>> > "IBM z/OS implemented a Storage Management Design change after DB2 v10
>> was released.
>> > "•      Before the design change, DB2 used KEEPREAL(NO), virtual
>> storage pages were really (physically) freed, high CPU cost if YES
>> > DISCARDDATA KEEPREAL(NO), RSM has to get LPAR level serialization to
>> > manage those pages that are being freed immediately. That added to CPU
>> > usage and also caused some CPU spin at the LPAR level to get that
>> > serialization  -- excerpt from PTF
>> >
>> > "To get around/minimize the impact of the original design shortcomings
>> that was introduced by IBM RSM z/OS,  setting zPARM
>> realstorage_management to OFF, would probably have been prudent on most
>> LPARs.  HP/EDS tried to address this new issue IBM created.
>> >
>> > "IBM create two PTFs and changed the way DB2 and RSM manages the page
>> frames.
>> >
>> > "•      After a design change (now) DB2 uses KEEPREAL(YES), storage is
>> only virtually freed
>> > "If DB2 doesn't tell RSM that it doesn't need a frame, then the frame
>> > will remain backed in real storage in some form. That causes the
>> > growth of real storage and paging and everything that goes with using
>> > up REAL storage. KEEPREAL(YES) allows DB2 to tell RSM that z/OS can
>> > steal the page if it needs it, but DB2 retains ownership of the page,
>> > and it remains backed with real storage. If z/OS needs the page, it
>> > can steal it -- excerpt from PTF
>> >
>> > "V10 APAR PM88804 APAR PM86862 and PM99575"
>> >
>> > So...perhaps check your DSNZPARM and make sure it's coded
>> appropriately for more modern times.  FYI, we are z/OS 2.2 and DB2 11.1,
>> NFM.  We are in the process of rolling out REALSTORAGE_MANAGEMENT=AUTO
>> (the current IBM recommended setting) across our enterprise.
>> >
>> > HTH,
>> > Art Gutowski (with assist from Doug Drain, Steve Laufer and IBM)
>> > General Motors, LLC
>> >
>>
>>
>>
>> --
>> Mike A Schwab, Springfield IL USA
>> Where do Forest Rangers go to get away from it all?
>>
>>



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
