I'm sure I don't count as "like Kathy"* :-) but... The Contiguous Slot Allocation Algorithm is still in place, so I also tout the "30%" - but...
1) I say this is not a "falling off a cliff" thing but hazard it to be more gradual than that - so a dynamic.

2) I also suggest people aim for free paging space of generally 1.5x the LPAR's memory.

Item 2 is a ROT I made up myself. :-) It's motivated by the need to have something to dump into - and it leans in the same direction as the Contiguous Slot Allocation Algorithm. I'm not sure the 1.5x number is right...

I consider both the 30% and my 1.5x as STARTING POINTS. And I emphasise good paging subsystem design and adequate memory provision - even now.

So I'm really glad we're having this conversation.

Martin

Martin Packer, Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM
+44-7802-245-584
email: [email protected]
Twitter / Facebook IDs: MartinPacker
Blog: https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker

* Kathy's the real expert to defer to

From: Mark Zelden <[email protected]>
To: [email protected]
Date: 01/02/2012 15:48
Subject: Re: Very Large Page Datasets (was ASM and HiperPAV)
Sent by: IBM Mainframe Discussion List <[email protected]>

On Tue, 31 Jan 2012 22:49:53 -0600, Barbara Nitz <[email protected]> wrote:

>>Writing to contiguous slots and over-allocation is mentioned, but unless I
>>missed it the "old" ROT (and health check) of not having more than 30%
>>of the slots allocated is not specifically addressed. Certainly with 4K
>>pages (for the most part) and 3390-27 (or bigger) that 30% ROT doesn't
>>apply anymore? 50% of a mod-27 is still a helluva lot of free slots.
>
>I think it still applies. My understanding has always been that the 30% usage
>(after which paging effectiveness drastically drops) applies to the algorithm
>used on the in-storage control blocks to pick the next free slot in a page
>data set. Unless that algorithm was redesigned, 30% of 44.9GB per page
>dataset is what you should not exceed (just as the health check says) in
>AUX usage.
>Redesign of that is IMHO unlikely, just as allowing more than 2 simultaneous
>I/Os to a page data set would require (an unlikely) redesign.

That sounds right as far as the algorithm goes, but I thought the paging
effectiveness was related to the likelihood of not being able to find
contiguous slots for group page-outs after the 30% usage (based on "old"
technology). So if I have 5 3390-27 locals and they are all equally used at
50%, the algorithms (CPU usage, not I/O) are going to pick one of them, then
do the page-outs. That paging will find contiguous slots and should be
efficient.

BTW, this is just an example; we still try to keep our 3390-27 local usage at
30%, just like we always did with smaller local page datasets in the past.

I wonder what, if any, studies on this have been done in the lab. It would be
nice if an IBM performance expert like Kathy Walsh could weigh in.

Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
mailto:[email protected]
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://expertanswercenter.techtarget.com/

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

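[Editor's note: Martin's "free paging space of 1.5x the LPAR's memory" ROT is, as he says himself, a made-up starting point rather than a rule. A minimal sketch of the check it implies, with purely hypothetical example numbers:]

```python
# Sketch of the "free paging space >= 1.5x LPAR memory" ROT from the
# thread. The LPAR sizes below are made-up examples, not real configs,
# and the 1.5 factor is Martin's self-described starting point.

def paging_space_ok(lpar_memory_gb: float, free_paging_gb: float,
                    factor: float = 1.5) -> bool:
    """True if free paging space meets the 1.5x-memory starting point."""
    return free_paging_gb >= factor * lpar_memory_gb


# Hypothetical 64GB LPAR: the ROT asks for at least 96GB free.
print(paging_space_ok(64, 100))  # 100 >= 96 -> True
print(paging_space_ok(64, 80))   # 80 < 96  -> False
```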
