I guess that my SVCDUMP problem sort of triggered this threadlet. The main purpose for 'overflow' is when you have a 'proprietary' pool that is more or less dedicated to one business unit or function. In our case, SVCDUMPs need to be taken to a specific set of volumes so that our RYO dump management process can find and process them. Overflow appeared to be a simple and convenient way to make sure that a verrrry large dump can be taken even though the designated pool cannot contain it. Unfortunately, OVERFLOW immediately sucked up all SVCDUMPs, such that our RYO process never saw any of them. Worst possible 'solution'. We're now looking at larger SVCDUMP volumes even though the extra space is unnecessary 99% of the time.
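The intended fallback behavior can be illustrated with a toy model. This is a hedged sketch, not actual DFSMS volume-selection logic: the pool names, capacities, and the two-pass rule are invented for illustration. The sketch shows overflow used only when the dedicated pool can't hold the request, which is the behavior the SVCDUMP setup was counting on; as noted above, what actually happened was different.

```python
# Simplified model of pool selection with an overflow group.
# Pool names, sizes, and the selection rule are illustrative
# assumptions, not actual DFSMS behavior.

def select_pool(pools, request_gb):
    """Try non-overflow pools first; fall back to overflow pools.

    Each pool is a dict: {"name", "free_gb", "overflow"}.
    Returns the chosen pool's name, or None if nothing fits.
    """
    for want_overflow in (False, True):
        for pool in pools:
            if pool["overflow"] is want_overflow and pool["free_gb"] >= request_gb:
                pool["free_gb"] -= request_gb
                return pool["name"]
    return None

pools = [
    {"name": "SGDUMP", "free_gb": 54,  "overflow": False},  # dedicated dump pool
    {"name": "SGOVFL", "free_gb": 500, "overflow": True},   # shared overflow pool
]

print(select_pool(pools, 40))   # fits the dedicated pool -> SGDUMP
print(select_pool(pools, 40))   # dedicated pool now too small -> SGOVFL
```

If the overflow group instead participates in primary selection, every dump lands there and a pool-scanning dump manager watching only SGDUMP sees nothing, which is the failure described above.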
In the application arena, we have some big-gun folks who have historically been blessed with their own pools just to make sure that they will always get what they need--when they need it. Probably long overdue for a revision in policy, but big guns fire big bullets. OVERFLOW seems to resolve the problem pretty well because it can service multiple proprietary groups, and those folks generally don't care much about volser.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
[email protected]

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of Cafiero, Tobias M.
Sent: Monday, April 24, 2017 1:51 PM
To: [email protected]
Subject: (External): Re: Who Needs Spill/Overflow Pools anymore?

Team,

I agree with the below. There is usually some testing that doesn't get announced, or unexpected workload. It's better to have extra around than to explain why you have x TBs available but applications abended on space.

Tobias Cafiero
Data Resource Management Core Systems Technology Lead System Architect
DTCC New York
Office: 212-855-1117
E-mail: [email protected]
Web: www.dtcc.com

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of Ronald Kristel
Sent: Monday, April 24, 2017 3:43 PM
To: [email protected]
Subject: Re: Who Needs Spill/Overflow Pools anymore?

What about 'in case of emergencies'? (I think) almost all our storage groups are directed to one substantial, big overflow storage pool, _just_ for occurrences where, for whatever reason, something starts to heavily use a specific storage pool. I agree, it's not very efficient to have a massive amount of disks spinning without holding any data for most of their lifetime. However, this has been judged more favorable than facing a period of space abends.
From my experience, I try to over-provision many of the storage pools by a certain amount so that overflowing does not happen on a regular basis. And I monitor/analyse the growth of these storage pools (and act on adding more 'MOD54s' when needed). I think it really depends on what is being stored as well. Some data is not really eligible to be migrated if it's (for example) required for online processing. However, data that does not need to meet any criteria in terms of 'response time' can easily be migrated to save DASD.

I was just wondering, are you using HSM ODM to migrate data?

Ronald Kristel
NL

________________________________
From: IBM Mainframe Discussion List <[email protected]> on behalf of Lizette Koehler <[email protected]>
Sent: Monday, April 24, 2017 6:51:38 PM
To: [email protected]
Subject: Who Needs Spill/Overflow Pools anymore?

This is just a discussion topic. (Thank you, Skip, for making me think about this ;-O)

In the past we needed to have them, as we were tight on storage. But today, is that still the case? What is a good reason to either have or not have SPILL/OVERFLOW pools when we can just add more storage to the pool? I can add a MOD54 to a pool, but that MOD54 is not full until I use up all of the allotted storage for the MOD54 in the storage array. So the storage array can be over-provisioned until I need to go get management to buy more physical storage.

Do I lose anything by having datasets "spill" over to a different pool that may not have the same protection as the one they are in? HSM backup, dump, cleanup processes? Or are there other considerations? Just asking a question or two.

So basically, why use SPILL/OVERFLOW when you can just add DASD, or do lots of migration? We have the automation tool set up to monitor the pools, and if they get too full, start migration on the datasets in that pool. No manual intervention required.

DFHSM and DFSMS do not think like humans when it comes to dataset management.
So there is a need to outmaneuver them to make datasets go where we want.

Lizette Koehler
statistics: A precise and logical method for stating a half-truth inaccurately

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions, send email to [email protected] with the message: INFO IBM-MAIN
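The growth monitoring described in the thread (watch a pool's utilization trend, add a MOD54 or kick off migration before space abends) can be sketched as a small trend estimator. This is a hedged illustration: the daily sampling cadence, the 85% threshold, and the capacity figures (a 3390-54 taken as roughly 54 GB) are assumptions, not a real monitoring product.

```python
# Hedged sketch: given daily used-space samples for one storage pool,
# fit a straight-line trend and estimate how many days remain before
# used space crosses a utilization threshold. All figures illustrative.

def days_until_full(samples_gb, capacity_gb, threshold=0.85):
    """Estimate days until used space crosses threshold * capacity.

    samples_gb: one used-GB reading per day, oldest first (>= 2 readings).
    Returns None if the pool is flat or shrinking.
    """
    n = len(samples_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gb) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, samples_gb)) / denom
    if slope <= 0:
        return None                      # not growing: no action needed
    intercept = mean_y - slope * mean_x
    target = threshold * capacity_gb
    days = (target - intercept) / slope - (n - 1)
    return max(0.0, days)

# Pool growing ~2 GB/day toward the 85% mark of a 540 GB pool
# (ten MOD54-sized volumes): about 16.5 days of headroom left.
usage = [400 + 2 * d for d in range(14)]
print(round(days_until_full(usage, 540), 1))   # -> 16.5
```

In practice the "act" step would be ordering another volume or triggering migration on the pool's datasets; the estimator just tells you how much runway is left before the humans have to intervene.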
