I don't agree that supporting GETMAIN between 2GiB and 4GiB would be elegant, although it might have some practical utility. I also suspect that it would cause more problems than it solved.
-- Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3

________________________________________
From: IBM Mainframe Discussion List <IBM-MAIN@listserv.ua.edu> on behalf of Greg Price <greg.pr...@optusnet.com.au>
Sent: Monday, May 7, 2018 1:32 PM
To: IBM-MAIN@listserv.ua.edu
Subject: Re: GETMAIN LOC=32

[Warning: long post. No world records, but feel free to skip it.]

Paul,

I think your request is unrealistic. I raise the following points - some of which have been mentioned or alluded to by others - for your consideration:

- IMO, IBM will not perceive sufficient ROI from your request to make them consider it. I'm not an insider, but I expect they have ideas which they think are far more lucrative than yours to pursue. Such ideas probably include some which, although they may not have the elegance of yours from an application programmer's point of view, are being requested by companies which pay IBM many more dollars than the likes of you or me.

  My conclusion: IBM will see the potential for incurring cost (at first from the initial development effort, and then ongoing from the potential increase in PMRs where such a facility is used) without any obvious resultant increase in revenue.

- Virtual storage below the 2GB bar is generally managed down to doubleword granularity. Whether the macros used to request or free it are called GETMAIN and FREEMAIN, or STORAGE, it is the same set of control blocks that is updated to keep account of it. When managing storage at the doubleword level, a significant fraction of the total storage consumed can end up being used just to track the storage consumed.

- When scaling up storage to the 64-bit address space size, managing storage at a doubleword atom size is simply not a wise choice in terms of overhead. For this reason, virtual storage above the 2GB bar is managed in chunks of 1MB.
  My conclusion: Applications cannot natively get the conventional GETMAIN/FREEMAIN storage granularity in the 2GB-4GB address range. You would have to add some intermediate storage administration layer - which may not even be that difficult to do, as long as your 32-bit program "compiler" generated code to call it for storage management requests.

- MVS private storage administration has "always" relied on user applications building storage usage from the bottom of the private area up (the "region"), while the system's use of private storage starts at the top and grows downward. When the two meet, private storage is exhausted and the job crashes. This process occurs both below and above the 16MB line.

- For the ATL, or extended, private area, the "top" is the underside of the 2GB line, where important control blocks reside, possibly including page and segment tables. (This was true for XA; dunno if it is still true for z/OS, although what else is using all those megabytes reported by IEF032I (which used to be IEF374I)??)

  My conclusion: Without a radical re-engineering of the bottom-up-for-applications and top-down-for-system paradigm, ELSQA up to the 2GB bar is immovable, and so the prospective 32-bit application will never be able to acquire a single 3GB chunk of storage entirely below virtual address 4GB.

There were enough hassles flowing from latent bugs exposed by the VSM (GETMAIN/FREEMAIN if you prefer) logic change circa z/OS 1.9 (or 1.10?) without adding some sort of AM32 to the mix. That is why I think the PMR count could rise quite a bit, giving a potential risk which is easy to avoid - simply by not making such a change. Lots of subtle assumptions about the behaviour of the OS lie lurking in application code that is many years old, I think. Sure, the bugs shouldn't be there, but why risk exposing them?
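The "intermediate storage administration layer" mentioned above could look roughly like the toy sketch below: obtain storage in 1MB chunks (here simulated with malloc; on z/OS the real call would be something else entirely) and carve doubleword-aligned pieces out of them. All names (`suballoc`, `sub_getmain`) are hypothetical, and this bump-style carver never frees individual pieces - a real layer would need free lists and much more.

```c
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_SIZE (1UL << 20)   /* 1MB, the above-the-bar granule   */

struct suballoc {
    unsigned char *chunk;        /* current 1MB chunk, or NULL       */
    size_t         used;         /* bytes already handed out from it */
};

/* Round a request up to doubleword (8-byte) granularity. */
static size_t round_dw(size_t n) { return (n + 7) & ~(size_t)7; }

/* Carve a doubleword-granular piece out of the current chunk,
 * obtaining a fresh chunk when the current one cannot satisfy the
 * request.  Abandoned tail space in old chunks is simply leaked -
 * this is a sketch of the layering, not of a complete allocator. */
void *sub_getmain(struct suballoc *s, size_t len)
{
    len = round_dw(len);
    if (len > CHUNK_SIZE)
        return NULL;             /* oversize: would need its own path */
    if (s->chunk == NULL || s->used + len > CHUNK_SIZE) {
        s->chunk = malloc(CHUNK_SIZE);  /* stand-in for a 1MB obtain */
        if (s->chunk == NULL)
            return NULL;
        s->used = 0;
    }
    void *p = s->chunk + s->used;
    s->used += len;
    return p;
}
```

The interesting property is exactly the one Greg raises: the application sees doubleword granularity, while the operating system only ever sees 1MB requests - provided the generated code routes every storage request through this layer.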
Overall, while I too like elegant programming models, at the end of the day IBM and other vendors have to support their customers, and on this platform an important part of that is compatibility. I certainly sympathise with the idea that there's an extra 2GB of storage for "existing" programs there for the taking, but in practice I don't think it really is there in a z/OS environment.

And this opinion is from a bloke who still thinks that if the System/360 CCW designer had not thought that a spare halfword would actually prove more useful than two separate spare bytes, then the high byte of the address word would have been available for XA to provide immediate AM31 support for I/O macros in DFP V2.

But compatibility is important for vendors. I happen to know of an IBM product (not in the z/OS package, but acquired from an ISV and running on z/OS) which uses a routine with logic unchanged since 1967. I think z/OS has diverged too far from its MVS/370 predecessor, where you could, perhaps, have successfully implemented your idea.

And just to opine on another point, I will predict that we will not see AM64 support for QSAM/BSAM/BPAM I/O macros within 10 years from now. (Gee, now I hope a DFSMS team is not currently working on this for the next release... :/ )

Cheers,
Greg

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN