[email protected] (John McKown) writes:
> Sounds a bit like a z/VM DCSS.

re:
http://www.garlic.com/~lynn/2014i.html#85 z/OS physical memory usage with 
multiple copies of same load module at different virtual addresses

an issue with DCSS (and each virtual address space not being able to
dynamically select virtual address) was that shared things had to be
given a pre-assigned, system-wide, globally unique virtual address at
the time they were created. Out of the 24-bit, 16mbyte virtual address
space, that typically meant an installation had around 15mbytes to play
with. However, as more things were adapted to running shared, the total
size of shared things came to exceed 15mbytes, and it was no longer
possible to have pre-assigned, system-wide, globally unique virtual
addresses for everything. So an attempt was made to carefully choose
addresses that minimized conflicts for users that needed combinations of
different shared objects mapped concurrently. When that didn't work,
multiple versions of the same thing were predefined in DCSS at different
virtual addresses, with some possibility of satisfying the community of
users having requirements for different combinations of shared
objects. However, as use of the sharing technology increased, users
could have shared objects mapped but with very little actual sharing
going on, since different users were using different versions at
different virtual addresses.

my original implementation allowed any file object to be shared, w/o
system-privileged DCSS predefinition (DCSS definition originally
required a kernel build and reboot), and each virtual address space
could dynamically select its own virtual address for each shared object.

the DCSS problem was analogous to the MVS common segment area issue
(except VM370 had the fallback that a user could still load a non-shared
version at a unique address). The OS/360 heavy pointer-passing API had a
major problem moving from os/vs2 SVS to os/vs2 MVS, with everything in
its own virtual address space. In order to access API parameters passed
by pointer, every application virtual address space had an 8mbyte image
of the MVS kernel mapped into its 16mbyte virtual address
space. However, MVS subsystems were now also in their own virtual
address spaces ... so the common segment area (1mbyte) was created and
mapped into every virtual address space (leaving 7mbytes for application
use), so that an application could obtain CSA storage for stashing
parameters and then call the subsystem (with a pointer to the
parameters). As large installations increased the number of subsystems
and concurrent applications, the common segment area size kept
increasing and morphed into the common system area. By the end of the
3033 era, some large installations were faced with CSA growing to larger
than 6mbytes, leaving only 1mbyte (or zero) for application use.

exceeding available virtual space also came up in internal chip design,
which had a 7mbyte fortran program running on numerous large mvs systems
... systems that were carefully crafted to keep CSA to 1mbyte. However,
enhancements were constantly threatening to grow the fortran program to
greater than 7mbytes, and all these mega-MVS systems were then faced
with conversion to vm370/cms ... vm370/cms could be operated leaving all
but 128kbytes of the 16mbytes available for application execution.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN