Steve Gentry <[EMAIL PROTECTED]> writes:
> I think the shared/saved/NSS/DCSS segments is a dead issue.
> We have also discussed the idea of using Virtual Disk and mapping a
> minidisk to extended storage. XC and all that entails.
> We still have to keep in mind how each of these methods would be
> implemented and how much the change would impact the program(s).
> We're trying to make minimal changes to existing programs. i.e., The logic
> for table processing is already in the programs.  Where the table resides,
> either above the line or below it are inconsequential.  Changing the
> program logic to use mdisks, be it v-disk or dataspaces requires a lot
> more program change.

the original superset of DCSS I had started on CP/67 and referred to as
virtual memory management (VMM) ... this included handling CMS disk i/o
as page-mapped operations (as opposed to emulated real i/o with CCWs,
locking & unlocking pages, etc) and various enhancements for sharing
pages. recent posting
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1

part of the idea for my vmm work came from having been exposed to some
tss/360 at the university ... and part from some of the multics stuff
going on on the 5th floor ... the science center and cp67 stuff was on
the 4th floor
http://www.garlic.com/~lynn/subtopic.html#545tech

the original r/o cms shared pages done on cp/67 were at the page level
(not segment) and store protect was handled by fiddling the storage
keys. cms didn't use storage keys ... so a virtual machine with shared
pages had its psw and storage keys fiddled; shared pages always had
storage keys set to zero; if the virtual machine attempted to set the
storage key for a shared page, it was ignored; if the virtual machine
attempted to set a zero storage key for a non-shared page, it was redone
to x'F'; and if the virtual machine attempted to load a psw with a zero
protect key, it was reset to x'F'.
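
a rough sketch of those fiddling rules (purely illustrative python with
made-up names ... not anything resembling actual cp/67 code):

SHARED_KEY = 0x0
FORCED_KEY = 0xF

def set_storage_key(page, requested_key, shared_pages, real_keys):
    # simulate the cp kernel intercepting a guest SSK (set storage key)
    if page in shared_pages:
        return                        # shared pages always stay key 0; request ignored
    if requested_key == SHARED_KEY:
        real_keys[page] = FORCED_KEY  # guest asked for key 0 on a non-shared page: force x'F'
    else:
        real_keys[page] = requested_key

def load_psw_key(psw_key):
    # simulate the cp kernel intercepting a guest LPSW
    return FORCED_KEY if psw_key == SHARED_KEY else psw_key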

for 370 virtual memory, there were all sorts of new hardware features
defined, including r/o segment protect and selective invalidate
instructions.  for r/o segment protect, the segment table entry (in
the virtual address space table) for a specific segment could have the
r/o bit turned on.  the segment table entry then pointed to a set of
pages in a page table. the kernel could set up address space tables so
that some of the page tables were the same across multiple address
spaces (aka shared). however, whether a particular virtual address
space had read/write or read/only access to a shared page table was
set in that address space's (segment) table (aka you could have a mix
of address spaces with r/o and r/w sharing of the same page table).
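
a toy illustration (python, made-up names ... not the actual hardware
table formats) of one page table being shared by several address spaces
with the r/o protect bit carried per address space:

class PageTable:
    def __init__(self, pages):
        self.pages = pages            # shared by any segment table entry pointing here

class SegmentTableEntry:
    def __init__(self, page_table, read_only):
        self.page_table = page_table
        self.read_only = read_only    # per address space, even when page_table is shared

shared_pt = PageTable(pages=["page0", "page1"])

# two address spaces sharing the same page table: one r/o, one r/w
addr_space_a = [SegmentTableEntry(shared_pt, read_only=True)]
addr_space_b = [SegmentTableEntry(shared_pt, read_only=False)]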

for the cms morph to 370 ... the shared pages were reorganized into a
segment and the plan was to use the new 370 r/o segment protect
facility. however, before 370 virtual memory was announced, an issue
arose with 370/165 hardware support for virtual memory. at an
escalation meeting in pok, the 165 engineers said that it would take
an extra six months to implement segment protect and selective
invalidates.  the vs2 system people said that they had no need for
segment protect and that their system would never do more than five
page i/os per second; they could do batch page steal once a second
for that many pages (so the difference between doing a global PTLB once
a second and doing five individual IPTEs once a second was negligible).
it was only vm370 that called for segment protect and had high paging
rates. the resolution was to drop the additional features from the 370
announcement so that virtual memory support could ship six months earlier.

this left cms in a bind ... the mechanism they were planning on using
for protecting shared segments disappeared from the machines. they
were forced to punt and return to the cp/67 mechanism of protecting
individual shared pages.

not too long later, the virtual machine assist (VMA) microcode assist was
introduced ... which included support for loading new PSWs in the
hardware (rather than taking a privilege interrupt into the cp kernel
and emulating the instruction). this and other VMA features
represented a thruput enhancement (by doing some of the virtual machine
stuff directly in hardware rather than having to interrupt into the cp
kernel for everything). for cms, the problem was that the VMA
microcode assist didn't know about the fiddling rules for protect keys
and so VMA couldn't be turned on for CMS virtual machines running with
shared pages.

so coming up to vm/370 release 3, there was some work done to allow
CMS virtual machines to run with VMA. basically, since there was no way
of actually protecting the shared pages ... the paradigm changed to
allow shared pages to be modified ... but catching such modifications
before any virtual machine other than the one making the change saw
them. basically on every task switch ... the dispatcher would check to
see if it was about to stop running a virtual machine with shared
pages. it would then scan that virtual machine's virtual pages for any
shared pages that had been modified. if the dispatcher found any, it
would unshare the shared system (for the virtual machine that made the
modifications) and update the shared tables to indicate that the
recently modified page(s) weren't in real memory and would have to be
paged in from disk.
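
a rough sketch of that dispatcher check (purely illustrative python with
made-up names ... not the actual vm/370 release 3 dispatcher code):

class SharedPage:
    def __init__(self, name):
        self.name = name
        self.changed = False          # stand-in for the hardware change bit
        self.in_storage = True

class VirtualMachine:
    def __init__(self, shared_pages):
        self.shared_pages = list(shared_pages)
        self.private_pages = []

    def unshare(self, page):
        # give this machine its own private copy of the modified page
        self.shared_pages.remove(page)
        self.private_pages.append(page)

def task_switch(outgoing_vm):
    # before switching away, scan the outgoing machine's shared pages;
    # any it modified are unshared for that machine and marked
    # not-in-storage so other machines page the original back in from disk
    for page in list(outgoing_vm.shared_pages):
        if page.changed:
            outgoing_vm.unshare(page)
            page.in_storage = False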

at this time, normal cms had a single shared segment with 16 shared
pages. on the avg., running a cms workload with VMA saved X percent cpu
... and checking 16 shared pages on every task switch cost Y percent
cpu ... where Y was normally less than X ... yielding an overall
thruput improvement of X-Y.

however, for actual vm/370 release 3, it was decided to also pick up a
subset of my VMM changes and release them as DCSS. now, part of the
changes picked up was that I had modified some amount of additional
CMS code to make it run in a shared segment (rewriting part of the
standard cms editor to make the code shareable, as well as some amount
of other code). so what shipped in release 3 was the DCSS changes
where CMS now normally ran with 32 shared pages and with VMA turned
on. however, doubling the number of shared pages then doubled the
avg. checking overhead to 2*Y ... and while nominally Y<X, 2*Y was
nominally greater than X.
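
purely illustrative arithmetic (the X and Y values below are made up,
just to show how doubling the checking cost can flip the net):

X = 5.0   # hypothetical percent cpu saved by running with VMA
Y = 3.0   # hypothetical percent cpu spent checking 16 shared pages per task switch

net_16_pages = X - Y       # +2.0 ... checking cost less than the VMA savings
net_32_pages = X - 2 * Y   # -1.0 ... checking 32 pages now exceeds the VMA savings
print(net_16_pages, net_32_pages)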

this was even further aggravated with the shipping of multiprocessor
support. The checking gimmick was predicated on treating the shared
pages as private while a specific virtual machine was actually running
... and then catching and fixing any changes before switching to a
different virtual machine. with 2-way smp support, you could now have
two virtual machines running concurrently ... simultaneously accessing
the same shared pages (violating the principles of the gimmick).  so
to perpetuate the fixup gimmick, real processor-specific shared
segments/pages were defined. now, in addition to checking whether the
previous virtual machine had modified any shared pages ... before you
went to run a new virtual machine, you had to check whether the new
virtual machine had its virtual memory table entries pointing to the
shared segments specific to the processor it was about to run on.
Now, not only was 2*Y>X (i.e. overhead greater than savings from using
VMA), but the processor-specific page table pointer fiddling really
blew it out of the water.
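
a rough sketch of that extra smp-era check (again purely illustrative
python with made-up names):

def dispatch(incoming_vm_segments, cpu, per_cpu_shared_page_tables):
    # incoming_vm_segments: dict of shared segment name -> page table object
    # per_cpu_shared_page_tables: dict of cpu -> (segment name -> that cpu's copy)
    # before running the incoming machine on this processor, make sure its
    # segment table entries point at this processor's copy of the shared segments
    for name, page_table in incoming_vm_segments.items():
        wanted = per_cpu_shared_page_tables[cpu][name]
        if page_table is not wanted:
            incoming_vm_segments[name] = wanted   # repoint to this processor's copy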

the posting referenced above
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1

i had made the full vmm changes available internally before vm370 release 3,
and they were running at places like HONE
http://www.garlic.com/~lynn/subtopic.html#hone

HONE was using it along with extensive use of the APL interpreter,
running with something like 64 shared pages. the release 3 gimmick for
using VMA would have meant that HONE had to check nearly 100 shared
pages on every task switch. furthermore, since their workload was
heavily apl interpreter execution bound ... VMA assist provided them
negligible thruput improvement.

other random past posts about dcss, dmksnt, loadsys, etc:
http://www.garlic.com/~lynn/99.html#149 OS/360 (and descendents) VM system?
http://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
http://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2002o.html#25 Early computer games
http://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
http://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
http://www.garlic.com/~lynn/2003f.html#32 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
http://www.garlic.com/~lynn/2003n.html#5 "Personal Computer" Re: Why haven't the email bobmers been shut down
http://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
http://www.garlic.com/~lynn/2003o.html#42 misc. dmksnt
http://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
http://www.garlic.com/~lynn/2004d.html#5 IBM 360 memory
http://www.garlic.com/~lynn/2004f.html#23 command line switches [Re: [REALLY OT!] Overuse of symbolic
http://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004o.html#11 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
http://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
http://www.garlic.com/~lynn/2005b.html#8 Relocating application architecture and compiler support
http://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005t.html#39 FULIST


random past posts mentioning the 165 problem with implementing
the full 370 virtual memory architecture:
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc
http://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
http://www.garlic.com/~lynn/2001c.html#7 LINUS for S/390
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001l.html#53 mainframe question
http://www.garlic.com/~lynn/2002.html#31 index searching
http://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
http://www.garlic.com/~lynn/2002n.html#10 Coherent TLBs
http://www.garlic.com/~lynn/2002n.html#23 Tweaking old computers?
http://www.garlic.com/~lynn/2003g.html#19 Multiple layers of virtual address translation
http://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box??
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
http://www.garlic.com/~lynn/2005b.html#62 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005f.html#1 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005r.html#51 winscape?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
