[email protected] (John McKown) writes:
> Not exactly correct. OS/VS1 was a single large address space. That one
> address space was divided up into a _fixed_ number of _fixed sized_
> partitions (not regions). That is, if you had a step which required, say,
> 128M to run, you had to be sure it was in a partition which was at least
> 128M. The size of a partition was set by the sysprog or, IIRC, via an
> operator command. OS/VS2 release 1 was also called SVS (Single Virtual
> Storage). It had a single 24 bit addressable space which had a number of
> "regions" defined. Like in MVT, the size of a region was variable and
> basically it was "GETMAIN'd" when the job (or step - I forget) started. One
> problem that could exist in SVS was that a long running job might be
> GETMAINed while some smaller jobs were running. The long running job's
> storage would be a "sandbar" which could prevent other large jobs from
> running due to lack of contiguous storage. That's why many shops would "shut
> down" batch in order to "start up" all the long running tasks, such as
> CICS, IMS, etc., so that those STCs would not turn into storage sandbars.

SVS prototype was initially developed on 360/67 ... basically not too
different from MVT running in a 16mbyte virtual machine ... SVS built tables
for a single 16mbyte virtual address space ... plus a little bit of code to
handle the very low rate of paging. The largest amount of code was borrowing
CCWTRANS from CP/67 to hack into the side of EXCP for building shadow
channel programs.

CP/67 had operating systems running in virtual machines building channel
programs with virtual addresses ... for which CP67's CCWTRANS had to build
a "shadow" channel program (same as the original but with real
addresses). OS/VS2, both SVS and MVS, had the same problem with MVT
applications building their own channel programs, but now the addresses
were virtual ... and then executing EXCP/SVC0. EXCP was now faced with
making a copy of the application's channel program, replacing the virtual
addresses with real addresses.
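the shadow channel program idea can be sketched roughly as follows (a toy Python model, not the actual CCWTRANS/EXCP code; real S/370 CCWs are 8-byte doublewords, and the page table and addresses here are invented): copy each CCW, translate the virtual data address through the page table, and split any CCW whose buffer crosses into a discontiguous real page, joining the pieces with data chaining:

```python
PAGE = 4096          # 370 page size (2048 was also possible)

# toy page table: virtual page number -> real page number (invented)
page_table = {0: 7, 1: 3, 2: 8}

def v2r(vaddr):
    """Translate a virtual address to a real address."""
    return page_table[vaddr // PAGE] * PAGE + vaddr % PAGE

CD = 0x80            # "chain data" flag: continue data with next CCW

def shadow_ccw(op, vaddr, count, flags=0):
    """Build the shadow copy of one CCW: same operation, real addresses.
    A buffer that crosses a page boundary may be discontiguous in real
    storage, so it is split into data-chained pieces."""
    shadow = []
    while count:
        piece = min(count, PAGE - vaddr % PAGE)   # bytes left in this page
        last = (piece == count)
        shadow.append((op, v2r(vaddr), piece, flags if last else flags | CD))
        vaddr += piece
        count -= piece
    return shadow

# READ (0x02) of 6000 bytes at virtual address 100: crosses from virtual
# page 0 (real page 7) into virtual page 1 (real page 3), so the shadow
# program is two data-chained CCWs instead of one
shadow = shadow_ccw(0x02, 100, 6000)
```

(a real translator also has to pin the pages for the duration of the I/O and retranslate if the channel program is self-modifying ... all omitted here.)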

original justification for making all 370s virtual memory came from the
really bad storage management in MVT ... resulting in region sizes
typically having to be four times larger than the storage actually
used ... this severely restricted the number of regions that could be
defined on a typical one megabyte 370/165. Moving MVT to virtual memory
(aka SVS) meant that it could get four times as many regions while doing
little or no paging (and this was even w/o the long running jobs which
severely aggravated the storage management problem).

archived post with history from somebody in POK at the time, who was
involved in the decision
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

MVS also turned out to have a horrible problem with the OS/360 API pointer
passing convention ... as a result it started out with an 8mbyte image
of the MVS kernel in each application 16mbyte virtual address space (so
when kernel code got the API pointer, it could directly access the
parameter fields in the application address area). However, MVT had a lot
of subsystems (outside the kernel) that needed to access application
parameters. For this they created the one mbyte common segment area
... storage that appeared in every application 16mbyte virtual address
space ... storage could be allocated in the CSA for parameters that both
the application and a subsystem, running in a different virtual address
space, could access (note: max application area was now 7mbytes out of
16mbytes).
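why the CSA had to appear at the same virtual address in every address space can be illustrated with a toy model (hypothetical Python, invented layout and addresses): a pointer passed in the OS/360 convention is just a virtual address, so it only resolves to the same storage in the subsystem's address space if both spaces map the CSA identically:

```python
MB = 1 << 20

kernel = {}   # 8mbyte MVS kernel image, common to all address spaces
csa    = {}   # 1mbyte common segment area, also common to all

def new_address_space():
    """Each 16mbyte address space maps the kernel and CSA at the SAME
    virtual addresses; only the private (application) area differs.
    Entries: (virtual origin, length, backing storage) -- invented layout."""
    return {
        'kernel':  (0,       8 * MB, kernel),  # kernel image, common
        'private': (8 * MB,  7 * MB, {}),      # per-address-space storage
        'csa':     (15 * MB, 1 * MB, csa),     # common segment area
    }

app    = new_address_space()
subsys = new_address_space()

# application builds a parameter list in the CSA and passes the pointer
# (just a virtual offset) to a subsystem in a DIFFERENT address space ...
app['csa'][2][0x100] = "parameter list"

# ... which resolves the same virtual address to the same storage,
# because the CSA mapping is identical in every address space
assert subsys['csa'][2][0x100] == "parameter list"
```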

However, the size requirement for CSA is somewhat proportional to the
number of subsystems and activity ... by 3033 ... CSA had become the
"common system area" (rather than "common segment area") and large
installations were having problems restricting it to 5 or 6 mbytes (leaving
2-3mbytes out of 16mbytes for applications) and it was threatening to
grow to 8mbytes ... leaving zero bytes for applications.

After the failure of FS (original OS/VS2 MVS was supposed to be the "glide
path" for the 370 replacement, totally different from 370 ... see above
archived post), POK kicked off 3033 and 3081 (370/xa) in parallel. 370/xa
was to address a large number of MVS problems ... one was a new hardware
mechanism for applications directly calling subsystems (w/o having to
execute kernel code) along with "access register" architecture that
provided the ability for subsystems to access storage in a different
application virtual address space.

However, the CSA/API problem was getting so bad on 3033 (before 370/xa)
that a subset of access registers was retrofitted to 3033 as "dual
address space" mode (the person responsible left not long after for HP,
working on their snake/risc and then later was one of the primary
architects for Itanium, including a lot of enterprise integrity
features).

Endicott's (low/mid range 370s) equivalent to 370xa was "e-architecture"
... since DOS/VS & VS1 had a single virtual address space,
"e-architecture" moved the virtual address space table into
microcode and added new instructions to update the virtual memory
table entries (now in microcode). Endicott's equivalent for 3033 (after
the FS failure) was the ECPS microcode assist ... parts of kernel
pathlengths were moved into microcode where they ran ten times faster. I
was told to select the 6kbytes of most executed kernel code ... archived
post with the result
http://www.garlic.com/~lynn/94.html#21

the post shows the selected 6kbytes of kernel code accounting for 79.55% of kernel cpu.
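the selection itself is essentially a greedy pick over a kernel profile ... sort the routines by cpu time and take the hottest ones that still fit the microcode budget. A sketch, with invented routine names and numbers (NOT the actual VM370 profile data from the archived post):

```python
# hypothetical kernel profile: (routine, bytes of code, % of kernel cpu)
# -- names and numbers invented for illustration
profile = [
    ("DISPATCH",  1200, 20.0),
    ("FREE/FRET", 1600, 25.0),
    ("PAGEFAULT", 2000, 18.0),
    ("UNTRANS",   1400, 12.0),
    ("VIRTTIMER",  800,  5.0),
    ("SPOOLIO",   3000,  4.0),
]

BUDGET = 6 * 1024    # 6kbytes of available microcode space

def select(profile, budget):
    """Greedily pick the hottest routines that still fit the budget;
    returns (routines picked, bytes used, % of kernel cpu covered)."""
    picked, used, cpu = [], 0, 0.0
    for name, size, pct in sorted(profile, key=lambda r: -r[2]):
        if used + size <= budget:
            picked.append(name)
            used += size
            cpu += pct
    return picked, used, cpu

picked, used, cpu = select(profile, BUDGET)
```

a small fraction of the kernel by bytes covers a large fraction of the cpu, which is why a 6kbyte selection could cover 79.55% of kernel cpu.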
Along with that they did VM370/VS1 handshaking ... the virtual machine
size and the VS1 virtual memory size were identical ... so all paging
was moved into VM370 ... and a handshaking interface between VM370&VS1 so
VS1 could switch to a different application when there was a page fault.
I had also heavily optimized VM370 paging algorithms and other
pathlengths ... so any VS1 that normally had any amount of paging
... would run faster under VM370 (with ECPS) than on bare hardware.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
