On 07/14/2009 01:00 AM, IBMVM automatic digest system wrote:
Version 3 was the first CMS that could not be IPLled on the iron, I
think.  Someone should ask Lynn Wheeler.

re:
http://www.garlic.com/~lynn/2009j.html#67 DCSS
http://www.garlic.com/~lynn/2009j.html#68 DCSS addenda

CMS started out with 256kbyte (virtual) machine operation (on real 360/40).

the original virtual machine system (at the science center) was cp/40, done on a 
(256kbyte) 360/40 that
had special hardware modifications to support virtual memory.

while cp/40 was being developed ... cms was also being developed ... running on 
the
"bare" 360/40 in non-virtual memory mode.

when the science center replaced the 360/40 with 360/67 (standard product, 
basically
360/65 with hardware modifications to support virtual memory) ... cp40 morphed 
into
cp67

when 3 people from the science center came out to install cp67 at the univ the 
last week
of jan68 ... all source was kept on cards and loaded into os/360 and assembled 
under os/360,
producing physical text decks ... which were combined together in a card tray 
with
a modified BPS "loader" in the front. The physical cp67 deck was loaded into the 
2540 card
reader and IPL'ed. After the BPS "loader" got the CP67 "TXT" decks into memory ... it 
would
transfer to the last program ... CPINIT (in vm370, DMKCPI) ... which would write
the core image to a specified disk location and write the IPL CCW sequence to the 
IPL disk.

Distribution was os/360 tapes.

CMS would run in a 256kbyte virtual machine or on the "bare" hardware.

Part of the issue was that both CP40 (and then CP67) and CMS were being developed in
parallel ... with the original source compile, etc ... all being done on os/360.

Sometime by summer 68, the science center had moved to having source as CMS files
and assembling on CMS to produce "TXT" decks. Physical "TXT" decks were still being
kept in a card tray and physically IPL'ed to build a new IPL'able kernel.

By that summer, I had done a lot of kernel CP67 pathlength work ... especially
targeted for OS/360 in CP67 virtual machine. Old post with part of presentation
I gave at the Aug68 SHARE meeting (held in Boston) ... lots of changes
were picked up by the science center for standard cp67 and shipped
http://www.garlic.com/~lynn/94.html# CP/67 & OS MFT14

I was also doing very carefully crafted OS/360 stage2 sysgens. I originally 
would get
the stage2 card deck output from the OS/360 stage1 assembly ... and reorder
all the statements to achieve carefully ordered placement of the resulting
generated system files on disk (to optimize arm seek operation).
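The payoff of that reordering can be sketched with a toy seek model. The dataset names, cylinder numbers, and reference frequencies below are all hypothetical illustrations, not the actual sysgen layout:

```python
import random

def avg_seek(placement, trace):
    """Average arm movement in cylinders for an access trace.
    placement: dataset name -> starting cylinder."""
    pos = 0
    total = 0
    for ds in trace:
        total += abs(placement[ds] - pos)
        pos = placement[ds]
    return total / len(trace)

# Hypothetical system datasets with relative reference frequency.
freq = {"SYS1.LINKLIB": 50, "SYS1.SVCLIB": 30, "SYS1.MACLIB": 5,
        "SYS1.PROCLIB": 10, "SYS1.PARMLIB": 5}
trace = [ds for ds, n in freq.items() for _ in range(n)]
random.Random(68).shuffle(trace)   # interleave the references

# Naive placement: whatever order stage2 happened to emit (here, alphabetical).
naive = {ds: i * 40 for i, ds in enumerate(sorted(freq))}
# Reordered placement: hottest datasets adjacent, in frequency order.
tuned = {ds: i * 40 for i, ds in
         enumerate(sorted(freq, key=freq.get, reverse=True))}

assert avg_seek(tuned, trace) < avg_seek(naive, trace)
```

With the two most-referenced datasets next to each other instead of at opposite ends of the pack, the average arm travel drops sharply.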

Later in 68, I looked at doing some pathlength enhancements for CMS environment
(as well as starting on dynamic adaptive resource management, new page 
replacement
algorithms, new scheduling algorithms and other stuff). Lots of CMS
operation was simplified (compared to OS/360) ... so the major (cp67) pathlength
overhead was in CMS disk I/O channel program (CCW) translation. I originally
defined a "new" CCW op-code that in a single CCW specified all the parameters
for the seek/search/tic/read/write operation ... drastically reducing the
channel program translation overhead. I also noticed that CMS didn't do
any multitasking ... it just did a serialized wait for the disk i/o to complete.
So I gave this new CCW op-code serialized semantics (i.e. it actually
returned to the virtual SIO after the I/O had completed, with CC=1, CSW stored).
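The translation saving can be sketched as follows. The CKD command codes (seek 0x07, search ID equal 0x31, TIC 0x08, read data 0x06) are the real S/360 values; the combined op-code value, the parameter layout, and the toy "translate" cost model are assumptions for illustration:

```python
from collections import namedtuple

# One CCW: command code, data address, flags, byte count.
CCW = namedtuple("CCW", "cmd addr flags count")

CC_CHAIN = 0x40          # command-chaining flag

# Real S/360 CKD channel command codes.
SEEK, SEARCH_ID_EQ, TIC, READ_DATA = 0x07, 0x31, 0x08, 0x06
# Hypothetical op-code for the combined seek/search/tic/read CCW.
X_DISKIO = 0xFF

def standard_read(seek_arg, rec_id, buf, length):
    """Classic 4-CCW chain for reading one disk record."""
    return [
        CCW(SEEK,         seek_arg, CC_CHAIN, 6),
        CCW(SEARCH_ID_EQ, rec_id,   CC_CHAIN, 5),
        CCW(TIC,          1,        0,        0),  # loop back to the search
        CCW(READ_DATA,    buf,      0,        length),
    ]

def combined_read(parm_list, length):
    """One hypothetical CCW carrying all the parameters."""
    return [CCW(X_DISKIO, parm_list, 0, length)]

def translate(chain):
    """Toy stand-in for CP's channel-program translation: every virtual
    CCW must be copied, have its address relocated, its pages locked,
    etc. Returns how many CCWs CP had to process."""
    return sum(1 for _ in chain)

assert translate(standard_read(0x1000, 0x1008, 0x2000, 4096)) == 4
assert translate(combined_read(0x1000, 4096)) == 1
```

Four CCWs to copy, relocate, and pin per disk record becomes one, which is where the drastic reduction in translation overhead comes from.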

I got a lot of push back from the science center about having "violated"
virtual machine architecture (a CCW that wasn't defined in any hardware
manual). They explained that the appropriate way to violate the 360
principles of operation was with the "diagnose" instruction ... which
was defined as having "model" dependent implementation. The fiction was
then to define a virtual machine "hardware model" ... where the
operation of the diagnose instruction was according to the virtual
machine (model) specification.

CMS was modified to use "a" diagnose instruction at startup to determine
whether it was running in a virtual machine or (the instruction "failed")
on a real machine. If running in a virtual machine, it would be set up
to use the diagnose instruction for disk i/o (semantics about the
same as my special CCW), otherwise SIO (& interrupts) for disk I/O.
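The probe logic can be sketched like this. The function names and the machine flag are hypothetical stand-ins; the real test was a DIAGNOSE that program-checks on bare hardware but is simulated by CP:

```python
# Sketch of the CMS startup probe (hypothetical interfaces): issue a
# DIAGNOSE; on real hardware it program-checks, under CP it is simulated.

class ProgramCheck(Exception):
    """Stands in for the S/360 program interrupt."""

def diagnose(machine):
    if not machine["under_cp"]:
        raise ProgramCheck("operation exception on bare hardware")
    return "cp-simulated"

def select_disk_io(machine):
    """Pick the disk I/O path at IPL time, the way early CMS did."""
    try:
        diagnose(machine)
        return "diagnose"          # fast path via CP
    except ProgramCheck:
        return "sio+interrupts"    # real channel programs on bare metal

assert select_disk_io({"under_cp": True}) == "diagnose"
assert select_disk_io({"under_cp": False}) == "sio+interrupts"
```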

In the initial transition to VM370 (release 1), CMS (cambridge monitor
system) was renamed CMS (conversational monitor system) and
the test for running in a virtual machine was removed, as well
as the code to use SIO (& interrupts) for disk I/O ... eliminating
CMS's ability to run on bare hardware.

In cp67 there was a facility for saving "named" virtual memory
pages and ipl-by-name virtual memory pages. The NAME specifications
were part of a kernel module (renamed DMKSNT for VM370). In cp67,
the named tables specified the range of virtual pages to be saved
(and the disk location where they were to be saved). The "ipl-by-name"
would modify virtual memory tables to point to the specified disk
location (with the RECOMP bit ... i.e. the disk location was R/O to
the page replacement algorithm ... the page could be fetched
from that location ... but when it was to be replaced, it
had to go to a newly recomputed disk location).
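A minimal sketch of that bookkeeping, with hypothetical structures and slot numbers (the real tables lived in the DMKSNT module and the CP paging areas):

```python
# Sketch of cp67 ipl-by-name bookkeeping. RECOMP marks a saved disk
# slot as read-only to page replacement: a page may be fetched from
# its saved slot, but on replacement it must go to a fresh slot.

class PTE:
    def __init__(self, disk_slot, recomp):
        self.disk_slot = disk_slot
        self.recomp = recomp

next_free_slot = 1000   # hypothetical paging-area allocator

def ipl_by_name(saved_slots):
    """Point the virtual memory's page table at the saved pages."""
    return [PTE(slot, recomp=True) for slot in saved_slots]

def page_out(pte):
    """Replacement: a RECOMP page never overwrites the saved copy."""
    global next_free_slot
    if pte.recomp:
        pte.disk_slot = next_free_slot   # newly recomputed location
        next_free_slot += 1
        pte.recomp = False
    return pte.disk_slot

tables = ipl_by_name([500, 501, 502])
assert page_out(tables[0]) == 1000       # moved off the saved slot
assert tables[1].disk_slot == 501        # untouched pages stay saved
```

The saved copy on disk is therefore never dirtied by any one virtual machine's execution, so every later ipl-by-name gets the same pristine image.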

360/67 only had 1mbyte virtual segment sizes ... which
weren't useful for virtual memories that were typically smaller
than 1mbyte ... so cp67 implemented "shared pages". The
named specification could optionally specify certain
pages that were to be "shared" (as part of the
ipl-by-name). The first time ipl-by-name was invoked
for a named system, the "shared pages" were brought into
real storage and "fixed". From then on ... all other
ipl-by-names (for that system) would have their virtual
memory page table entries set to those (fixed) real
pages. For CMS, this originally was 3 pages. Protection
was achieved by fiddling with the os360 storage protect
keys (and not allowing cms to be dispatched with a
psw in "real" key zero).
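The first-IPL-fixes, later-IPLs-map behavior can be sketched as below; the registry, allocator, and frame numbers are hypothetical:

```python
# Sketch of cp67 shared-page handling: the first ipl-by-name of a named
# system brings its shared pages into real storage and fixes them;
# every later IPL just points its page table at the same real frames.
# Protection came from storage-key fiddling, not from per-user copies.

fixed_frames = {}        # named system -> list of real frame numbers
next_frame = [100]       # toy real-storage allocator

def ipl_shared(name, shared_page_count):
    """Return the real frames backing this named system's shared pages."""
    if name not in fixed_frames:            # first IPL: fetch and fix
        frames = list(range(next_frame[0],
                            next_frame[0] + shared_page_count))
        next_frame[0] += shared_page_count
        fixed_frames[name] = frames
    return fixed_frames[name]               # later IPLs: same frames

u1 = ipl_shared("CMS", 3)    # CMS originally shared 3 pages
u2 = ipl_shared("CMS", 3)
assert u1 is u2              # both users map the identical real frames
```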

As previously mentioned, CMS & the saved/shared named systems
were reorganized for vm370 to take advantage of 370 64kbyte
segments (16 4k virtual pages) ... and originally the 370
segment protect facility. Unfortunately, because
of 370/165 hardware schedules, 370 segment
protection was one of the things dropped from
the announcement ... and vm370 had to retrofit
the cp67 key fiddling, storage protect mechanism.

I did a page mapped filesystem for cp67 ... eliminating
the need to have "named systems" ... and in the morph to vm370
had a large set of features/functions I referred to
as "virtual memory management" ... arbitrary virtual
memory pages could be mapped to filesystem page locations,
along with support for arbitrary shared segment operation.
As mentioned, most of these changes for CMS "shared operation"
(new portions of CMS code redone to run in
R/O shared/protected segments, CMS editor redone to run in a
R/O shared/protected segment, etc) were picked up for vm370
release 3. However, since the page mapped filesystem
support wasn't being picked up ... the stuff was remapped
to DMKSNT "saving" & "loading" using the DCSS diagnose.

The full page mapped filesystem had a bunch of additional
capability ... in addition to significantly reducing
virtual machine simulation overhead (for file system
operations) ... even compared to the CMS diagnose I/O
implementation. It could also provide for asynchronous
execution overlapped with I/O ... w/o having asynchronous
support in CMS (by fiddling page invalid bits) ... this
was done dynamically based on load and configuration and
operations being performed.
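The invalid-bit trick can be illustrated with a toy timeline. All the numbers here are hypothetical; the point is only the shape of the overlap:

```python
# Toy timeline showing why fiddling the page invalid bits buys overlap
# without any asynchronous support in CMS: the I/O for all mapped pages
# is started at once, the program keeps running, and only touching a
# still-invalid page blocks (a page fault that waits on the I/O).

IO_TIME = 10       # time units for one page's I/O (hypothetical)
WORK_PER_PAGE = 8  # compute time between touching successive pages

def synchronous(pages):
    """Wait for each page's I/O before computing on it."""
    t = 0
    for _ in range(pages):
        t += IO_TIME + WORK_PER_PAGE
    return t

def overlapped(pages):
    """Start all I/O at once; fault (wait) only on a not-yet-ready page."""
    t = 0
    for p in range(pages):
        ready_at = (p + 1) * IO_TIME   # I/Os complete one after another
        t = max(t, ready_at)           # fault only if the page isn't in
        t += WORK_PER_PAGE
    return t

assert overlapped(4) < synchronous(4)   # 48 vs 72 time units here
```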

In the early 80s, I did a project that took the
kernel spool file system and moved it into a virtual
address space ... with the implementation being
redone in vs/pascal. The objective was to make it
run at least ten times faster for all sorts of
operations ... and the capability and thruput leveraged
the page mapped filesystem support.

One of the issues was that I had started the HSDT
project (high-speed data transport) ... some number
of past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

with lots of T1 (full-duplex 1.5mbit, about 300kbyte/sec
aggregate) and faster links. Nominal vnet/rscs was
using the spool file interface ... which was synchronous
4k byte operations ... maybe 30-40 ops/sec if there
was no spool file contention ... possibly 4-5 ops/sec
on a heavily loaded system (20kbytes/sec). That was fine
for RSCS/VNET with 9.6kbit links ... but I could easily need 3-4mbyte/sec
thruput (instead of 20kbyte-30kbyte/sec).
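The arithmetic behind that bottleneck, using the figures from the text:

```python
# Spool-file interface: synchronous 4k operations.
PAGE = 4096

light_load = 30 * PAGE             # ~30-40 ops/sec uncontended
heavy_load = 5 * PAGE              # ~4-5 ops/sec on a loaded system

# Full-duplex T1: 1.5mbit each direction, both directions active.
t1_aggregate = 2 * 1_500_000 // 8  # bytes/sec

assert light_load < 165_000        # ~120-160 kbytes/sec at best
assert heavy_load < 21_000         # ~20 kbytes/sec when contended
# A single T1 already needs ~375 kbytes/sec; multiple T1-and-faster
# links pushed the requirement to 3-4 mbytes/sec.
assert t1_aggregate > light_load
```

Even the uncontended spool rate falls well short of one T1 link, which is why HSDT needed something other than the stock spool interface.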

For other topic drift ... I also started working with
NSF on T1 links for something that was going to become
the NSFNET backbone (the operational precursor to the modern
internet). Somewhere along the way, there was some internal
politics and we were prevented from bidding on the NSFNET backbone.
The head of NSF tried to help by writing a letter to the
corporation (copying the CEO) ... but that seemed to
just aggravate the internal politics (statements like
what we already had running was at least five yrs
ahead of all other bid submissions for building something
new). misc. old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
