[email protected] (DASDBILL2) writes:
> Since virtual storage is now so much less expensive and so much more
> available than storage [1] was 50 years ago, why not be
> really extravagant and use one whole byte per store?  If the byte
> contains 0, then the store number is not valid, or something like
> that, and if the byte contains anything other than 0, then the store
> number is valid.  This should result in much simpler code to access
> this table.
> Bill Fairchild 
> Nolensville, TN 
>
> [1] In those days, there was no virtual or real storage available on
> IBM's mainframes.  There was only "storage".


account of the justification for moving MVT to virtual memory ... MVT had
significant efficiency issues with storage allocation, and the move to
virtual memory would allow running 16 concurrent tasks in a 1mbyte machine
with little or no paging (MVT typically used only 25% of the storage in a
region)
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

as an aside, the 360/67 was a modified 360/65 with the addition of hardware
virtual memory ... originally intended for tss/360 ... tss/360 never did
reach production quality ... so a lot of places would run them in straight
360/65 mode. However, univ of michigan wrote their own virtual memory
operating system, MTS. Stanford also wrote their own virtual memory
system, Orvyl (where the Wylbur editor was originally implemented).

and of course the science center did virtual machine cp67 ... some
past posts mentioning cp67
http://www.garlic.com/~lynn/subtopic.html#545tech

actually the science center got a 360/40, made their own hardware
modifications to support virtual memory and did cp40 .... pending
availability of 360/67 (at which time, cp40 morphs into cp67).

even tho the MVT move to virtual memory ... initially SVS (later morphs
into MVS) ... was planned to do little actual paging, the design of the
page replacement algorithm was still important ... and I got into a big
argument that they had done some things very wrong. It wasn't until well
into the MVS product cycle that somebody realized how wrong ... it turns
out that they were selecting non-changed, high-use, shared linkpack pages
for replacement ahead of selecting lower-use, private, changed data
pages.
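the mistake can be illustrated with a toy victim-selection sketch (not the actual MVS code ... fields and numbers are made up):

```python
# Toy contrast between the described replacement bug and a plain
# least-used policy.  Each page carries a recent-use count, a changed
# bit, and a shared flag (all hypothetical).
from dataclasses import dataclass

@dataclass
class Page:
    name: str
    use_count: int    # recent references; higher = hotter
    changed: bool     # must be written out before the frame is reused
    shared: bool      # e.g. a shared linkpack page

def flawed_victim(pages):
    # the described mistake: prefer non-changed pages outright, so a
    # hot, shared, non-changed page loses to a cold, changed one
    return min(pages, key=lambda p: (p.changed, p.use_count))

def lru_victim(pages):
    # evict the least-used page regardless of the changed bit
    return min(pages, key=lambda p: p.use_count)

pages = [Page("linkpack", use_count=100, changed=False, shared=True),
         Page("private-data", use_count=2, changed=True, shared=False)]
print(flawed_victim(pages).name)   # linkpack  (the hot page!)
print(lru_victim(pages).name)      # private-data
```

avoiding the write-out of a changed page saves one I/O ... but stealing a high-use shared page costs a page fault for every task that touches it.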

An example of horrible storage locality was the initial port of
APL\360 to CP67/CMS for CMS\APL. In the APL\360 environment, workspaces
were typically 16kbytes (or sometimes 32kbytes) and a complete workspace
was swapped as a whole unit ... so locality of use was of little
importance. However, CMS\APL allowed workspaces as large as the virtual
memory. The original APL\360 implementation would allocate a new storage
location for every assignment ... and when it completely exhausted the
workspace ... it would do garbage collection, compacting all data into a
contiguous area of storage ... and then start again. Even a small APL
program could repeatedly touch every available workspace storage
location. In the APL\360 swapping environment this was hardly noticed.
In the CMS\APL demand-paged environment with workspaces as large as
virtual memory ... a "small" APL program would throw the machine into
"page-thrashing". As part of the creation of CMS\APL ... the APL\360
storage management had to be redone to make it extremely virtual-memory
friendly.
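a toy model of the effect (numbers made up, nothing like APL\360's actual internals) ... allocate-on-every-assignment sweeps the whole workspace while in-place reuse stays on one page:

```python
# Toy model of why the original APL\360 allocator thrashed under demand
# paging: every assignment takes a fresh slot until the workspace is
# exhausted, then garbage collection compacts and allocation restarts.
WS_SLOTS = 4096          # workspace size in slots (illustrative)
SLOTS_PER_PAGE = 256     # assumed slots per virtual-memory page

def pages_touched_always_new(assignments: int) -> int:
    nxt, high_water = 0, 0
    for _ in range(assignments):
        if nxt == WS_SLOTS:      # workspace exhausted: GC and compact
            nxt = 0
        nxt += 1                 # fresh slot for every assignment
        high_water = max(high_water, nxt)
    return -(-high_water // SLOTS_PER_PAGE)   # ceiling division

def pages_touched_in_place(assignments: int) -> int:
    return 1 if assignments else 0   # reassignment reuses the same slot

print(pages_touched_always_new(100_000))  # 16: sweeps the whole workspace
print(pages_touched_in_place(100_000))    # 1
```

for a 16kbyte swapped workspace the sweep is invisible; make the workspace as large as virtual memory and every sweep is a pass over every page.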

trivia ... CMS\APL, with large workspaces and a newly introduced API for
accessing system services ... like file read/write ... allowed use for
real-world applications. One such: the Armonk business planners used it
for modeling IBM business ... loading the most valuable of corporate
assets (all the customer details) on the science center cp67 system and
using it remotely from Armonk. This also required some amount of
security ... since the science center machine was also in use by
students&staff at various Cambridge/Boston educational institutions.

Being able to do real-world applications in CMS\APL gave rise to the
HONE system for world-wide sales&marketing support ... some past
posts mentioning HONE &/or APL
http://www.garlic.com/~lynn/subtopic.html#hone

the science center was also on the forefront of performance modeling,
performance monitoring, workload & system profiling, etc ... some of
which later evolves into things like capacity planning.

One of the tools started out using the REDCAP full instruction trace
program from POK to capture all instruction/storage fetch&store refs and
model things like locality of reference in a virtual memory paged
environment. This was used as an aid in moving apl\360 to cms\apl. As it
got more sophisticated, it was used for semi-automated program
reorganization for improved operation in a virtual memory environment.
Several of the large OS/360 application development groups also started
using it as part of their move from MVT real-storage to the virtual
memory environment (it was also used for simple "hot-spot" monitoring).
This was eventually released to customers as the "VS/Repack" product.
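the first step of that kind of trace reduction might look like this sketch (hypothetical page size and trace) ... fold a reference trace into per-page counts, the raw material for hot-spot reports and for regrouping routines onto the same pages:

```python
# Hypothetical sketch of trace reduction for hot-spot reporting:
# fold an instruction/storage reference trace (a list of addresses)
# into per-page reference counts.  PAGE_SIZE and the trace are made up.
from collections import Counter

PAGE_SIZE = 4096

def page_ref_counts(trace):
    return Counter(addr // PAGE_SIZE for addr in trace)

def hottest_page(counts):
    return counts.most_common(1)[0][0]

trace = [0x0100, 0x2100, 0x2200, 0x2300, 0x9000]
counts = page_ref_counts(trace)
print(dict(counts))          # {0: 1, 2: 3, 9: 1}
print(hottest_page(counts))  # 2
```

the reorganization step then becomes a packing problem: put routines that reference each other onto the same pages so the working set shrinks.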

A lot of the work that was done in the 60s and 70s for optimizing
throughput in virtual memory environments is now applicable to
optimizing throughput in processor cache environments.

-- 
virtualization experience starting Jan1968, online at home since Mar1970
