The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Tom Schmidt) writes:
> Have the presenter review ancient history in the S/360 line -- the 360 line 
> generally supported differing page sizes (2K and 4K) and the 360/67 
> supported 2K, 4K and even 1M page sizes.  (I don't recall whether any SCP 
> shipped that dealt with 1M page sizes, especially in the VERY expensive 
> storage era of the S360 line though.  That could be why the idea lurked for 
> lo these many years.)  

360/67 was the only 360 that supported virtual memory (other than a
custom 360/40 with special hardware modifications that cambridge did
before it got a 360/67).  360/67 supported only a 4k page size and
1mbyte segments ... however 360/67 supported both 24-bit and 32-bit
virtual addressing

a copy of 360/67 functional characteristics at bitsavers
http://www.bitsavers.org/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf

max. storage on a 360/67 uniprocessor was 1mbyte of real storage (and
a lot of 360/67s were installed with 512k or 768k real storage). out
of that you had to subtract the fixed storage belonging to the kernel
... so there would never be a full 1mbyte of real storage left over
for paging a 1mbyte page.

note that the 360/67 multiprocessor also had a channel director
... which had all sorts of capability ... including letting all
processors in a multiprocessor environment address all i/o channels
... while still being partitionable into independently operating
uniprocessors, each with their own dedicated channels. a standard 360
multiprocessor only allowed sharing of memory ... each processor could
only address its own dedicated i/o channels. the settings of the
channel director could be "sensed" via settings in specific control
registers (again see 360/67 functional characteristics).

equivalent capability allowing all processors to address all channels
(in a multiprocessor environment) and supporting more than 24-bit
addressing didn't show up again until the 3081 and XA.

370 virtual memory had a choice of 2k or 4k page size as well as 64k
or 1mbyte segments.
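those size options just change where a virtual address splits into
segment index, page index, and byte offset. a toy sketch (not actual
370 table-walk code ... field widths fall out of the chosen sizes):

```python
def decompose(addr, page_size, seg_size):
    """split a 370 virtual address into (segment index, page index,
    byte offset) for any of the architected combinations:
    page_size in (2048, 4096), seg_size in (65536, 1048576)."""
    assert page_size in (2048, 4096) and seg_size in (65536, 1048576)
    seg = addr // seg_size            # indexes the segment table
    page = (addr % seg_size) // page_size   # indexes that segment's page table
    off = addr % page_size            # byte offset within the page
    return seg, page, off
```

e.g. with 4k pages and 64k segments, address 0x12345 is segment 1,
page 2, offset 0x345; with 2k pages and 1mbyte segments the same
address lands in segment 0, page 36.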

vm370 used 4k page size and 64k segments as the default ... and
supported 64k shared segments for cms.

however, when it was supporting guest operating systems with virtual
memory ... the vm370 "shadow tables" had to use whatever page size the
guest operating system was using (exactly mirroring the guest's
tables). dos/vs and vs1 used 2k paging ... os/vs2 (svs & mvs) used 4k
paging.
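the shadow-table idea itself is just composing two translations: the
guest's page table maps guest-virtual to guest-real, vm's own table
maps guest-real to host-real, and the shadow table the hardware
actually walks is the composition. a toy sketch with dicts of page
numbers (not vm370's actual data structures ... and note the shadow
has to be built in the guest's page size, which is the 2k/4k point
above):

```python
def build_shadow(guest_table, host_table):
    """compose guest-virtual->guest-real with guest-real->host-real
    into the single guest-virtual->host-real shadow table.
    tables are dicts of page numbers; a missing key means 'invalid'."""
    shadow = {}
    for gv, gr in guest_table.items():
        hr = host_table.get(gr)
        if hr is not None:        # only pages valid in BOTH tables appear
            shadow[gv] = hr
    return shadow
```

whenever the guest changes its tables (or vm pages out a guest page),
the affected shadow entries have to be invalidated and rebuilt.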

there was an interesting problem at some customers with the doubling
of cache size going from the 370/168-1 to the 370/168-3. doubling the
cache size, they needed one more bit from the address to index cache
line entries and took the "2k" bit ... assuming that the machine was
nominally for os/vs2 use. however, there were some number of customers
running vs1 under vm on 168s. these customers saw degraded performance
when they upgraded from the 168-1 to the 168-3 with twice the cache
size.

the problem was that the 168-3 ... every time there was a switch
between 2k page mode and 4k page mode ... would completely flush the
cache ... and when in 2k page mode it would only use half the cache
(same size as the 168-1) ... using all the cache only in 4k page
mode. using only half the cache should have given the same performance
on the 168-3 as on the 168-1. however, the constant flushing of the
cache, whenever vm moved back & forth between (vs1's shadow table) 2k
page mode and (standard vm) 4k page mode ... resulted in worse
performance on the 168-3 than on a straight 168-1.
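a toy model of that behavior (illustrative set count and line size,
not the real 168 geometry): a direct-mapped cache where 2k-page mode
can only index half the sets, and any 2k<->4k mode switch flushes
everything. the flushes, not the halved capacity, are what kill a
workload that bounces between modes on every guest dispatch:

```python
class Cache168:
    """toy direct-mapped cache: 4k mode indexes all sets, 2k mode
    only half of them, and every mode switch is a full flush
    (as described for the 168-3). geometry is made up."""
    def __init__(self, sets=1024, line=32):
        self.sets, self.line = sets, line
        self.tags = {}                  # set index -> tag
        self.mode = '4k'
        self.flushes = self.hits = self.misses = 0

    def set_mode(self, mode):
        if mode != self.mode:
            self.tags.clear()           # full cache flush on mode switch
            self.flushes += 1
            self.mode = mode

    def ref(self, addr):
        usable = self.sets if self.mode == '4k' else self.sets // 2
        idx = (addr // self.line) % usable
        tag = addr // (self.line * usable)
        if self.tags.get(idx) == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[idx] = tag
```

run a loop over the same addresses twice and the second pass is all
hits ... insert a pair of set_mode() calls between passes (as vm did
on every trip into and out of a vs1 guest) and the warmed cache is
gone both times.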

for a little drift ... a number of recent postings compare the
performance/thruput of a 768kbyte 360/67 running cp67 at the cambridge
science center with a 1mbyte 360/67 running cp67 at the grenoble
science center. the machine at cambridge was running a global LRU
replacement algorithm that i had created and grenoble was running a
local LRU replacement algorithm from the academic literature.
cambridge, running effectively twice the workload with 104 4k
"available" pages (after fixed kernel requirements on the 768k
machine), had better performance than grenoble's system (with 155 4k
"available" pages after fixed kernel requirements).
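the global-vs-local distinction is easy to show in a toy fault
simulation (a sketch of the general idea only ... nothing like cp67's
actual code, and real global LRU was clock-approximated, not exact
LRU): global LRU lets all (process, page) pairs compete for one pool,
local LRU splits the frames into fixed per-process partitions, which
can starve one process while another wastes its share:

```python
from collections import OrderedDict

def _lru_faults(refs, capacity):
    """page faults for exact LRU over one reference stream."""
    lru, faults = OrderedDict(), 0
    for key in refs:
        if key in lru:
            lru.move_to_end(key)        # mark most-recently-used
        else:
            faults += 1
            if len(lru) == capacity:
                lru.popitem(last=False) # evict least-recently-used
            lru[key] = True
    return faults

def faults_global(refs, frames):
    """global LRU: one pool, all (proc, page) pairs compete."""
    return _lru_faults(refs, frames)

def faults_local(refs, frames):
    """local LRU: frames split evenly into per-process partitions."""
    procs = sorted({p for p, _ in refs})
    cap = frames // len(procs)
    return sum(_lru_faults([pg for p, pg in refs if p == proc], cap)
               for proc in procs)
```

with 4 frames, a process cycling over 3 pages interleaved with a
process touching 1 page, the global pool holds all 4 working-set pages
and faults only on first touch, while the local 2+2 split makes the
cycling process thrash on every single reference.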
http://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
http://www.garlic.com/~lynn/2006i.html#31 virtual memory
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory
http://www.garlic.com/~lynn/2006j.html#1 virtual memory
http://www.garlic.com/~lynn/2006j.html#17 virtual memory
http://www.garlic.com/~lynn/2006j.html#25 virtual memory
http://www.garlic.com/~lynn/2006l.html#14 virtual memory
http://www.garlic.com/~lynn/2006o.html#11 Article on Painted Post, NY
http://www.garlic.com/~lynn/2006q.html#19 virtual memory
http://www.garlic.com/~lynn/2006q.html#21 virtual memory

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
