Jim Mulder wrote:
The architecture scavenged two PTE bits
to allow for 64mbytes of real storage.  I don't think the 3033 ever
supported more than 32mbytes, and I am not sure about the 3081, but there
were customers running MVS/370 on the 3090 with 64mbytes of real storage.

re:
http://www.garlic.com/~lynn/2006m.html#27 Old Hashing Routine

in past posts i've told the story both ways ... both of the unused bits defined by the architecture allowing PTEs to address up to 2**14 4k real pages (64mbytes), or a single unused bit used by the 3033 to support 2**13 4k real pages (32mbytes). i remember lots of 32mbyte 3081s but don't remember any 64mbyte 3081s.
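
the bit arithmetic behind those numbers can be checked with a short sketch (a 4k page implies a 12-bit byte offset, so the page-frame-number width determines addressable real storage; the function name here is just illustrative):

```python
PAGE_SIZE = 4096  # 4k pages -> 12 bits of byte offset within a page

def real_storage(frame_bits):
    """real storage addressable with the given page-frame-number width"""
    return (2 ** frame_bits) * PAGE_SIZE

# base 370 PTE: 12-bit page frame number -> 16mbytes
assert real_storage(12) == 16 * 1024 * 1024
# one scavenged bit (the 3033 case): 2**13 pages -> 32mbytes
assert real_storage(13) == 32 * 1024 * 1024
# both unused bits (the architecture case): 2**14 pages -> 64mbytes
assert real_storage(14) == 64 * 1024 * 1024
```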

part of this is my scenario that dasd had declined in relative system performance by a factor of ten over a 10-15 yr period ... i.e. the rest of the system resources/thruput increased by 40-50 times while dasd thruput increased by only 4-5 times.

at least by the time i released the resource manager on 11may76, you were starting to see real storage used to compensate for system thruput being constrained by dasd performance. i had done much of the work as an undergraduate in the 60s for cp67 ... it then got dropped in the morph from cp67 to vm370 ... but i was later allowed to rerelease it as the resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

i had a comparison of a cp67 360/67 configuration and a vm370 3081 configuration ... and observed that if overall system thruput had kept pace with the cpu, the typical number of users would have gone from 80 on the 360/67 to over 2000 on the 3081 ... but in fact, typical 3081 configurations tended to be more like 300-400 users ... which reflected the change in dasd thruput between the 360/67 and the 3081.
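
a back-of-envelope check of that comparison (numbers from the text; the 350-user midpoint is an illustrative assumption):

```python
users_360_67 = 80
observed_3081_users = 350   # midpoint of the typical 300-400 range (assumption)

# if overall thruput had kept pace with the cpu: 80 -> 2000 users
cpu_paced_users = 2000
cpu_growth = cpu_paced_users / users_360_67          # 25x

# what actually happened tracks dasd thruput growth instead
actual_growth = observed_3081_users / users_360_67   # ~4.4x, in dasd's 4-5x range

assert cpu_growth == 25
assert 4 <= actual_growth <= 5
```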

this is the story about how initially the san jose disk division assigned their performance group to refute the claims ... but they came back a couple weeks later and explained that i had actually understated the problem.

the other issue is that ckd dasd from the 60s traded off i/o thruput (with extended & multi-track searches) against real memory use ... i.e. a more real-memory-intensive approach cached indexes to specific disk locations ... while vtoc & pds multi-track searches spun the disk to find the location. some of the IMS ccw programs took this to the extreme with long ccw programs that searched and found fields in disk-based index structures ... which were then read and used for further searching ... all in a single channel program.
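
a sketch of the classic search/TIC pattern that such channel programs were built from, modeled here as plain python data (opcodes per 360 ckd convention; the layout and operands are simplified/illustrative, and real multi-track variants set a modifier bit on the search command):

```python
# ckd channel command codes (360 convention)
SEARCH_ID_EQUAL = 0x31   # compare record id on the track as it spins by
TIC             = 0x08   # transfer-in-channel: branch within the program
READ_DATA       = 0x06   # read the data area of the matched record

# "search; tic back to the search until it hits" keeps the channel,
# controller and device busy spinning the disk until the id matches
channel_program = [
    (SEARCH_ID_EQUAL, "record-id"),
    (TIC, "back to search"),   # loops until the search compares equal
    (READ_DATA, "buffer"),     # reached only after a successful search
]
```

the IMS extreme mentioned above was simply much longer chains of this shape, with the data read by one search feeding the operands of the next.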

i've often repeated the story about being called into a large national retailer with a large loosely-coupled vs2 processor complex ... with processors dedicated to each region. they were experiencing severe performance degradation. it turned out they had a large application program library shared across all processors in the complex with a three-cylinder (3330) PDS index. the aggregate number of I/Os to the library (across all processors in the complex) averaged about 6.5/sec ... because they were doing an avg 1.5-cylinder search of the PDS index. the multi-track search of the first cylinder at 3600rpm and 19 tracks/cylinder was about .3secs elapsed time (i.e. limiting the complex to approx three application program member loads per second aggregate across all the processors).
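
the .3secs figure falls straight out of the 3330 geometry (19 tracks/cylinder, 3600rpm, one revolution per track searched):

```python
RPM = 3600
TRACKS_PER_CYLINDER = 19              # 3330 geometry
rev_time = 60.0 / RPM                 # one revolution = 1/60 sec
cyl_search = TRACKS_PER_CYLINDER * rev_time   # ~0.317 sec: the ".3secs" above

# with the shared library's device busy ~0.3 sec per first-cylinder search,
# the whole complex tops out around three member loads per second
loads_per_sec = 1.0 / cyl_search      # ~3.2
```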
http://www.garlic.com/~lynn/subtopic.html#dasd

the other place that you saw real memory compensating for dasd performance was with the emergence of rdbms in the 80s. there were arguments between STL (60s physical database) and SJR ... the original relational/sql database implementation
http://www.garlic.com/~lynn/subtopic.html#systemr

the 60s paradigm had direct record pointers embedded as part of the database infrastructure and exposed to application programmers. this allowed going directly from one record to the next relatively efficiently. however, there was heavy people and skill resource overhead for managing the structure.

system/r abstracted away the direct pointers with the relational paradigm ... substituting an internal index tree structure managed by the dbms (no longer requiring direct administrative and application support). this significantly decreased the people and skill resources needed to manage the infrastructure. however, it might take 5-6 disk reads of the index structure to find the disk record pointer to the actual record needed. the other argument was that the on-disk index structure could double the physical disk space required for a relational implementation vis-a-vis a "60s" physical dbms implementation.
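
the 5-6 reads figure is plausible simple index-tree arithmetic: with fanout f, reaching one of n record pointers takes roughly ceil(log_f(n)) page reads when none of the index is cached (the fanout and record count below are illustrative assumptions, not system/r's actual parameters):

```python
import math

def index_reads(n_records, fanout):
    """approx uncached page reads to walk an index tree of the given fanout"""
    return math.ceil(math.log(n_records) / math.log(fanout))

# a modest per-page fanout and a large table give the 5-6 reads range
assert index_reads(10**7, 20) == 6
assert index_reads(10**7, 30) == 5
```

caching the top levels of that tree in real memory (the 80s shift described below) removes a read per cached level, which is exactly where the exploding real-storage sizes paid off.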

what happened in the 80s was that disk space became increasingly less expensive ($$/mbyte, which has since shifted to $$/gbyte) and the explosion in real memory sizes allowed much of the relational index to be cached in real memory (eliminating a lot of the additional disk reads needed to get to the actual record).


various past posts discussing the STL/SJR argument over "60s" physical databases (with direct record pointers) and relational, which created a dbms metaphor that eliminated the direct record pointers from the database abstraction:
http://www.garlic.com/~lynn/2000c.html#13 Gif images: Database or filesystem?
http://www.garlic.com/~lynn/2004e.html#23 Relational Model and Search Engines?
http://www.garlic.com/~lynn/2004e.html#25 Relational Model and Search Engines?
http://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
http://www.garlic.com/~lynn/2004p.html#1 Relational vs network vs hierarchic databases
http://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
http://www.garlic.com/~lynn/2005.html#25 Network databases


various past posts about the two unused bits in the 370 PTE and using them to increase the number of real pages that could be specified
(greater than 2**12 4k real pages past 16mbytes):
http://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor
http://www.garlic.com/~lynn/2003d.html#26 Antiquity of Byte-Word addressing?
http://www.garlic.com/~lynn/2004.html#17 Holee shit!  30 years ago!
http://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces
http://www.garlic.com/~lynn/2006l.html#2 virtual memory


various posts describing the cp67 360/67 with vm370 3081 comparison and
the increasing disk system thruput constraint:
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html