The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Todd Burch) writes:
> Going waaaaay back, look into the instigators for cross memory (AKA XA), and
> you'll find DB2's names at the top of the list.  

x-memory/dual-address space for 3033 ... was a Q&D solution to the
exploding size of the common segment in larger installations.

way before DB2.

relational dbms was system/r all done on vm370 at san jose research in
bldg. 28 ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#systemr

there was system/r technology transfer from sjr to endicott for sql/ds
about the timeframe of 3081.

there was some amount of competition between the "60s database" in stl
and system/r in sjr. STL pointed out that relational doubled the
physical database size (additional space needed by the indexes) and
significantly increased the physical disk accesses (mostly related to
traversing the indexes). SJR pointed out that "60s databases" exposed
direct pointers, which required a lot of system administrative overhead
and increased application complexity. Going into the 80s, disk space
became significantly cheaper (mitigating the relational increase in
disk space requirements for indexes) and real storage sizes became
significantly larger (allowing relational indexes to be cached ...
eliminating a lot of the physical index disk reads). This allowed
relational to move into a much broader market (decreasing hardware
costs, increasing hardware resources, and much lower people skill and
resources needed for database care & feeding).
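the index-traversal vs direct-pointer tradeoff above can be sketched
with a back-of-envelope cost model ... purely illustrative numbers
(fanout, row counts, the index_io function are all made up here, not
anything from the actual system/r or IMS implementations):

```python
import math

def index_io(rows, fanout=100, cached_levels=0):
    """Physical reads for one keyed lookup via a b-tree-style index:
    one read per uncached index level, plus one read for the data
    record itself. Illustrative model only."""
    levels = max(1, math.ceil(math.log(rows, fanout)))
    return max(0, levels - cached_levels) + 1

# "60s database" style: the application holds the record's direct
# disk address, so a lookup is a single physical read.
direct_io = 1

# small real storage, nothing cached: 4 index levels + 1 data read
print(index_io(10_000_000, cached_levels=0))   # 5 physical reads

# larger real storage with all 4 index levels cached: the index
# traversal costs no physical I/O, matching the direct pointer.
print(index_io(10_000_000, cached_levels=4))   # 1 physical read
```

i.e. cheap real storage for caching the index levels is what closed
the physical-disk-access gap, at the price of the extra disk space the
index pages occupy.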

one of the people mentioned in this meeting
http://www.garlic.com/~lynn/95.html#13

claimed to have handled much of the technology transfer from endicott
back to stl/bldg90 for DB2.

for some other random topic drift ... old email when jim was leaving
for tandem and foisting off consulting/contacts to me ... including
consulting to the IMS group:
http://www.garlic.com/~lynn/2007.html#email801006
http://www.garlic.com/~lynn/2007.html#email801016

recent posts discussing dual-address space (sort of a subset of the
access registers that would show up with xa):
http://www.garlic.com/~lynn/2008c.html#33 New Opcodes
http://www.garlic.com/~lynn/2008c.html#35 New Opcodes
http://www.garlic.com/~lynn/2008d.html#69 Regarding the virtual machines
http://www.garlic.com/~lynn/2008e.html#14 Kernels
http://www.garlic.com/~lynn/2008e.html#33 IBM Preview of z/OS V1.10

and mentioning that one of the main itanium architects is also
credited with dual-address space for 3033
http://www.garlic.com/~lynn/2008g.html#60 Different Implementations of VLIW

part of the issue was that the 370 product pipeline had gone
dry during the future system project period (FS was going to
completely replace all 370s). when FS got killed
http://www.garlic.com/~lynn/subtopic.html#futuresys

old post with some extracts from fergus/morris book discussing effects
of FS effort:
http://www.garlic.com/~lynn/2001f.html#33

there was a mad rush to get stuff back into the 370 product pipeline
... overlapped with getting XA moving ... which was going to take 7-8
yrs. Interim stop-gap was 303x. The integrated channel microcode from
the 370/158 was repackaged as the 303x "channel director". The 370/158 was
repackaged as a 3031 (w/o the integrated channel microcode) with a (2nd
158 microengine) channel director. The 370/168 was repackaged as 3032
(with 1-3 channel directors). The 3033 started out as 168 wiring diagram
mapped to faster chip technology.

there was "eagle" ... which wasn't relational.  The databases that
would have been under consideration at the time the XA architecture
was being specified (i.e. referred to as "811" for the nov78 document
dates) would have been IMS and possibly some misc. stuff related to
eagle.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html