Anne & Lynn Wheeler wrote:
as more environments changed from the terminal emulation paradigm to the
client/server paradigm ... you were starting to have asymmetric server
bandwidth requirements, with individual (server) adapter card thruput
equivalent to aggregate lan thruput ... i.e. servers needed thruput
capacity equal to the aggregate requirements of all their clients.
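
a minimal back-of-envelope sketch of that asymmetry (the client count
and per-client rate below are hypothetical, purely for illustration):

  # hypothetical numbers: N clients each averaging some rate to one server
  clients = 300            # assumed client population
  per_client_mbit = 0.5    # assumed average per-client demand, mbit/sec
  # the server-side adapter has to carry the sum of all client traffic,
  # so its required thruput equals the aggregate lan thruput
  server_mbit = clients * per_client_mbit
  print(server_mbit)       # -> 150.0 mbit/sec at the server adapter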

re:
http://www.garlic.com/~lynn/2006l.html#35 Token-ring vs Ethernet - 10 years later
http://www.garlic.com/~lynn/2006l.html#36 Token-ring vs Ethernet - 10 years later

the SAA drive was about controlling feature/function as part of trying to maintain the terminal emulation paradigm and forestall the transition to 2-tier/client-server
http://www.garlic.com/~lynn/subnetwork.html#emulation

.... or what we were out doing: pitching 3-tier architecture and what was to become middleware (we had come up with 3-tier architecture and were out pitching it to customer executives, taking heat from the SAA forces)
http://www.garlic.com/~lynn/subnetwork.html#3tier

consistent with the SAA drive and attempts to maintain the terminal emulation paradigm were the low per-card effective thruput and recommended configurations with 100-300 machines sharing the same 16mbit t/r lan (even though effective aggregate bandwidth was less than 8mbit, divided among those 100-300 machines).
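
a quick sketch of that arithmetic (only the <8mbit effective aggregate
and the 100-300 machine range come from the above; the per-machine
shares just follow by division):

  # effective aggregate on the shared 16mbit t/r lan (from above)
  effective_aggregate_mbit = 8.0
  for machines in (100, 300):
      share_kbit = effective_aggregate_mbit * 1000 / machines
      print(machines, "machines ->", round(share_kbit, 1), "kbit/sec each")
  # 100 machines -> 80.0 kbit/sec each
  # 300 machines -> 26.7 kbit/sec each
  # i.e. per-machine shares nowhere near client/server requirements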

for some drift, the terminal emulation paradigm would have been happy to stick with the original coax cable runs ... but one of the reasons for the transition to t/r in support of the terminal emulation paradigm was that a large number of installations were running into lb/sq-ft floor-loading problems from the overloaded long cable tray runs (there had to be a physical cable running from the machine room to each & every terminal).

all of this tended to fence off the mainframe from participating in the emerging new advanced feature/functions around the client/server paradigm. enforcing the terminal emulation paradigm resulted in server feature/function being done outside the datacenter and loads of corporate data leaking out of the datacenter to these servers.

this was what prompted a senior person from the disk division to sneak a presentation into the communication group's internal, annual world-wide conference, where he opened the presentation by stating that the communication group was going to be responsible for the demise of the mainframe disk division. recent reference:
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture

for some additional drift, we sporadically claim that the original SOA (service oriented architecture) implementation was the payment gateway.

we had come up with 3-tier architecture (and what was going to be called middleware) and had also done our ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

we were later asked to consult with a small client/server startup that wanted to perform payment transactions on their server.

turns out that two of the people from this ha/cmp meeting
http://www.garlic.com/~lynn/95.html#13

were now at this small client/server startup and responsible for something that was being called the commerce server
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

