jwgli...@gmail.com (John Gilmore) writes:
> Lynn Wheeler's numbers are arithmetically correct, but they are also
> problematic.
>
> Mainframe channels perform multiple concurrent I/O operations that are
> not adequately reflected in them.
my references have been IBM's published numbers for the z196 peak I/O benchmark ... which doesn't have any reference to how many concurrent I/O operations each channel is doing (i.e. even raising the issue could be considered somewhat of a misdirection) ... but just how many aggregate I/O operations a peak z196 configuration can do. I actually have no idea whether IBM's published numbers are arithmetically correct ... they are just what IBM's published numbers are.

several times I have discussed the evolution of bus&tag channels, lessons I learned from doing mainframe channel extender support in 1980, interactions with the group that would release ESCON (a decade later, for es/9000 in 1990 ... by which time it is already obsolete), and being asked in 1988 to help LLNL standardize some serial technology ... which morphs into the fibre-channel standard (and we had operational units in 1991). Later some POK channel engineers become involved in the fibre-channel standard and define an extremely heavy-weight protocol layer that drastically cuts native FCS throughput ... which eventually ships as FICON.

The IBM z196 peak I/O benchmark lists 104 FICONs (heavy-weight protocol layer on top of 104 native FCS, drastically cutting native FCS throughput) and 14 SAPs getting 2M IOPS. Peak SAP throughput is published at 2.2M SSCH/sec with all SAPs running at 100% busy ... but the recommendation is to keep SAPs to a peak of 70% busy, or 1.5M SSCH/sec. By comparison, a recent native FCS has been announced for the e5-2600 claiming over a million IOPS (for a single FCS) ... two such native FCS would then have higher aggregate throughput than the aggregate throughput of 104 FCS with the FICON protocol layered on top.
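the arithmetic can be checked with a short sketch. the input figures are the published numbers quoted above; the per-channel division and the 70% cap calculation are simple arithmetic I'm doing here, not IBM figures:

```python
# Back-of-the-envelope check of the throughput figures quoted above.
# Inputs are the published numbers cited in the text; the per-channel
# division is my own arithmetic, not an IBM figure.

Z196_FICON_CHANNELS = 104       # FICON channels in the peak I/O benchmark
Z196_PEAK_IOPS = 2_000_000      # 2M IOPS with 104 FICONs and 14 SAPs
SAP_PEAK_SSCH = 2_200_000       # 14 SAPs at 100% busy
SAP_RECOMMENDED_BUSY = 0.70     # recommended SAP utilization cap

NATIVE_FCS_IOPS = 1_000_000     # "over a million IOPS" claimed for a
                                # single native FCS on e5-2600

per_ficon = Z196_PEAK_IOPS / Z196_FICON_CHANNELS
print(f"per-FICON throughput: ~{per_ficon:,.0f} IOPS")

sap_capped = SAP_PEAK_SSCH * SAP_RECOMMENDED_BUSY
print(f"SAPs capped at 70% busy: {sap_capped:,.0f} SSCH/sec")

# each native FCS does *over* a million IOPS, so two of them match or
# exceed the whole 104-FICON peak configuration
print(2 * NATIVE_FCS_IOPS >= Z196_PEAK_IOPS)   # True
```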
posts mentioning FICON
http://www.garlic.com/~lynn/submisc.html#ficon

One of the things I learned doing the channel extender support in 1980 for the IBM Santa Teresa Lab (since renamed Silicon Valley Lab; at the time, 300 people from the IMS group were being moved to an offsite bldg) was the effective simulation of dual-simplex operation, with channel programs downloaded (as data) to the remote end of the channel ... effectively channel programs&data were written continuously on the outbound channel and data read continuously on the inbound channel. There was no longer any concept of a half-duplex channel executing a channel program with lots of channel-program protocol chatter, the back&forth latency keeping the resource busy. With separate dedicated paths for outgoing and incoming data, throughput is purely the raw media throughput of the outgoing and incoming channels (there is no concept of channel busy separate from data actually being transmitted).

Part of the issue with serial fibre-optic for the fibre channel standard
http://en.wikipedia.org/wiki/Fibre_Channel
and similar work going on about the same time with the scalable coherent interface (that I also got dragged into)
http://en.wikipedia.org/wiki/Scalable_Coherent_Interface
was trying to eliminate the increasing end-to-end latency penalty in protocols. The objective was to move to protocols that just treated the media as raw continuous transport.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
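the half-duplex vs dual-simplex point above can be illustrated with a toy model. all the latency and block-size numbers here are made up for illustration, not measurements of any real channel:

```python
# Toy model (made-up numbers, not measurements) of why half-duplex
# channel-program chatter caps effective throughput, while dual-simplex
# streaming approaches the raw media rate, as described above.

LINK_MBPS = 100.0          # raw one-way media rate, MB/sec (assumed)
RTT_MS = 2.0               # end-to-end round-trip latency, ms (assumed)
BLOCK_KB = 4.0             # data moved per channel-program interaction
CHATTER_ROUND_TRIPS = 3    # protocol handshakes per block (assumed)

# half-duplex: every block pays the transfer time PLUS the round trips
# of channel-program protocol chatter, during which the channel is busy
# but no data flows
transfer_ms = (BLOCK_KB / 1024.0) / LINK_MBPS * 1000.0
half_duplex_ms = transfer_ms + CHATTER_ROUND_TRIPS * RTT_MS
half_duplex_mbps = (BLOCK_KB / 1024.0) / (half_duplex_ms / 1000.0)

# dual-simplex: channel programs stream continuously outbound while data
# streams continuously inbound, so the link is only ever occupied by
# actual data transfer
dual_simplex_mbps = LINK_MBPS

print(f"half-duplex effective rate: {half_duplex_mbps:.2f} MB/sec")
print(f"dual-simplex rate:          {dual_simplex_mbps:.2f} MB/sec")
```

with these assumed numbers the chatter latency dominates: the half-duplex effective rate comes out well under 1 MB/sec against a 100 MB/sec link, which is the shape of the penalty the protocol work was trying to eliminate.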