glen herrmannsfeldt <[email protected]> writes:
> For unix/dos/windows file systems, though, there needs to be a disk
> cache where FBA blocks are brought into memory and the appropriate
> bytes copied to user space. Now, the caching ability of that likely
> helps much of the time, but it isn't so different from what CKD
> emulation has to do.

re:
http://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
http://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

but major server apps & rdbms on those platforms do direct disk i/o w/o
system caching (and/or rdbms do their own caching) .... they also do
direct i/o for network lan. the application direct disk i/o was part of
what drove POSIX asynch I/O specifications in the late 80s.
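a minimal sketch (assuming Linux and CPython 3.3+, my own illustration) of the direct-i/o pattern such server apps & rdbms use: open with O_DIRECT so block reads bypass the kernel page cache and land in application-managed, page-aligned buffers:

```python
import mmap
import os

BLK = 4096                           # one FBA-style 4k block
path = "directio_demo.dat"

with open(path, "wb") as f:          # stage one block with ordinary buffered i/o
    f.write(b"\0" * BLK)

# O_DIRECT bypasses the system disk cache; the app does its own caching
flags = os.O_RDONLY | getattr(os, "O_DIRECT", 0)
try:
    fd = os.open(path, flags)
except OSError:
    # some filesystems (e.g. tmpfs) refuse O_DIRECT; fall back for the demo
    fd = os.open(path, os.O_RDONLY)

buf = mmap.mmap(-1, BLK)             # mmap yields the aligned buffer O_DIRECT requires
n = os.readv(fd, [buf])              # read one block into the app's own buffer
os.close(fd)
os.remove(path)
print(n)  # 4096
```

(POSIX asynch I/O then lets the app queue many such block transfers without blocking a thread per request.)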

problem is that since there haven't been industry standard benchmarks
published for mainframe ... it takes some digging to uncover
apples-to-apples comparisons.

FICON architecture layered half-duplex channel paradigm on top of
industry standard fibre channel. As a result, it needed to keep track
of "open exchanges" ... here it increases max from 60 to 600
ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/zsw03059usen/ZSW03059USEN.PDF

above also has z10 ficon express4 at a maximum of 31,000 4kbyte I/Os per
second, or 124MBYTE/sec

base fibre channel didn't need that overhead ... just send out i/o
program to controller/device for execution.

this has zhpf at 92,000 4k channel i/os per second and 368MBYTE/sec,
increasing to 1600MBYTE/sec with a large sequential read/write mix
http://www-03.ibm.com/systems/z/hardware/connectivity/ficon_performance.html
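quick arithmetic check of the quoted figures ... the vendor numbers work out if "4k" is taken as 4,000 bytes and MBYTE as 10**6 bytes:

```python
# throughput = IOPS x block size; vendor figures use decimal 4k/MB
ficon4_iops = 31_000   # z10 ficon express4, max 4kbyte I/Os per second
zhpf_iops = 92_000     # zhpf, 4k channel i/os per second

print(ficon4_iops * 4_000 / 1e6)  # 124.0 MBYTE/sec
print(zhpf_iops * 4_000 / 1e6)    # 368.0 MBYTE/sec
```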

this has z/os on a max. z196 with a peak of 2million 4k I/O ops/second
ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03169usen/ZSW03169USEN.PDF

from above: zHPF improves upon FICON by providing a Transport Control
Word (TCW) that facilitates processing of an I/O request by the channel
and the control unit (and improves throughput compared to original FICON
CCW-at-a-time processing). The TCW has a capability that enables multiple
channel commands to be sent to the control unit as a single entity
instead of as separate commands as in FICON CCW (zHPF/TCW partially
implements the original underlying fibre channel design point from the
late 80s, nearly 25yrs ago).
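a hypothetical latency model (my own illustration, not from the cited IBM docs) of why shipping the command list as a single entity helps: CCW-at-a-time FICON pays one channel/control-unit interaction per command, while a TCW pays one per channel program. the 4-command program and 10us per-interaction latency are assumed numbers for illustration only:

```python
def ios_per_second(cmds_per_io, round_trip_us, batched):
    """I/O rate when each interaction costs round_trip_us microseconds."""
    trips = 1 if batched else cmds_per_io  # TCW collapses n trips into 1
    return 1_000_000 / (trips * round_trip_us)

CMDS = 4       # assumed: e.g. define-extent / locate-record / read / read
RT_US = 10.0   # assumed per-interaction latency, microseconds

print(ios_per_second(CMDS, RT_US, batched=False))  # 25000.0  (CCW-at-a-time)
print(ios_per_second(CMDS, RT_US, batched=True))   # 100000.0 (TCW, batched)
```

the 4x gap in this toy model is of the same flavor as the 31K-vs-92K difference in the published figures, though the real numbers depend on far more than command round trips.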

Figure 6 on pg9 shows how zHPF improves over FICON ... and starts to
approach the throughput of the native fibre channel implementation and
the original native fibre channel design point from the late 80s.

The 2M 4k I/O ops/second is a max 80-processor z196 with (max) 14 system
assist processors (theoretical maximum is 2.2M SSCH running all 14
system assist processors at 100% utilization) and 104 FICON Express8
channels to 11 storage subsystems.

this claims that for e5-2600, the latest emulex delivers over one million
IOPS on a single channel and doubles the bandwidth of the previous
generation
http://www.emulex.com/artifacts/0c1f55d0-aec6-4c37-bc42-7765d5d7a70e/elx_wp_all_hba_romley.pdf

One question is whether the FICON Express8 channels are actually 104
different fibre-channel channels ... aka the 2.2M ops are spread across
104 different fibre-channel channels ... while a single Emulex
fibre-channel channel is able to do 1M ops.

The previous reference has an LSI storage subsystem capable of 724K IOPS
and peak 5Gbyte/sec, while the Adaptec storage subsystem is 450K IOPS
and 6.6Gbyte/sec

The mainframe numbers suggest a peak of 92K IOPS on a single channel
with zHPF (& TCW with some i/o program batching attempting to approach
the base fibre channel paradigm), and a maxed-out z196 is capable of 2M
IOPS with 14 dedicated system assist processors and 104 channels
(although theoretically 104*92K = 9.6M IOPS ... while each emulex
channel is capable of 1M IOPS).
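the arithmetic behind that comparison ... the maxed-out z196 figure is limited by the system assist processors, not by aggregate channel capability:

```python
channels = 104
zhpf_per_channel = 92_000   # peak 4k IOPS on one channel with zHPF
sap_limit = 2_200_000       # theoretical SSCH max, 14 SAPs at 100%
measured = 2_000_000        # published z/OS peak

# if the channels were the limit, the aggregate would be ~9.6M IOPS
print(channels * zhpf_per_channel)  # 9568000

# IOPS each channel actually carries at the measured 2M peak
print(measured // channels)         # 19230
```

so at the published peak each FICON Express8 channel is carrying roughly 19K IOPS ... about a fifth of its own 92K zHPF peak, and about 2% of what a single 1M-IOPS emulex channel claims.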

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
