vbc...@gmail.com (Vince Coen) writes:
> I think the stats on migration failures show that many fail regardless
> of the migration target, mainly because they overestimate project
> time and the quality of the target systems being used in place of the m/f.
>
> Taking a straight view, the mainframe is slow compared to running on
> servers on an instruction-throughput basis.
>
> What they miss, however, is the data throughput specs compared to
> mainframes, where the m/f still wins hands down.
>
> I have tried (just for myself) to build an 8-core PC with separate
> SATA controllers for each 15000 rpm drive to match up with m/f
> performance, but apart from the high cost of each controller there is
> still the speed, or lack of it, of getting from the controllers to the
> application because of bottlenecks in the data bus.
>
> I have not seen any PC/server motherboard design that gets around this
> problem, and until they do the mainframe is still "the man" for data
> processing in bulk.

Lots of migration failures are really failures at making any change at all.

A simple scenario: the financial industry spent billions of dollars in
the 90s to move from "aging" overnight (mainframe) batch settlement to
straight-through processing using large numbers of parallel "killer
micros". A major source of failure was widespread use of industry
parallelization libraries (that had 100 times the overhead of COBOL
batch). I pointed it out at the time, but was completely ignored ...
the toy demos looked so neat. It wasn't until they tried to deploy that
they ran into the scaleup problems (the 100 times parallelization
overhead totally swamped the anticipated throughput increases from
using large numbers of "killer micros" for straight-through
processing). In the meantime there has been an enormous amount of work
by the industry (including IBM) on RDBMS parallelization efficiencies.
An RDBMS-based straight-through processing implementation done more
recently easily demonstrated all of the original objectives from the
90s ... but the financial industry claimed that it would be at least
another decade before they were ready to try again (lots of executives
still bore the scars from the 90s failures and had become risk averse).
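
To make the scaleup arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Only the 100-times overhead factor comes from the description above; the transaction rate and per-core speed are illustrative assumptions.

# Rough sketch of why a 100x per-transaction software overhead swamps
# the gains from parallel "killer micros". Only the 100x overhead
# factor is from the post; the other numbers are illustrative.

MAINFRAME_TPS = 1_000          # assumed batch settlement rate on the m/f
OVERHEAD_FACTOR = 100          # parallelization-library path length vs COBOL batch
MICRO_RELATIVE_SPEED = 1.0     # assume each micro matches one m/f engine per-thread

def cluster_tps(num_micros: int) -> float:
    """Aggregate transactions/sec for a cluster of micros, assuming
    perfect scaling except for the fixed per-transaction overhead."""
    per_micro = MAINFRAME_TPS * MICRO_RELATIVE_SPEED / OVERHEAD_FACTOR
    return num_micros * per_micro

for n in (10, 50, 100, 200):
    print(f"{n:4d} micros -> {cluster_tps(n):8.0f} TPS "
          f"(m/f baseline {MAINFRAME_TPS} TPS)")

# Even with perfect scaling, ~100 micros are needed just to break even
# with the single mainframe; any skew or coordination cost makes it worse.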

In 2009, non-mainframe IBM was touting some of these RDBMS
parallelization scaleup efficiencies. I somewhat ridiculed them
... "From The Annals of Release No Software Before Its Time" ... since I
had been working on it 20 years earlier (and got shut down, being told
I was not allowed to work on anything with more than four processors).

Also, in 1980 I got sucked into doing channel extender for STL, which
was moving 300 people from the IMS group to an off-site bldg. The
channel extender work did lots of optimization to eliminate the
enormous channel protocol chatter latency over the extended link
... resulting in no apparent difference between local and remote
operation. The vendor then tried to get IBM approval for release of my
support ... but there was a group in POK working on some serial stuff
(who were afraid that if it was in the market, it would make releasing
their stuff more difficult) and managed to get approval blocked. Their
stuff was finally released a decade later, when it was already obsolete
(as ESCON with ES/9000). Some past posts:
http://www.garlic.com/~lynn/submisc.html#channel.extender
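
As an illustration of the kind of effect the chatter elimination addressed, here is a minimal latency sketch. All of the round-trip, handshake-count, and service-time numbers below are assumptions for illustration, not figures from the original work.

# Illustrative sketch of why channel protocol "chatter" dominates over an
# extended link, and why collapsing the handshakes hides the distance.
# All numbers here are assumptions for illustration.

LOCAL_RTT_US = 2.0        # assumed round trip on a local channel (microseconds)
REMOTE_RTT_US = 200.0     # assumed round trip over the extended link

def io_time_us(handshakes_per_io: int, rtt_us: float,
               transfer_us: float = 25_000.0) -> float:
    """Elapsed time for one I/O: protocol round trips plus an assumed
    ~25 ms disk service time typical for the era."""
    return handshakes_per_io * rtt_us + transfer_us

chatty = 20    # assumed handshakes per I/O with unoptimized channel protocol
optimized = 1  # assumed after the extender batches/eliminates the chatter

print("local,  chatty   :", io_time_us(chatty, LOCAL_RTT_US), "us")
print("remote, chatty   :", io_time_us(chatty, REMOTE_RTT_US), "us")
print("remote, optimized:", io_time_us(optimized, REMOTE_RTT_US), "us")

# With the chatter eliminated, the remote case lands within ~1% of the
# local case, matching "no apparent difference between local and remote".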

In 1988, I was asked to help LLNL standardize some serial stuff they
had, which quickly morphed into the fibre-channel standard (including
lots of stuff that I had done in 1980). Later some of the POK engineers
defined a heavyweight protocol for fibre-channel that drastically
reduced the native throughput, which was eventually released as FICON.
Some past posts:
http://www.garlic.com/~lynn/submisc.html#ficon

The latest published numbers I have from IBM are a peak I/O benchmark
for the z196 that used 104 FICON (running over 104 fibre-channel) to
get 2M IOPS. At the same time there was a fibre-channel announced for
the e5-2600 blade that claimed over a million IOPS (two such
fibre-channels have greater native throughput than 104 FICON running
over 104 fibre-channel).
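
Dividing out the quoted figures gives the per-link comparison; the only inputs below are the 2M IOPS over 104 FICON and the "over a million IOPS" per fibre-channel claims from above.

# Per-link arithmetic from the quoted figures: z196 peak I/O benchmark
# of 2M IOPS over 104 FICON vs a single e5-2600 blade fibre-channel
# claiming over a million IOPS.

Z196_IOPS = 2_000_000
FICON_LINKS = 104
FCS_IOPS = 1_000_000        # "over a million IOPS" per native fibre-channel

per_ficon = Z196_IOPS / FICON_LINKS
print(f"per-FICON rate: {per_ficon:,.0f} IOPS")               # ~19,231 per link
print(f"native FC links to match z196 peak: {Z196_IOPS / FCS_IOPS:.0f}")

# i.e. roughly 19K IOPS per FICON vs ~1M per native fibre-channel --
# two native links exceed the aggregate of the 104-FICON benchmark.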

In addition, no real CKD has been manufactured for decades; CKD is
simulated on industry-standard fixed-block disks. It is possible to
have high-performance server blades running native fibre-channel with
native fixed-block disks, eliminating the enormous FICON and CKD
simulation inefficiencies.

A related z196 I/O throughput number: all 14 SAPs running at 100% busy
peak at 2.2M SSCH/sec ... however, the recommendation is that SAPs be
limited to 75% busy, or 1.5M SSCH/sec.

I have yet to see equivalent numbers published for EC12 or z13. EC12
press has been that going from z196 @ 50 BIPS processing to EC12 @ 75
BIPS processing (50% more processing) claims only 30% more I/O
throughput. The z13 quote has been 30% more processing than EC12 (with
40% more processors than EC12).
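
The same kind of ratio arithmetic applies to the processing-vs-I/O claims; the inputs below are just the BIPS and percentage figures quoted above.

# Simple ratios from the quoted figures: z196 at 50 BIPS to EC12 at 75
# BIPS (but only a claimed 30% more I/O), and z13 at 30% more processing
# than EC12 while having 40% more processors.

z196_bips, ec12_bips = 50, 75
print(f"EC12 vs z196 processing: +{(ec12_bips / z196_bips - 1) * 100:.0f}%")  # +50%
print("EC12 vs z196 claimed I/O: +30%  (I/O growth lags processing growth)")

z13_processing_ratio = 1.30   # 30% more processing than EC12
z13_processor_ratio = 1.40    # 40% more processors than EC12
per_processor = z13_processing_ratio / z13_processor_ratio
print(f"z13 per-processor throughput vs EC12: {per_processor:.2f}x")          # ~0.93x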

Note that while fibre-channel wasn't originally designed for the
mainframe, it was designed for non-mainframe server configurations
(that tend to run a few thousand dollars); SATA's design point is the
$500-$800 PC. There are a lot of throughput differences between the
consumer $500-$800 PCs and the non-mainframe server blades that are
built for more heavy-duty processing.

-- 
virtualization experience starting Jan1968, online at home since Mar1970
