On 20 Jul 2007 23:35:18 -0700, in bit.listserv.ibm-main you wrote:

>----- Original Message ----- 
>From: "Timothy Sipples" <[EMAIL PROTECTED]>
>Newsgroups: bit.listserv.ibm-main
>To: <[email protected]>
>Sent: Friday, July 20, 2007 10:43 PM
>Subject: Re: PSI MIPS (was: Links to decent 'why the mainframe thrives'
>article)
>
>
>>Re: Supposed factor improvements over time in the integer performance of
>>processors: some of the numbers in this discussion are faulty, or at least
>>misleading.  It has to do with cores versus chips.
>
>>Dean, with all due respect, no matter how much you try to fuzz it, they're
>>not directly comparable.  The x86 architecture going to dual and now
>>quad cores was the only way x86 engineers could simulate a Moore's Law
>>improvement.  But the dirty little secret is that two cores most definitely
>>do not equal doubling the clock speed of a single core.  I think your
>>math is pretending otherwise, but that's not the real world of business
>>computing.
>
>I didn't do any math.  I reported SPECint numbers - for both single core and
>dual core.  The numbers I provided were single processor, single core
>results up to June 2006.
>
>>  Another dirty little secret is that today's typical X86
>>software is lousy at taking advantage of multi-cores.  And yet another
>>dirty little secret is that almost all software vendors charge more for
>>multi-core, so moving to the supposedly higher performance multi-core
>>design might actually raise your cost of computing.  (This is a very real
>>problem now.  Single core processors are still in demand, especially for
>>light duty test servers, development servers, branch servers, and
>>education/training servers, in order to minimize the cost of the software.)
>
>Why is this relevant to the discussion, except as a way to again move it
>away from the question of processor performance?  I understand the desire to
>defend the faith at all costs - but this is just a simple little issue.  If
>processor performance doesn't matter, then why is the fight to defend it so
>fierce?   Either mainframe CPUs are slower, or they are not.   Instead of
>all these 'dirty little secrets', and 'leading technology' arguments that
>have nothing at all to do with the issue, except to widen the discussion so
>it can be 'won', we either present the figures and deal with them, or just
>agree to disagree.

I suspect that slower or faster may depend on workload.  If there is a
lot of decimal arithmetic (native on z, simulated in part or in whole on
Intel, Power and RISC), the mainframe will be a lot faster for the
arithmetic part.  On the other hand, IBM came out with XPLINK and
other tweaks because C/C++ performance was so bad on zSeries.  I
would like to see a test with optimized COBOL and a web server on the
various platforms.  A COBOL and DB2 benchmark across platforms
would also be useful.  As someone who likes COBOL and z, I am
dismayed by the probable demise of both, in part because of what I
believe to be bad management of the two products.
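The decimal-arithmetic point above can be sketched in Python, using the standard decimal module as a stand-in for the software-simulated decimal arithmetic that non-z platforms fall back on.  This is an illustration of why financial code wants decimal semantics, not a cross-platform benchmark:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so repeated
# addition drifts -- the result is close to, but not exactly, 100.0.
# That kind of drift is unacceptable in a financial ledger.
binary_total = sum(0.10 for _ in range(1000))
print(binary_total == 100.0)            # False

# Software decimal arithmetic is exact, but each operation costs many
# machine instructions, whereas z hardware executes packed-decimal
# operations natively -- hence the workload-dependent speed difference.
decimal_total = sum(Decimal("0.10") for _ in range(1000))
print(decimal_total == Decimal("100.00"))  # True
```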
>
>>Oh, there's another dirty little secret.  Execution errors are becoming
>>more frequent as clock speeds increase, temperatures rise, and densities
>>shrink.  Keeping those electrons flowing in the right places is getting
>>tougher, and more often they're leaping out of their little cages, resulting
>>in two plus two not equalling four.  This is most unacceptable in the
>>financial transaction processing world, for example, which is why IBM
>>mainframes protect against execution errors.  It's yet another metric
>>SPECint doesn't seem to report, the long-term processor error rate.
>
>This, of course, is a red herring.  We've already had Tom Marchant claim
>that IBM is leading in process technology, and now we are hearing that these
>improvements are causing increased errors that are unacceptable for
>mainframes.  This is typically called FUD.
>

Since a large percentage of PCs are sold with non-parity, non-ECC
memory, I doubt that anyone knows the true error rate of Intel
processors.  I wonder how many glitches attributed to Windows are
actually hardware problems.  It would be instructive to know how many
times a processing error is actually detected by the z hardware.
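The parity point above can be sketched with the simplest possible scheme, a per-byte even-parity bit.  Real ECC memory uses stronger SECDED Hamming codes, and z hardware adds checking and retry in the processor itself, so this Python sketch only illustrates the weakest case: detection without correction.

```python
def parity_bit(byte: int) -> int:
    """Even-parity check bit for one byte.

    Non-parity memory stores no such bit at all, so a flipped bit
    is silently returned as valid data.
    """
    return bin(byte).count("1") % 2

stored = 0b1011_0010
check = parity_bit(stored)          # computed when the byte is written

# A single bit flipped by a transient fault while the byte sat in memory:
corrupted = stored ^ 0b0000_1000

# On read-back, parity detects the mismatch -- but cannot say which
# bit flipped, nor fix it.  SECDED codes add that correction ability.
print(parity_bit(stored) == check)      # True  (clean read)
print(parity_bit(corrupted) == check)   # False (error detected)
```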
>
>>If you want to look at integer performance on a benchmark, stick to per
>>core numbers if you're comparing cores.  And you'll discover that processor
>>engineers are struggling to increase core speeds, and Moore's Law has
>>probably stalled already.  Maybe that's why Intel is cutting back on R&D?
>>:-)
>
>More FUD.  As I said - I compared single chip, single core performance.
>
>>This multi-core problem is not new to IBM.  The solutions (plural) require
>>a total system design perspective, including software.
>
>Such as Linux and open source.  Yep, only IBM has the answer.   The FUD gets
>more intense.
>
>I think John Gilmore is right - this thread has probably run its course.
>I'm a staunch mainframe advocate, but I think it's OK to give credit where
>it is due.  I haven't seen anything to rebut the notion that mainframe
>processors are slower than other architectures, and it doesn't seem like we
>are going to get there from here.
>
>
>>Re: Token-Ring and Ethernet, yes, really lousy analogy.  The progression in
>>networking technology mainly had to do with the emergence of network
>>switching, effectively obsoleting both Ethernet and Token-Ring.  It had
>>nothing in particular to do with Ethernet getting faster, because
>>Token-Ring did, too (4, 16, 100 Mbps).
>
>Actually, according to the presenters it had *everything* to do with it.
>The point was that Ethernet was *cheap* and ubiquitous.  Engineers could
>ratchet up the speed of the Ethernet network to overcome the inefficiency
>cheaper than they could ratchet up Token-Ring speeds, and customers didn't
>care about efficiency.
>
>The reason I brought it up was because of the efficiency argument - which
>is, again, not the real issue.  The real issue is the economics, and that
>was the point of the analogy.
>
>>And now we come full circle, because
>>guess what's inside even the latest System z9 mainframe?  Yes, PCI, albeit
>>far enhanced from the original.  You can buy CryptoExpress2 adapters, for
>>example, to go in those slots.
>
>I believe that the lesson there is that the economics determines the
>'winner' in the market.  Commodity components are finding their way into
>mainframes, because that is necessary in order to compete.  Those commodity
>components, because of the amount of money invested in the technology, end
>up faster than the specialty components they replace.  Not because they are
>better designed, but because the competition in the market they are used in
>is intense.  The point being that it isn't unusual for a 'cheap' commodity
>part to end up replacing the highly efficient, well engineered specialty
>part because it turns out to be both cheaper *and* faster.  And this is what
>my original argument boils down to - it has been suggested to me by people
>who are involved with these things that the volume parts, in particular x86,
>have overtaken and will inexorably overtake everything else in performance.
>At the moment POWER is the exception, from what I understand - and Intel
>keeps throwing money at Itanium to keep it competitive, but who knows how
>long that will last.
>
>Anyway, thanks to all who provided links and data.  That made the thread
>useful to me, at least, even if it annoyed many others.
>
>Regards,
>   Dean
>
Clark Morris, semi-retired MVS person

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
