Re: URGENT! really low performance. A related question...
Lucius wrote: "Forgive me for being dumb here, but I'd like to ask How? If you're sharing a VM minidisk among several Linux guests, how can you update the contents without having all of the guests brought down?" You start with 2 LPARs and 2 VMs; each has a set of shared Linux disks at whatever level. Each can use the full capacity of the machine (sharing engines, channels, OSAs, etc.). When you take one side down to upgrade it, the other takes on the load. You can use one VM and 2 sets of Linux disks and share memory if constrained in that manner, but then VM outages take you down. The outage time is essentially a failover time, which depends on what you do with the data. Being an IBM z guy, I would say that to maximize uptime the data belongs in a z/OS data sharing sysplex, where the state and lock structures are kept in redundant coupling facilities, essentially eliminating the failover time there, but any failover mechanism can be used. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794 Lucius, Leland [EMAIL PROTECTED] wrote on 02/19/2003 11:39 AM: "With disk sharing and VM, the apparent outage for maintenance of Linux can be virtually eliminated." Forgive me for being dumb here, but I'd like to ask How? If you're sharing a VM minidisk among several Linux guests, how can you update the contents without having all of the guests brought down? Thanks, Leland
Re: URGENT! really low performance. A related question...
I met a guy from Lebanon once. He told me that they installed one of their systems on the top floor of the building. Nobody would go up there to service the machine because in Lebanon the bombs hit the top floors first. The machine was moved to the basement. -Original Message- From: John Summerfield [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 20, 2003 4:12 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... He never told me how the problem was cured. Maybe some more shielding? I seem to remember some customers who were advised to move their mainframe to lower in a tall building... the concrete was an effective barrier to the cosmic rays. Department of Social Security, Brisbane, Queensland had its mainframe in the basement. Came the Brisbane Flood, the basement was awash. With sewerage. Talk about a stink! -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question n...
I have been trying to resist posting to this thread, but you keep tempting me. I learned BASIC on an HP 2000E system (2100A computer, 32KB of RAM, 5MB hard drive). I learned Fortran on an HP DOS-M system (the same 2100A hardware). I learned Assembly language on a PDP-8E with 4KW (12 bits each). I learned COBOL on an IBM System/3 model 15D with a whopping 256KB of RAM. All in the same school year. Sure sounds like a lot now, but it was fun at the time. A couple of years later I did some Assembly language programming on the System/3, and on an IBM System/7. Bet not many of you have ever heard of a System/7. I also did some Assembly language programming on the 2100A. Now that I think about it, I guess it is logical that I like C so much, since I can do most of what I did in Assembly language in C. -Original Message- From: David Boyes [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 20, 2003 5:05 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... Or a 4K-word PDP-8 (12 bits, and microprogrammable for side effects from every instruction!). Or my (still functional) Altair with a whopping 512 bytes. -- db - Original Message - From: Ferguson, Neale [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Thursday, February 20, 2003 10:24 AM Subject: Re: URGENT! really low performance. A related question... 16K!? Luxury!! You must have had the Level 2. -Original Message- People who never had to write on a TRS-80 with 16K of memory NEVER had to learn how to cram EVERY last iota of efficiency into their code.
Re: URGENT! really low performance. A related question...
One cannot make such blanket statements. JAVA is a language, not a workload. Yes it does have characteristics that cause it to have long path lengths. However, it also has characteristics that trash caches, particularly if the programmer takes OO programming seriously. Very true, although observation indicates that there are a lot of really lazy programmers writing in Java, or ones that are simply ignorant of the effects of certain programming practices. There are also a lot of programmers that use the absolutely horrific crap that comes out of most of the integrated development environments these days without ever looking at the impact of the code on the environment. I would have thought that in the particular case of Java, it's the impact of the JVM on the cache that matters, and that few programmers know enough (or should know enough) to have much impact on cache. I added "should" because, with the advent of the next JVM, that knowledge would become obsolete and positively harmful. I note that new Xeons are about to descend on us: http://www.theregister.co.uk/content/61/29395.html Next up the Xeons. A uniprocessor Xeon 3GHz with a whopping 1MB cache will ship in Q3. Powers are getting faster, too: http://www.theregister.co.uk/content/3/28596.html The Power5 chip will be implemented in a 0.13 micron process, just like the Power4+ chip that was just announced in the pSeries 650 midrange server a month ago. Those Power4+ chips are now offered at 1.2GHz and 1.45GHz, and are expected to reach 1.7GHz or maybe even 1.8GHz and http://www.theregister.co.uk/content/61/29387.html Servers based on IBM's forthcoming Power5 chip will be four times faster than current Power4 machines. However, I couldn't find just what "four times faster" means. Forced use of 'lint' had its moments -- at least it complained about the egregiously stupid stuff. Java just happens to be less efficient on all fronts than earlier languages, but then Fortran is less efficient than assembler.
Interesting side note: Fortran is around 50 years old (+/- a few). It's gotten more intensive study by the compiler optimization wonks than any other language. Talk about geriatric research! 8-) -- db -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
Why can't IBM price their software cheaper as the hardware costs come down? We had a 3090-600S until 4 years ago. When that machine first came out, the list price was $11,000,000. A little over a year ago, we got an MP3000-H50, whose purchase price is less than $200,000. Both machines are very close in performance. That's a 50 to 1 reduction in hardware cost. If IBM priced their software the same way, it would eventually become almost as popular as Windows. (Well, maybe not quite). There are two evils. IMO the greater one is the dependence of graduated charges on system size rather than application size. The second is IBM's obsession with the top end - which has caused them to keep their top 2000 customers but lose as many as 25,000 at the bottom end. I still occasionally hear the 35,000 VSE licenses thing. Internally, IBM claps itself on the back for reducing mainframe software charges 20% year on year. Despite many requests, I've never seen the model or the assumptions upon which this is based - but I suspect it's some sort of esoteric case using just z/OS and CICS/DB2 on a massive Sysplex - where it's true that the incremental cost of MSUs at the top end is MUCH lower than it was ten years ago. Doesn't help the 10 MIPS guy. Those that are left, anyway. Instead they face a continual squeeze - trying to keep within their existing system despite workload growth and path length changes caused by things like, e.g., LE under VSE. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
Leland: One method is to duplicate the disk and apply your changes to the copy. When you are happy that what you have is what you want, change the User Directory entries of all the guests who use the shared disk to LINK to the new one, or simply interchange the addresses of the two minidisks in the owning user's directory entry. As the sharing Linux guests recycle over the next period of time, their use of the old disk will fade away until eventually it becomes available to hold the next version of its contents. The QUERY LINKS command will show you who is using which disk. This has the advantage of letting you customize it at your leisure and put new applications into production only when you're ready. Romney On Wed, 19 Feb 2003 10:39:25 -0600 Lucius, Leland said: "With disk sharing and VM, the apparent outage for maintenance of Linux can be virtually eliminated." Forgive me for being dumb here, but I'd like to ask How? If you're sharing a VM minidisk among several Linux guests, how can you update the contents without having all of the guests brought down? Thanks, Leland
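Romney's swap procedure can be sketched against a hypothetical CP directory fragment (the user name, device addresses, and volume label below are invented for illustration, not taken from any real system):

```
* Owning user's entry in the CP directory (hypothetical names,
* addresses, and volume label):
USER LNXOWNER ...
* 0191 = current shared disk, 0291 = copy being updated
 MDISK 0191 3390 0001 0500 LNX001 MR
 MDISK 0291 3390 0501 0500 LNX001 MR
*
* Each sharing Linux guest links read-only to the production disk:
 LINK LNXOWNER 0191 0191 RR
*
* When the copy at 0291 is ready, interchange the 0191/0291 addresses
* in LNXOWNER's entry (or repoint the guests' LINK statements); each
* guest picks up the new contents the next time it recycles, and
* CP QUERY LINKS shows who still holds a link to the old disk.
```

The point of the two-address trick is that nothing ever has to be copied onto a disk that is in use; only directory pointers change.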
Re: URGENT! really low performance. A related question...
Actually, it's worse than Peter describes. The speed of light (or electricity) in a vacuum is 300,000 km/sec, but in silicon dioxide it drops to a little over 205,000 km/sec (the index of refraction of silicon dioxide is between 1.46 and 1.48). So in a 1 GHz processor, the electrical signal can only travel about 20 centimeters, and in the 2 GHz case about 10 centimeters. Dave Jones Sine Nomine Associates Houston - Original Message - From: Peter Stammbach [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, February 19, 2003 10:48 PM Subject: Re: URGENT! really low performance. A related question... (Sorry, I'm not subscribed to this list and have no clue if this reply appears as I think it should) Joe Temple made some excellent points about CPU cycles and I just would like to add another thought which I think is extremely important (especially in large SMP systems). It's the speed of electricity (or light). Electricity travels at less than 300,000 km/sec, or 300,000,000 meters/sec, or 300,000,000,000 mm/sec. So if you start to strip off zeros with kHz, MHz, GHz, you find out that an electric signal travels 30 centimeters in the time a 1 GHz processor uses for one processing cycle (15 centimeters on a 2 GHz processor). So packaging these processors as close as possible, and packaging memory and other vital components as close as possible, becomes much more important than the speed of CPU cycles. The IBM zSeries does an excellent job here, having 20 CPUs (16 for the OS, 3 for the I/O subsystem and 1 as a spare) on a 13 by 13 cm substrate, all sharing the same L2 cache. Compare this to high end Unix systems with their CPU boards METERS apart. So in my opinion, packaging is getting more important than GHz, because the speed of electricity (or light) seems to become the biggest enemy of performance improvements. This is certainly not true for single CPU bound tasks. But it is definitely true for most commercial workloads.
Kind regards, Peter Stammbach Mark Drvodelsky wrote: "But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed?" The answer to your question has to do with how chip real estate is used. In a zSeries microprocessor the primary usage of area is for large L1 caches and error detection/recovery hardware. Basically, increases in cache size result in decreases in clock rate. This is because there is more load on the critical signals. Secondly, to date the zSeries microprocessor pipeline does not do superscalar processing. That is, it finishes 1 instruction per cycle at best. This is because it takes considerably more work and hardware to do mainframe style error recovery functions when more than 1 instruction can complete in a cycle. While superscalar execution does not help with clock speed, it does help with CPU intense measurements like SPECint. However, since the cache is larger, the zSeries will wait for memory less often than other machines. Metrics like SPECint and MHz ignore cache misses. So the question becomes how much are the caches missing? The more they miss, the better the zSeries looks. This is very workload dependent. One driver of cache misses is context switches; another is I/O. If you attempt to make an Intel server very busy, the cache miss rate will climb, causing throughput to saturate, unless the work is very CPU intense and the cache working set per transaction or per user is very small. The reason Robert Nix's print server debacle occurred is that IBM made the mistake of treating Samba file/print as a single type of workload. We didn't understand at the time that a print server can behave like a network-to-network protocol server. These servers actually move very little data through the CPU. Such a machine has very little context switching, and the I/O is network to network, which will actually drive very little data through the caches. The combination makes the workload CPU intense and, if busy, a bad candidate for Linux/z. By contrast, a Samba file server can be doing enough disk-to-network I/O, changing blocks to packets, to push much more data through the caches. This can cause distributed servers to become I/O and cache bound. Samba can be either CPU or I/O intense, and the single context makes the CPU intense workloads unattractive for z, particularly if the machines are busy. So the answer to your question is that we could build a zSeries microprocessor which is as fast as any other processor, but to do so would cause us to lose the fundamental strengths in context switching, data caching and I/O. There is always a trade off between speed and capacity. zSeries favors capacity; Intel favors speed. How much L1 cache should be given up to increase the clock rate? How much RAS and recovery function should be given up to improve SPECint? We have seen this situation improve over time.
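The per-cycle distances quoted in this subthread (30 cm per cycle at 1 GHz in vacuum, roughly 20 cm in silicon dioxide) are easy to check; a back-of-the-envelope sketch, assuming simple c/n propagation with the 1.46 refractive index given above:

```python
# How far an electrical signal can travel in one clock cycle,
# assuming it propagates at c/n for a medium with refractive index n.
C_VACUUM_KM_S = 300_000  # speed of light in vacuum, km/sec

def cm_per_cycle(clock_hz, refractive_index=1.0):
    """Centimeters a signal travels during one clock cycle."""
    speed_km_s = C_VACUUM_KM_S / refractive_index
    return speed_km_s * 1e5 / clock_hz  # 1 km = 1e5 cm

# Vacuum: ~30 cm at 1 GHz, ~15 cm at 2 GHz.
# Silicon dioxide (n ~ 1.46): ~20.5 cm at 1 GHz, ~10.3 cm at 2 GHz.
for ghz in (1, 2):
    print(ghz, "GHz:",
          round(cm_per_cycle(ghz * 1e9), 1), "cm (vacuum),",
          round(cm_per_cycle(ghz * 1e9, 1.46), 1), "cm (SiO2)")
```

These are upper bounds: real on-chip wires are slower still because of RC delay, which is part of why larger caches cost clock rate.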
Re: URGENT! really low performance. A related question...
Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials ? Alpha. In the substrates. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
On Thu, 2003-02-20 at 01:00, John Alvord wrote: And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s. Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials?
Re: URGENT! really low performance. A related question...
On Thu, 2003-02-20 at 14:14, Dave Jones wrote: Actually, it's worse than Peter describes. The speed of light (or electricity) in a vacuum is 300,000 km/sec, but in silicon dioxide it drops to a little over 205,000 km/sec (index of refraction of silicon dioxide is between 1.46 and 1.48). So in a 1 Ghz processor, the electrical signal can only travel about 20 centimeters and in the 2 Ghz case about 10 centimeters. This assumes you don't mind having multiple bits on the wire at the same time but different distances down it.
Re: URGENT! really low performance. A related question...
Alan Cox wrote: "Was it alpha or gamma emitters they got in their materials?" It was alpha particles, and it was not unique to IBM. Basically it started with dynamic RAM. A passing particle drains charge from the memory cell, causing soft errors. ECC became mandatory for dynamic RAM when the 64K bit chips were introduced, because the charge held was small enough that the drainage changed the cell's state. The IBM 8130 was the first machine to ship with 64Kbit chips (yes, that's K) and we had to scramble to retrofit ECC into the design. The 4381 shipped around the same time and had similar problems. As things got smaller, static memory also started to be affected, and we started to see ECC on caches as well as main storage. I don't know this for a fact, but I suspect Sun's L2 cache problems were related to soft errors. One other wrinkle: IBM's use of flip chip did make the problem more pronounced, because the active chip area was on the side closest to the substrate, which is an emitter. The wire bond technique used by other vendors mitigated but did not eliminate the problem, because the emitted particles had to get through the whole chip to hit the memory cells. However, the soft error rate still indicated the use of ECC, particularly as memory got denser. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794 Alan Cox [EMAIL PROTECTED] wrote on 02/20/2003 10:23 AM: On Thu, 2003-02-20 at 01:00, John Alvord wrote: And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s. Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials?
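The ECC retrofit Joe describes rests on the same idea as a textbook single-error-correcting code. A minimal Hamming(7,4) sketch (an illustration of the principle, not IBM's actual circuit) shows how one flipped bit from an alpha hit is located and repaired:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single
# bit flip (a "soft error") can be located and corrected.

def encode(d):                       # d: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def correct(c):                      # c: 7-bit codeword, possibly 1 flip
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # position of the flipped bit, or 0
    if syndrome:
        c[syndrome - 1] ^= 1         # repair the soft error in place
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                         # simulate an alpha-particle bit flip
print(correct(word))                 # data comes back intact
```

Real SECDED memory adds one more overall parity bit so that double-bit errors are at least detected; the principle is the same.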
Re: URGENT! really low performance. A related question...
On Thu, 20 Feb 2003 15:23:23 +, Alan Cox [EMAIL PROTECTED] wrote: On Thu, 2003-02-20 at 01:00, John Alvord wrote: And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s. Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials? It has been 15 years since I talked to the researcher involved. He talked about some contamination from a granite purification byproduct... something that had small amounts of uranium in the material... It only occurred in one step of the process. It had been there all along but showed up as a problem as density increased. I Am Not A Scientist grin john
Re: URGENT! really low performance. A related question...
People who never had to write on a TRS-80 with 16K of memory NEVER had to learn how to cram EVERY last iota of efficiency into their code. Ryan Ware [EMAIL PROTECTED] wrote on 02/20/2003 09:16 AM: I think some of the performance strikes against Java are that a lot of schools have standardized on it as their main teaching language, with very little emphasis on data structures and lower level languages. Hence the students don't grasp how computers work, so they write inefficient code. I think Moore's law in some ways propagated this, sadly. -Original Message- From: David Boyes [mailto:[EMAIL PROTECTED]] Sent: Wednesday, February 19, 2003 8:31 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... One cannot make such blanket statements. JAVA is a language, not a workload. Yes it does have characteristics that cause it to have long path lengths. However, it also has characteristics that trash caches, particularly if the programmer takes OO programming seriously. Very true, although observation indicates that there are a lot of really lazy programmers writing in Java, or ones that are simply ignorant of the effects of certain programming practices. There are also a lot of programmers that use the absolutely horrific crap that comes out of most of the integrated development environments these days without ever looking at the impact of the code on the environment. Forced use of 'lint' had its moments -- at least it complained about the egregiously stupid stuff. Java just happens to be less efficient on all fronts than earlier languages, but then Fortran is less efficient than assembler. Interesting side note: Fortran is around 50 years old (+/- a few).
It's gotten more intensive study by the compiler optimization wonks than any other language. Talk about geriatric research! 8-) -- db
Re: URGENT! really low performance. A related question...
On Thu, 2003-02-20 at 01:00, John Alvord wrote: And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s. Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials? I recall back when we were getting round to 256K chips that cosmic rays were becoming a problem and that chips weren't going to be made much denser. What happened? -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
Or 360 COBOL to run in a 32K DOS (mainframe, not PC) partition. James Melin [EMAIL PROTECTED] wrote on 02/20/2003 09:22 AM: People who never had to write on a TRS-80 with 16K of memory NEVER had to learn how to cram EVERY last iota of efficiency into their code.
Re: URGENT! really low performance. A related question...
People who never had to write on a TRS-80 with 16K of memory NEVER had to learn how to cram EVERY last iota of efficiency into their code. Try writing a WAP application. The display unit is the card. Cards can be packed into decks. Before transmission to the browser, decks are compiled into bytecode. The clever bit? Most mobile phones have quite small bytecode buffers. The dumb bit? You can't easily tell how much bytecode will be generated by a particular deck. The really nice bit? None of the handset manufacturers document their limits. The irritating bit? You can't tell what sort of handset you're talking to anyway. Typical buffers are around 5KB to 7KB. 1400 bytes in a Nokia 7110. Ever wonder why WAP hasn't caught on? -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
There is no relation between the cost of designing and building hardware and the cost of developing and maintaining software. Which is why IBM's gross margin on hardware is 31% and its gross margin on software is 87%. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
Actually, it's worse than Peter describes. The speed of light (or electricity) in a vacuum is 300,000 km/sec, but in silicon dioxide it drops to a little over 205,000 km/sec (the index of refraction of silicon dioxide is between 1.46 and 1.48). So in a 1 GHz processor, the electrical signal can only travel about 20 centimeters and in the 2 GHz case about 10 centimeters. Why do people keep referring to the speed of light? In what I learned of electronics, signals are carried by electrons travelling round in conductors (and semiconductors). AFAIK electrons are quite a bit slower than photons. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
I think that chip makers found a way to get purer materials to make the substrates. I am not really up on this, but I think they refine the silicon with a process that removes more of the alpha emitters. -Original Message- From: John Summerfield [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 20, 2003 7:50 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... On Thu, 2003-02-20 at 01:00, John Alvord wrote: And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s. Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials? I recall back when we were getting round to 256K chips that cosmic rays were becoming a problem and that chips weren't going to be made much denser. What happened? -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
On Thu, 20 Feb 2003 23:49:38 +0800, John Summerfield [EMAIL PROTECTED] wrote: On Thu, 2003-02-20 at 01:00, John Alvord wrote: And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s. Gamma seems odd, it doesn't interact much most times, now alpha emitters I could believe. Was it alpha or gamma emitters they got in their materials? I recall back when we were getting round to 256K chips that cosmic rays were becoming a problem and that chips weren't going to be made much denser. What happened? The cosmic ray scientist I talked with at Research in the middle 1980s said they spotted a pattern of No Trouble Found on channel cache memory. The frequency was doubled in Denver - which has twice the number of cosmic ray bursts compared to sea level. Eventually IBM set up a several-month-long trial in a high altitude ghost town. The 308X was set up with some PC controllers which monitored for these transient conditions. At the same time, they arranged to get records of cosmic ray bursts at a (New Mexico?) radio observatory. The occurrence of transient channel cache memory errors matched the radio observatory bursts quite closely. He never told me how the problem was cured. Maybe some more shielding? I seem to remember some customers who were advised to move their mainframe to lower in a tall building... the concrete was an effective barrier to the cosmic rays. john alvord
Re: URGENT! really low performance. A related question...
John Summerfield wrote: "Why do people keep referring to the speed of light? In what I learned of electronics, signals are carried by electrons travelling round in conductors (and semiconductors). AFAIK electrons are quite a bit slower than photons." Well, according to a guy named James Clerk Maxwell, light is electromagnetic radiation. Electrical current (the flow of charge) in a conductor is induced by an electromagnetic wave. At low frequency and short distance the idea of voltage and current works fine and the wave is ignored. As things get faster or longer (i.e., power lines) the conductors need to be treated as wave guides more than as conductors. As far as the math goes, I believe that you have to start using transmission line characteristics as soon as the delay on the line matters. The conductors in a modern chip are not treated as simple wires with no impedance or delay, but are modeled with inductance, capacitance, and resistance, in much the same way that a transmission line is modeled. An upper bound for electromagnetic wave speed is c, unless you get into some really hairy quantum physics paradoxes. (Read Schrödinger's Kittens; I forget the author's name.) On the other hand, practical physical limitations slow waves down. How close to c you get depends on the medium and practical things like the need to dampen reflections on the line. That is, the fastest line is useless if the signal on it rings enough to prevent further use of the line. Wow, I thought I had forgotten all that stuff 30 years ago when I burned my fields and waves book... Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794
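Joe's transmission-line point can be put in rough numbers: on a lossless line, the propagation velocity is about c divided by the square root of the dielectric's effective relative permittivity. A small sketch (the permittivity values are generic textbook figures, not measurements of any particular machine):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def line_velocity(er):
    """Approximate signal velocity (m/s) on a transmission line whose
    dielectric has effective relative permittivity er (lossless
    approximation; real lines are slower still)."""
    return C / math.sqrt(er)

# Vacuum (er = 1): the full speed of light.
# A typical FR-4 circuit board (er ~ 4.4): a bit under half of c.
for er in (1.0, 4.4):
    print(er, round(line_velocity(er) / C, 3))
```

This is the same c/n slowdown Dave Jones quoted for silicon dioxide, just expressed through permittivity instead of refractive index.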
Re: URGENT! really low performance. A related question...
With disk sharing and VM, the apparent outage for maintenance of Linux can be virtually eliminated. Forgive me for being dumb here, but I'd like to ask How? If you're sharing a VM minidisk among several Linux guests, how can you update the contents without having all of the guests brought down? Thanks, Leland
Re: URGENT! really low performance. A related question...
There is no relation between the cost of designing and building hardware and the cost of developing and maintaining software. -Original Message- From: Eric Bielefeld [mailto:[EMAIL PROTECTED]] Sent: Tuesday, February 18, 2003 12:33 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... Bill, You make some good points. Phil Payne regularly posted on this issue on IBM-Main. If you look at the cost of hardware and how much it has gone down, and compare that to the price of the software, something is rotten. IBM isn't going to lose big insurance companies, big banks, and other large corporations; however, not many new customers, when looking at the price of z/OS, are going to buy mainframes and put z/OS on them. They did a good thing with z/OS lite - if you don't run Cobol or CICS and maybe a few other things, it costs about 10% of the regular z/OS. Why can't IBM price their software cheaper as the hardware costs come down? We had a 3090-600S until 4 years ago. When that machine first came out, the list price was $11,000,000. A little over a year ago, we got an MP3000-H50, whose purchase price is less than $200,000. Both machines are very close in performance. That's a 50 to 1 reduction in hardware cost. If IBM priced their software the same way, it would eventually become almost as popular as Windows. (Well, maybe not quite). Eric Bielefeld Sr. MVS Systems Programmer PH Mining Equipment Milwaukee, WI 414-671-7849 [EMAIL PROTECTED] [EMAIL PROTECTED] 02/18/03 01:38PM I have more of a problem justifying migrating an existing group 38 system to an entry level group 38 z800 or to a group 80 entry level system on a z900 when my management compares the software licensing costs from various vendors we use to process what is essentially a static workload. Every time a new mainframe hardware platform is announced, the entry level group is higher in performance and associated software costs than the previous generation.
How many small to medium mainframe shops did IBM lose because of the zSeries software pricing differences? What about third party vendors? How many of them have lost clients because of tiered pricing? Sure, z/VM is lower in cost on zSeries and Linux is virtually free, but what about those shops running CA or other vendor products looking at a two or more tier jump in pricing to process the same workload on a new machine? Why not say the entry level is the lowest processor model and make it a group 10 no matter what the MIPS rating, and leave the software pricing alone? How many shops would keep or buy new mainframes if you only had to pay group 10 pricing for what is now a group 38 box? How many shops would look for new workloads to migrate to the mainframe to utilize the spare horsepower? The idea is to grow the market, not stunt it with short-term profits. An investment in any mainframe is for long term processing requirements. Those mainframe clients want to stay around and not have the data center viewed as a purveyor of the platform du jour or fad pushers. My $0.02USD... Bill Stermer ACS - City of Anaheim + This electronic mail transmission contains information from P H Mining Equipment which is confidential, and is intended only for the use of the proper addressee. If you are not the intended recipient, please notify us immediately at the return address on this transmission, or by telephone at (414) 671-4400, and delete this message and any attachments from your system. Unauthorized use, copying, disclosing, distributing, or taking any action in reliance on the contents of this transmission is strictly prohibited and may be unlawful. +
Re: URGENT! really low performance. A related question...
Actually, the flow of electrons in a semiconductor doesn't really carry the signal. It carries the current, to be sure, but the signal is carried by the electric field of the electron-hole pairs in the semiconductor. The changing fields propagate at whatever the speed of light is inside the semiconductor, and as Mr. Summerfield says, limited by waveguide effects. The actual speed of flow of the electrons is called the drift velocity and is quite slow, measured in centimeters per second, depending on the semiconductor and the temperature. And I agree with you, John. It's been entirely too many decades since I did my Master's in Solid State Physics. I'm surprised I remember all this stuff. A friend will help you move. A really good friend will help you move the body. Gordon W. Wolfe, Ph.D, (425) 865 - 5940 VM Technical Services, the Boeing Company -Original Message- From: Joseph Temple [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 20, 2003 9:48 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... John Summerfield wrote Why do people keep referring to the speed of light? In what I learned of electronics, signals are carried by electrons travelling round in conductors (and semiconductors). AFAIK electrons are quite a bit slower than photons. Well, according to a guy named James Clerk Maxwell, light is electromagnetic radiation. Electrical current (the flow of charge) in a conductor is induced by an electromagnetic wave. At low frequency and short distance the idea of voltage and current works fine and the wave is ignored. As things get faster or longer (i.e. power lines) the conductors need to be treated as waveguides more than as conductors. As far as the math goes, I believe that you have to start using transmission line characteristics as soon as the delay on the line matters.
The conductors in a modern chip are not treated as simple wires with no impedance or delay but are modeled with inductance, capacitance, and resistance, in much the same way that a transmission line is modeled. An upper bound for electromagnetic wave speed is C, unless you get into some really hairy quantum physics paradoxes. (Read Schrödinger's Kittens; I forget the author's name.) On the other hand, practical physical limitations slow waves down. How close to C you get depends on the medium and practical things like the need to dampen reflections on the line. That is, the fastest line is useless if the signal on it rings enough to prevent further use of the line. Wow, I thought I had forgotten all that stuff 30 years ago when I burned my fields and waves book... Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794
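The line-delay point above can be put in rough numbers. A minimal sketch, assuming signals propagate at about half the speed of light in the interconnect (a common rule of thumb; the thread itself gives no figure, only that the speed depends on the medium):

```python
# Back-of-envelope: how far can a signal travel in one clock cycle?
# The 0.5c propagation factor is an assumed rule of thumb, not a
# number from the thread; real values depend on the dielectric.

C = 299_792_458.0          # speed of light in vacuum, m/s
PROPAGATION_FACTOR = 0.5   # assumed fraction of C in the medium

def distance_per_cycle_mm(clock_hz: float) -> float:
    """Distance a signal can cover in one clock period, in millimetres."""
    period_s = 1.0 / clock_hz
    return C * PROPAGATION_FACTOR * period_s * 1000.0

# At 1 GHz a signal covers roughly 150 mm per cycle; at 3 GHz only
# about 50 mm -- comparable to chip and module dimensions, which is
# why wire delay (not just gate delay) starts to bound clock rates.
for ghz in (1, 3):
    print(f"{ghz} GHz: {distance_per_cycle_mm(ghz * 1e9):.0f} mm per cycle")
```

This is why "crank up the clock" is not free: past a point, the delay on the line matters and the conductors have to be treated as transmission lines, exactly as described above.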
Re: URGENT! really low performance. A related question...
I don't really want to defend IBM pricing, but gross margin is not the same as profit. It is much more expensive per customer to maintain software that has a small installed base than software that has a large installed base. The number of bugs found is only slightly less for a small installed base. -Original Message- From: Phil Payne [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 20, 2003 8:47 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... There is no relation between the cost of designing and building hardware and the cost of developing and maintaining software. Which is why IBM's gross margin on hardware is 31% and its gross margin on software is 87%. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
Phil Payne wrote: Doesn't help the 10 MIPS guy. Those that are left, anyway. Instead they face a continual squeeze - trying to keep within their existing system despite workload growth and path length changes caused by things like, e.g., LE under VSE. Agreed. Plus each new release of the operating system(s) requires +10% more CPU just to process the same workload as the previous release. Will you get a price break for the extra cycles needed to process the same workload? I have not seen one yet! So just by staying current you will eventually need to jump up to the next tier level in hardware/software. You gain nothing but spend more for the privilege. Commercial software vendors generate revenue by causing sites to needlessly grow simply to stay current, no matter what platform you choose. Until entry level hardware and entry level software pricing truly mean entry level - no matter how big a box or how much potential work you will be able to do - the cost of doing IT business will increase, until projects like open source begin to greatly impact the commercial vendors' bottom line and they have to reduce pricing to save their business. Bill Stermer ACS - City of Anaheim
Re: URGENT! really low performance. A related question...
Alan Fargusson wrote: I don't really want to defend IBM pricing, but gross margin is not the same as profit. It is much more expensive per customer to maintain software that has a small installed base than software that has a large installed base. The number of bugs found is only slightly less for a small installed base. Diminishing returns in the installed user base can also be shown to come from those who abandon the platform due to spiraling expenses for the same workload. Do you want to sell a single $1 million glass of lemonade or 1 million glasses of lemonade at $1.00 each? Initial sales are not where the market should be focused, but on repeat customers investing in the long term. Great, you have had a technology breakthrough that increases throughput and reduces CPU consumption, but it only comes as a 1000 MIP box, and by the way, your current 10 MIP system will no longer be supported in one year, so pay up. (Greatly exaggerated, I agree.) Now how are you going to keep the installed base that is processing on machines less than 1000 MIPS if you require them to jump all those tiers just to process the same workload? Oh, and by the way, increasing workloads do not necessarily translate into greater profits when that increase is due to a change in technology perpetrated by the vendor. If you want to sell mainframes, then the stigma of vendor-induced rising overhead cost associated with a given static workload has to be banished from the pricing model. If the new release of an operating system or software product requires an increase in horsepower, then give the customer a break, since the vendor is requiring the change. Bill Stermer ACS - City of Anaheim
Re: URGENT! really low performance. A related question...
I am fairly sure that IBM analyzes all these factors when deciding how much to charge for products. It seems to me that the market for mainframe systems has increased slightly over the last few years, although I bet it has decreased this year with the large deficits most government agencies are having. I don't think we disagree. I am just pointing out some facts. -Original Message- From: Bill Stermer [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 20, 2003 11:44 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... Alan Fargusson wrote: I don't really want to defend IBM pricing, but gross margin is not the same as profit. It is much more expensive per customer to maintain software that has a small installed base than software that has a large installed base. The number of bugs found is only slightly less for a small installed base. Diminishing returns in the installed user base can also be shown to come from those who abandon the platform due to spiraling expenses for the same workload. Do you want to sell a single $1 million glass of lemonade or 1 million glasses of lemonade at $1.00 each? Initial sales are not where the market should be focused, but on repeat customers investing in the long term. Great, you have had a technology breakthrough that increases throughput and reduces CPU consumption, but it only comes as a 1000 MIP box, and by the way, your current 10 MIP system will no longer be supported in one year, so pay up. (Greatly exaggerated, I agree.) Now how are you going to keep the installed base that is processing on machines less than 1000 MIPS if you require them to jump all those tiers just to process the same workload? Oh, and by the way, increasing workloads do not necessarily translate into greater profits when that increase is due to a change in technology perpetrated by the vendor.
If you want to sell mainframes, then the stigma of vendor-induced rising overhead cost associated with a given static workload has to be banished from the pricing model. If the new release of an operating system or software product requires an increase in horsepower, then give the customer a break, since the vendor is requiring the change. Bill Stermer ACS - City of Anaheim
Re: URGENT! really low performance. A related question...
On Tuesday 18 February 2003 08:12 pm, John Summerfield wrote: I presume, from what you say, that Java isn't all that wonderful on zSeries? Improved CPU performance may make it so. It really depends on what you're doing with Java, and, as with other languages, how good your code is. David Boyes touched on this, mentioning Java code where the OO-ness of the language hid some really bad performance-related decisions from inexperienced coders. OO is a tool, not a panacea, and its presence does not relieve the programmer of the responsibility to actually understand what the application is *doing* on the system. But there is Java code that is not compute bound, just as there is in other languages. An application that has lots of threads, mostly waiting for inbound socket connections and then doing database queries or other I/O activities in servicing those connections, can still be a good candidate for Java on L/390, for example. The JIT compilers for Java have gotten much better in the past 18 months or so, as well, especially on the S/390 platform. I've seen some pretty darned good results on S/390 Linux with IBM's JDK 1.3 and later. Scott -- - Scott D. Courtney, Senior Engineer Sine Nomine Associates [EMAIL PROTECTED] http://www.sinenomine.net/
Re: URGENT! really low performance. A related question...
This thread seems to have drifted off the original topic and is no longer URGENT! Regards, Jim Linux S/390-zSeries Support, SEEL, IBM Silicon Valley Labs t/l 543-4021, 408-463-4021, [EMAIL PROTECTED] *** Grace Happens ***
Re: URGENT! really low performance. A related question...
Phil Payne wrote: Doesn't help the 10 MIPS guy. Those that are left, anyway. Instead they face a continual squeeze - trying to keep within their existing system despite workload growth and path length changes caused by things like, e.g., LE under VSE. Agreed. Plus each new release of the operating system(s) requires +10% more CPU just to process the same workload as the previous release. Will you get a price break for the extra cycles needed to process the same workload? I have not seen one yet! So just by staying current you will eventually need to jump up to the next tier level in hardware/software. You gain nothing but spend more for the privilege. Commercial software vendors generate revenue by causing sites to needlessly grow simply to stay current, no matter what platform you choose. Until entry level hardware and entry level software pricing truly mean entry level - no matter how big a box or how much potential work you will be able to do - the cost of doing IT business will increase, until projects like open source begin to greatly impact the commercial vendors' bottom line and they have to reduce pricing to save their business. I don't think that's entirely fair. Take Linux. We all use Linux; some of us have the good sense to use it on our desktops too. I used to use Red Hat Linux 3.0.3. I installed on a 486 with 170 Mbytes of disk, 8 Mbytes of RAM (the same system that used to run OS/2. Sort of). For years I had a webserver running RHL 4.2, also installed on an 8 Mbyte 486. I now run my office server on a Pentium II 233, 128 Mbytes of RAM (64 would probably suffice though), running (basically) RHL 7.2. Not so long ago (RHL 6.x) my wife was happily using a Pentium 133, 64 Mbytes, running KDE and StarOffice 5.2. If I install RHL 7.x on a Pentium II with 128 Mbytes of RAM, performance is sluggish, and RHL 8.0 on such a box is pretty terrible. Who has driven this advance? Red Hat? Not really.
While it tips buckets of money into Linux development (and Gnome in particular), to a large extent it (and other vendors) just pick up and package what's available. What's driving it, especially the desktop, is people who think it's currently not good enough, and who have the skills and desire to do something about it. And to give the results away for free. Since they get no money directly out of it, there is no incentive for them to push Linux to be ever more demanding on hardware. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
Do you want to sell a single $1 million glass of lemonade or 1 million glasses of lemonade at $1.00 each? One for a million. Overheads are lower. At my age, retirement would follow, so I wouldn't need repeat business;-) Anyone want one? -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
He never told me how the problem was cured. Maybe some more shielding? I seem to remember some customers who were advised to move their mainframe to lower in a tall building... the concrete was an effective barrier to the cosmic rays. Department of Social Security, Brisbane, Queensland had its mainframe in the basement. Came the Brisbane Flood, the basement was awash. With sewerage. Talk about a stink! -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
Came the Brisbane Flood, the basement was awash. With sewerage. Sewerage = the pipes, pumps and related infrastructure. Sewage = the smelly stuff. I know the difference. Can't type though. :)) Mark Darvodelsky Data Centre - Mainframe Facilities Royal SunAlliance Australia Isn't it time you changed that line in your sig? Or are you holding out for when the bosses recognise the mistake? -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
So the answer to your question is that we could build a zSeries microprocessor which is as fast as any other processor, but to do so would cause us to lose the fundamental strengths in context switching, data caching and I/O. There is always a trade-off between speed and capacity. zSeries favors capacity; Intel favors speed. How much L1 cache should be given up to increase the clock rate? How much RAS and recovery function should be given up to improve SPECint? We have seen this situation improve over time, and IBM will continue to improve its microprocessor design, but zSeries cannot simply abandon strength in large working set workloads to crank up the clock speed and/or instruction rate for workloads with small working sets. This is particularly true when the virtualization and workload management which drive consolidation and mixed workloads are dependent on the very hardware capabilities that would have to be given up. It would be interesting to come up with a model that made some sacrifices to improve CPU performance, not to replace existing systems, but to supplement them. I'm sure some folk would find the tradeoff attractive, particularly as it would be software-compatible. I presume, from what you say, that Java isn't all that wonderful on zSeries? Improved CPU performance may make it so. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
John Summerfield wrote That tells me you weren't current on your maintenance... Software currency is an issue, but there is a very good reason why people bring their systems down only once or twice a year for maintenance. They lose money when they do it. The balance between having the right fixes on and keeping the system up is an art. Of course it is also possible in a sysplex to put maintenance on without taking a system-wide outage. I know of a bank that has a weekly maintenance cycle but has kept their sysplex up for more than 5 years. While sysplex is a z/OS thing, similar things can be done with LPAR, VM and some relatively simple failover scripts. There remain a few hardware and microcode updates which require that a box be taken down, but such maintenance is relatively rare and usually is not urgent. Security alerts for VM and z/OS are practically nonexistent, and it is not necessary to take down VM to do maintenance on the Linux systems. With disk sharing and VM, the apparent outage for maintenance of Linux can be virtually eliminated. This is one of the key elements of TCO. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794 John Summerfield <summer@computerdatasafe.com.au> To: [EMAIL PROTECTED] Sent by: Linux on 390 Port [EMAIL PROTECTED] 02/18/2003 07:59 PM Subject: Re: URGENT! really low performance. A related question... Please respond to Linux on 390 Port I just IPL'ed the S/390 Sunday 2/9/03 it was up since we installed our new MP3000 1/9/02 that's January 9, 2002. I IPLed to install z/VM 4.3.0 (Scheduled Change) That tells me you weren't current with your maintenance;-) If you looked at the security advisories and decided they were not needed, that's fine. However, I suspect that many people who report how long *their* systems have been up have neglected their maintenance.
-- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
ECC wasn't pervasive in mainframes either. I remember hearing of a problem with a 3081 which turned out to be cosmic rays (really) which occasionally changed bits in a channel buffer cache... which had neither parity nor ECC. The first machine I know of that detected single bit errors throughout the system was the Hitachi S7 - roughly equivalent to the 3083. You can get single bit errors with no external influence at all - just from quantum mechanics. You never know where an electron really is - it's a probability thing. There is a chance that all of the electrons constituting a charge will jump to the left at once - creating a false zero or one at the output of the gate. I remember a discussion with a CPU designer. How often does this happen? Every million years or so with these transistors, more often with the smaller ones we plan in the future. Uh huh. So why is it a problem? Nineteen transistors per bit, eight bits per byte, 64MB. One single bit every couple of hours. That was about 1985. I liked the microcode store recovery system. There were two banks with identical contents, interleaved for speed. If a single bit error occurred, the machine just waited for the next half-cycle and took the value from the other bank. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
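The CPU designer's back-of-envelope above can be multiplied out. A quick sketch using only the figures given in the anecdote (one upset per transistor per million years, 19 transistors per bit, 8 bits per byte, 64 MB); it lands at roughly one flip per hour, the same order of magnitude as the quoted "every couple of hours":

```python
# Multiplying out the 1985 anecdote's numbers. All inputs are taken
# from the post itself; nothing here is measured data.

TRANSISTORS_PER_BIT = 19
BITS = 64 * 2**20 * 8                       # 64 MB expressed in bits
UPSET_RATE = 1 / 1_000_000                  # events per transistor per year

transistors = BITS * TRANSISTORS_PER_BIT    # ~1.0e10 transistors
events_per_year = transistors * UPSET_RATE  # ~10,200 events per year
hours_per_event = 365.25 * 24 / events_per_year

print(f"{transistors:.2e} transistors")
print(f"{events_per_year:.0f} upsets/year")
print(f"one upset every {hours_per_event:.1f} hours")
```

Which is exactly why a memory that size needed ECC, or tricks like the dual-bank microcode store described next.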
Re: URGENT! really low performance. A related question...
John Summerfield wrote I presume, from what you say, that Java isn't all that wonderful on zSeries? Improved CPU performance may make it so. One cannot make such blanket statements. Java is a language, not a workload. Yes, it does have characteristics that cause it to have long path lengths. However, it also has characteristics that trash caches, particularly if the programmer takes OO programming seriously. Small caches get trashed faster than large caches, particularly when they are in fast engines. The balance of path length and cache misses is entirely dependent upon the application in any language. Java just happens to be less efficient on all fronts than earlier languages, but then Fortran is less efficient than assembler. I would argue that the slide in code efficiency is balanced by the increase in processor speed over time for all machines. Relative capacity is more related to how the programmer writes the application and how much compute vs. data is involved. Long ago we used to call the ratio of execution to bandwidth the E/B ratio. This ratio still applies: when E/B is large the other machines will look better than z. When it is small the z shines. This is true regardless of language. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794
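The E/B rule of thumb above can be sketched in a few lines. This is purely illustrative: the thread gives only the qualitative rule (high execution-per-byte favors fast engines, low favors machines built for data movement), so the workload numbers and the threshold below are made up:

```python
# Toy illustration of the E/B (execution-to-bandwidth) idea.
# The workloads and the 100 instr/byte threshold are hypothetical;
# the post supplies only the qualitative rule, not numbers.

def eb_ratio(instructions: float, bytes_moved: float) -> float:
    """Instructions executed per byte of data moved."""
    return instructions / bytes_moved

workloads = {
    "numeric kernel":  (1e12, 1e9),   # hypothetical: 1000 instr/byte
    "transaction mix": (1e11, 1e11),  # hypothetical: 1 instr/byte
}

for name, (instr, data) in workloads.items():
    ratio = eb_ratio(instr, data)
    leaning = "fast-engine box" if ratio > 100 else "high-bandwidth box (z)"
    print(f"{name}: E/B = {ratio:g} -> favors {leaning}")
```

The point carries over directly to the Java discussion: the language matters less than where a given application sits on this ratio.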
Re: URGENT! really low performance. A related question...
I was talking to a co-worker (A real AMD fan) about the explanations given about z chip vs. Intel (highly informative btw) and he mentioned an AMD - IBM relationship that has caused a lot of speculation: http://www.theregister.co.uk/content/3/28784.html http://www.theinquirer.net/?article=7802 Since the zSeries VM/Linux TCO model has proven popular among the business world, I wonder if IBM is conspiring to somehow increase chip speed without losing any caching benefits to create a model that will help to convert the technical world. Matt Lashley Idaho State Controller's Office
Re: URGENT! really low performance. A related question...
On Wed, 19 Feb 2003 17:33:38 +0100, Phil Payne [EMAIL PROTECTED] wrote: ECC wasn't pervasive in mainframes either. I remember hearing of a problem with a 3081 which turned out to be cosmic rays (really) which occasionally changed bits in a channel buffer cache... which had neither parity nor ECC. The first machine I know of that detected single bit errors throughout the system was the Hitachi S7 - roughly equivalent to the 3083. You can get single bit errors with no external influence at all - just from quantum mechanics. And Lord protect you if the packaging accidentally contained materials which generated gamma rays. Another tale of woe from the IBM 1980s, which shut down foundry production for several months.. You never know where an electron really is - it's a probability thing. There is a chance that all of the electrons constituting a charge will jump to the left at once - creating a false zero or one at the output of the gate. I remember a discussion with a CPU designer. How often does this happen? Every million years or so with these transistors, more often with the smaller ones we plan in the future. Uh huh. So why is it a problem? Nineteen transistors per bit, eight bits per byte, 64MB. One single bit every couple of hours. That was about 1985. I liked the microcode store recovery system. There were two banks with identical contents, interleaved for speed. If a single bit error occurred, the machine just waited for the next half-cycle and took the value from the other bank.
Re: URGENT! really low performance. A related question...
I was talking to a co-worker (A real AMD fan) about the explanations given about z chip vs. Intel (highly informative btw) and he mentioned an AMD - IBM relationship that has caused a lot of speculation: http://www.theregister.co.uk/content/3/28784.html http://www.theinquirer.net/?article=7802 A little bird whispered in my ear, Cyrix. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
One cannot make such blanket statements. Java is a language, not a workload. Yes, it does have characteristics that cause it to have long path lengths. However, it also has characteristics that trash caches, particularly if the programmer takes OO programming seriously. Very true, although observation indicates that there are a lot of really lazy programmers writing in Java, or ones that are simply ignorant of the effects of certain programming practices. There are also a lot of programmers that use the absolutely horrific crap that comes out of most of the integrated development environments these days without ever looking at the impact of the code on the environment. Forced use of 'lint' had its moments -- at least it complained about the egregiously stupid stuff. Java just happens to be less efficient on all fronts than earlier languages, but then Fortran is less efficient than assembler. Interesting side note: Fortran is around 50 years old (+/- a few). It's gotten more intensive study by the compiler optimization wonks than any other language. Talk about geriatric research! 8-) -- db
Re: URGENT! really low performance. A related question...
I understand the benchmark results, but does that mean that a current PC could support the same workload? At John Hancock in the early 1970s a 168 supported a fairly hefty batch workload and an online inquiry system for 400+ file clerks. If a current PC can't support that workload, what is the difference? Maybe benchmarks don't mean that much... I can run MVS 3.8 considerably faster than the 168s could. We had a 168MP to implement Medibank in the mid 70s. Getting the other software, setting it up, and getting the workload's another matter. Has anyone seen IMS DB/DC from that era? -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
Sent by: Linux on 390 Port [EMAIL PROTECTED] 02/16/2003 08:32 PM Subject: Re: URGENT! really low performance. A related question... Please respond to Linux on 390 Port But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed? Perhaps someone with some hardware knowledge could explain it? Why can't the clock be cranked up to be the same speed as the latest Pentium? Most of us mainframe guys understand its inherent advantages, but as someone has already commented, it often just doesn't wash with management if a cheap Pentium outperforms a million-dollar mainframe. Regards, Mark Darvodelsky Data Centre - Mainframe Facilities Royal SunAlliance Australia Phone: +61-2-99789081 Email: [EMAIL PROTECTED]
Re: URGENT! really low performance. A related question...
On Tue, 18 Feb 2003 09:43:43 -0800, Fargusson.Alan [EMAIL PROTECTED] wrote: I am not sure about the P4, but earlier chips did not pass the ECC bits through the processor bus, so you could not detect data errors between the processor and memory. This prevents one from getting mainframe reliability with an Intel processor. ECC wasn't pervasive in mainframes either. I remember hearing of a problem with a 3081 which turned out to be cosmic rays (really) which occasionally changed bits in a channel buffer cache... which had neither parity nor ECC. I worked on an Amdahl machine once (a customer machine that got cooked) and the last problem was an LRA that gave bad results when the index register was used as the source. That path through the chips had shorted (because of heat) and the result was always zero. No parity/ECC there either. john alvord
Re: URGENT! really low performance. A related question...
On Mon, 2003-02-17 at 00:08, John Summerfield wrote: Convert your favorite CICS app to the Windows world, connect 25000 concurrent user sessions and watch the clock - then come back and tell us how long the Intel box(es) stayed alive under that realistic load. It boils down to this: at the end of the day the mainframe is still running when the Intel units have had to be rebooted multiple times. This goes without stating the number of Intel machines it would take. Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. They can be built to be very reliable indeed, and even the cheapest PC clones today are much more reliable than mainframes of years gone by. The mention of Windows in this reply was only used as a fair example, since this is still the predominant OS installed on Intel gear; this mention was not offered as a comparison between OSes - no confusion here, sorry if I confused you. My point is you should not confuse the reliability of the software with the reliability of the hardware. PC crashes are rarely caused by hardware. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
I just IPL'ed the S/390 Sunday 2/9/03. It was up since we installed our new MP3000 1/9/02 - that's January 9, 2002. I IPLed to install z/VM 4.3.0 (Scheduled Change) -Original Message- From: Ken Dreger [SMTP:[EMAIL PROTECTED]] Sent: Friday, February 14, 2003 4:45 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... Robert, you are buying into the thought that the PC is man's best friend in the business world. Let me ask you: how many times during the day do your PC users re-boot? Then tell us how many times you IPLed your S/390 system (that crashed and burned, not because of scheduled changes) in the last year??? And then tell us how many users your S/390 supports daily, without a single complaint. No blue screens of death. Sorry, the PC folks just don't get it, and until they experience in person a live, well-run mainframe, they won't get it; in their minds the PC runs circles around the mainframe. And in some cases it does!! But for the 99.99 other % of the work the mainframe is the Energizer Bunny!! Just had to get that off my chest, since it is Friday. Ken Dreger At 03:32 PM 2/14/2003 -0600, you wrote: When IBM first approached us about Linux/390 and an IFL, one of the first applications mentioned was print serving. Should be a fairly I/O bound task with lots of free time, right? Well, we found out that on our print servers, serving our 15,000 printers, there's very little idle time to be had, making print serving a completely compute-bound task. So the comparison between the current print servers and Linux/390 was a disaster, and the Unix people here never went any further. The whole trial died on the vine, at least for them, right at the first print server test. In any case, my point is, why do the mainframe CPUs *have* to be so slow? Why can't they be beefed up to the point that they're at least ballpark competitive, so that things like our trial don't happen?
Why can't they be beefed up so that instead of having to buy a five-way processor to do our work, we could get a two- or three-way, and spend less cash? If the separation of CPU and I/O computing is so great, then wouldn't it just be greater if the CPU portion could keep up with a PC? Or even see the PC's tail at the end of the race? Is separation of CPU and I/O processing really that important, when the PC toys can do both computing and I/O in their single CPU, faster than we can on our separated computing and I/O CPUs? I'm having a really hard time selling the concept to people here. You say that the PC spends 90% of its CPU time on I/O tasks. If that's really true, then we're really in trouble, because it spends only 10% of its CPU power on the task at hand, and still has double the throughput of a single-IFL mainframe when both are dedicated to serving printers. And that is the statistic that we're trying to fight against here. I know the whole I/O-is-separated story; but I'm just tired of being laughed at by the Intel-minded people in the Unix and NT world here. Robert P. Nix  internet: [EMAIL PROTECTED] Mayo Clinic  phone: 507-284-0844 RO-CE-8-857  page: 507-270-1182 200 First St. SW Rochester, MN 55905 Codito, Ergo Sum In theory, theory and practice are the same, but in practice, theory and practice are different. -Original Message- From: Wolfe, Gordon W [SMTP:[EMAIL PROTECTED]] Sent: Friday, February 14, 2003 11:16 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... There's also the fact that your cheapo-cheapo PC has one processor and has to do all the I/O for itself. The PC's processor spends 90% of its time handling I/O, formatting data for some port or the screen, running a driver program, polling and waiting for a response from some peripheral, and so on.
Mainframes hand the I/O off to the I/O subsystem processor, which hands it off to the channel processors (Last I heard, an ESCON channel used the same processor chip as the Macintosh, but that's been a while) which hands it off to the controller for the device. You've got a lot of processors working for you, and everything's cached along the way so you may not even be doing any real I/O half the time. The point is, the central processor has very little to do with any I/O processing. Someone once told me that my 9672-R36 with three processors at 117 mips each should, with all the I/O processors, actually be rated at around 30,000 mips. But that 30,000 is for I/O only, the other 351 mips are for computing only. Use the right tool for the job at hand. Don't try to use a pair of pliers for a wrench. They say there are three signs of stress in your life. You eat too
Re: URGENT! really low performance. A related question...
On Mon, 2003-02-17 at 22:47, John Summerfield wrote: My point is you should not confuse the reliability of the software with the reliability of the hardware. PC crashes are rarely caused by hardware.

With my OS vendor hat on I would disagree. Significant numbers of problems reported to Red Hat are caused by:
- Bad RAM (people don't run RAM testers on PCs before shipping, it seems)
- Cooling problems
- Running large boxes on small PSUs
- Faulty disks
- Inability of the BIOS vendor to read specifications

The top three all come down to cutting corners. The disk stuff is a nightmare. IDE reliability has become so bad that a lot of people routinely RAID-1 IDE disks in pairs. Linux has not just been hit by bugs; we've actually found hardware bugs that the vendor then took six months to acknowledge. Similarly, 'crashme' found processor bugs vendors didn't know about. Alan
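Alan's point about "RAM testers" is easy to make concrete. Below is a toy sketch (not any particular vendor's diagnostic) of a walking-ones memory test: write a single 1 bit at every position of every word and check it reads back. The stuck-bit "RAM" is simulated purely to show the test catching a fault.

```python
# Toy walking-ones RAM test.  `write`/`read` are stand-ins for real
# memory access; the flawed RAM below is simulated for illustration.
def walking_ones_test(write, read, n_words, width=32):
    """Walk a single 1 bit across every bit of every word; return the
    (address, bit) pairs that failed to read back."""
    failures = []
    for addr in range(n_words):
        for bit in range(width):
            pattern = 1 << bit
            write(addr, pattern)
            if read(addr) != pattern:
                failures.append((addr, bit))
    return failures

# Simulated RAM with bit 7 of word 3 stuck at zero.
ram = {}
STUCK_ADDR, STUCK_BIT = 3, 7

def flawed_write(addr, value):
    if addr == STUCK_ADDR:
        value &= ~(1 << STUCK_BIT)   # the stuck bit never stores a 1
    ram[addr] = value

bad = walking_ones_test(flawed_write, ram.get, n_words=8)
print(bad)  # the stuck bit shows up as [(3, 7)]
```

Real testers (memtest86 and friends) add moving inversions, address-in-address patterns and long soak times, but the principle is the same: a machine that "works fine" under light load can hide exactly this kind of fault.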
Re: URGENT! really low performance. A related question...
On Tue, 2003-02-18 at 00:36, Tzafrir Cohen wrote: Replace your faulty hardware. It's cheap. Or spend a bit more, and get a case without those cooling problems. For desktop PCs — especially in the Linux world, where X and the like mean the CPU crunching is done on another box — I'm meeting more and more people who are solving the cooling problem by using ultra-cheap fanless VIA PCs. A desktop PC costs $200 in bulk. That's at the point where it's actually questionable use of staff time to even try to repair one. Just do a deal with a recycler or charity to rip out the disk and resell/recycle/reuse the rest. The same is happening with a lot of this technology. DVD region codes used to be a real pain; now everyone just spends $20 on a second-hand slow DVD drive for the other regions they care about. Alan
Re: URGENT! really low performance. A related question...
I think that we, and IBM, have taken to resting on our laurels, and we all refuse to notice that these cheap, unreliable toys are catching up to the curve. Most of our excuses work today still, but in another year or two, I'm not so sure. And I'm finding it hard right now to stand in front of a group and tell them that they're better off serving web pages on a million dollar server, when those same pages can be served by a $299 machine. It takes a whole lot of virtual Linux images to reach the TCO of a $299 machine. If you are allowing $299 to be the discussion point, then, yes, your laurels have been smashed flat, indeed! ;-) $299 is NOT the total cost of ownership (TCO) of the machine. Utilities, people, network infrastructure, real estate, etc. are all part of TCO, too. (Watch those people costs, btw.) Focusing on the technology will lead you down the proverbial garden path. Focus on the *business*. When you look at total I/T spending as part of your business, assuming you know where the money goes (big assumption!), then it becomes more obvious when mainframes should at least be considered. The technology is just a way to affect the TCO. But as long as the conversation is limited to *acquisition price* instead of *cost of ownership*, then no meaningful discussion of the role of mainframes can be had. Alan Altmark Sr. Software Engineer IBM z/VM Development
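Alan's acquisition-price-versus-TCO distinction is simple arithmetic. A sketch with invented placeholder figures (the thread supplies no real numbers) shows how a pile of $299 boxes can out-cost one shared server once recurring costs are counted:

```python
# Back-of-envelope TCO sketch.  Every figure here is a made-up
# placeholder; the point is only that recurring costs dwarf the sticker
# price of a cheap box.
def tco(acquisition, annual_cost, years, n_boxes=1):
    """Purchase price plus recurring costs (staff, power, floor space,
    network) over the service life, for n_boxes machines."""
    return n_boxes * (acquisition + annual_cost * years)

# Fifty $299 boxes, each hypothetically needing $2,000/year of care,
# vs. one shared server at $250,000 with $20,000/year of care:
farm = tco(299, 2000, years=3, n_boxes=50)   # 314,950
big_box = tco(250_000, 20_000, years=3)      # 310,000
```

With these (hypothetical) numbers the server farm is already the more expensive option over three years, even though each box cost 0.1% of the big machine — which is exactly the "watch those people costs" warning.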
Re: URGENT! really low performance. A related question...
Basically there are a bunch of things that make up TCO. In a mainframe solution the hardware makes up more of the costs; people, network infrastructure, etc. make up less. In a PC server solution it is reversed. TCO is a very hard thing to define. I think the mainframe has the deck stacked against it from the standpoint of a lot of people only looking at the price of the hardware and thinking they can get by with a PC server. I think you really have to do your homework to convince people the mainframe is the better solution. -Original Message- From: Alan Altmark [mailto:[EMAIL PROTECTED]] Sent: Tuesday, February 18, 2003 10:48 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question...
Re: URGENT! really low performance. A related question...
I am not sure about the P4, but earlier chips did not pass the ECC bits through the processor bus, so you could not detect data errors between the processor and memory. This prevents one from getting mainframe reliability with an Intel processor. -Original Message- From: Scott Courtney [mailto:[EMAIL PROTECTED]] Sent: Monday, February 17, 2003 1:12 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... On Monday 17 February 2003 01:08 pm, Adam Thornton wrote: On Mon, Feb 17, 2003 at 04:08:48PM +0800, John Summerfield wrote: Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. They can be built to be very reliable indeed, and even the cheapest PC clones today are much more reliable than mainframes of years gone by. Yeah, but gone *way* by. Reliability in consumer-grade machines is pretty dreadful. Even high-end (e.g., Compaq and Dell server-grade machines) Intel boxen don't seem to have the kind of quality in connectors and cables that mainframes do. And for some reason the Intel world seems to like to put cheap cooling fans with poor bearings into even (for this arena) expensive machines. I've also seen less attention paid, in the Intel world, to issues like circuit board mounting rigidity, which can allow slight flexing of the board during initial assembly or component replacement. Chips rarely wear out, but board failures still happen. Why? Mechanical and thermal problems with the boards and the chassis environment. Microcracks in solder connections on boards. Vibration-induced failures of IC bondout pad welds. Static. And so on. I think it would be entirely possible to build an Intel machine that is as reliable as a zSeries. It would end up costing just about the same, because most of the cost isn't in the CPU chip itself. Sometimes you do, in fact, get what you pay for. Scott -- - Scott D. Courtney, Senior Engineer Sine Nomine Associates [EMAIL PROTECTED] http://www.sinenomine.net/
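For readers unfamiliar with what those "ECC bits" buy you: below is a minimal sketch of a Hamming(7,4) single-error-correcting code, the simplest relative of the SECDED codes used on server memory. This is illustrative only — it is not the actual code used in any IBM or Intel part.

```python
# Minimal Hamming(7,4) single-error-correcting code, a toy relative of
# the SECDED ECC on server memory buses.  Illustration only.
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword; parity bits sit at
    1-based positions 1, 2 and 4."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Recompute the parities; a nonzero syndrome is the 1-based
    position of a single flipped bit, repaired before extracting data."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]], syndrome

word = [1, 0, 1, 1]
cw = hamming74_encode(word)
cw[4] ^= 1                          # single bit flipped in transit
decoded, syndrome = hamming74_correct(cw)
print(decoded == word, syndrome)    # True 5: error located and fixed
```

A bus that drops the check bits, as described above, can still flip a bit between CPU and memory with no syndrome ever computed — the error is simply invisible, which is the reliability gap being pointed out.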
Re: URGENT! really low performance. A related question...
Intel is in a unique position to do research into speeding up processors, since they have a large revenue stream from a single product. Also they got the engineering team from DEC, which seems to have a talent for creating fast processors like the Alpha. It isn't just the Mainframe which has slower clocks. The SPARC chips, HP-PA, Power, and even Itanium chips all run about the same speed, and seem to always be slower than the Pentium. -Original Message- From: Mark Darvodelsky [mailto:[EMAIL PROTECTED]] Sent: Sunday, February 16, 2003 5:33 PM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed? Perhaps someone with some hardware knowledge could explain it? Why can't the clock be cranked up to be the same speed as the latest Pentium? Most of us mainframe guys understand its inherent advantages, but as someone has already commented, it often just doesn't wash with management if a cheap Pentium outperforms a million-dollar mainframe. Regards. Mark Darvodelsky Data Centre - Mainframe Facilities Royal SunAlliance Australia Phone: +61-2-99789081 Email: [EMAIL PROTECTED] CAUTION - This message is intended for the addressee named above It may contain privileged or confidential information. If you are not the intended recipient of this message you must not use, copy, distribute or disclose it to anyone other than the addressee. If you have received this message in error please return the message to the sender by replying to it and then delete the message from your computer. Internet emails are not necessarily secure. Royal SunAlliance does not accept responsibility for changes made to this message after it was sent
Re: URGENT! really low performance. A related question...
On Tue, 18 Feb 2003 00:19:12 -0500, Adam Thornton [EMAIL PROTECTED] wrote: On Tue, Feb 18, 2003 at 02:36:21AM +0200, Tzafrir Cohen wrote: Replace your faulty hardware. It's cheap. Or spend a bit more, and get a case without those cooling problems. Yes, of course it's cheap. 'S'why I bought it. And I'll buy a new machine eventually, at a similarly low price point, because I'm cheap. Point is, *most* PC hardware is cheap. Because it, you know, costs less that way. Adam I puzzled about all this for a long time. One example was a 4341 versus a VAX/780. The performance sheets I looked at said they were about equal in performance. But the 4341 was much more capable in real work situations, given equal workload. Eventually I noticed that the 4341 could do about 10 times the I/O (megabytes per second) compared to the VAX machine. The VAX was limited to about 100K bytes per second and the 4341 to about 1 megabyte per second. So I propose that when analysing different architectures we go beyond simple CPU benchmarks and also calculate 1) memory bandwidth and 2) I/O bandwidth. That hot PC might be great for CPU but might be left in the dust in memory bandwidth and I/O bandwidth. That type of analysis would explain why a 168 was so capable even though the CPU benchmark (compared to a 2GHz Intel) would predict otherwise. Programming efficiency probably has a measurable effect too... C++ versus hand-crafted assembler can easily add a 5-10 times efficiency differential in my experience. Hey - IBM - I figured that out when I was working for IBM Research in Yorktown in 1983-88 - so if someone wants to grab the idea and use it in marketing... it's yours. I imagine a big heavy tank with 2000 horsepower duking it out with a compact... half-plastic... auto. grin john
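John's proposal — rate machines by CPU, memory bandwidth, and I/O bandwidth together — amounts to a min-of-three bottleneck model. A sketch, with all numbers invented for illustration (the thread's VAX/4341 figures are an inspiration, not inputs):

```python
# Min-of-three bottleneck model: sustained job rate is capped by the
# tightest of three resource ceilings.  All figures are hypothetical.
def max_job_rate(cpu_jobs_per_s, mem_bw_mb_s, mem_mb_per_job,
                 io_bw_mb_s, io_mb_per_job):
    """Jobs/second the machine can sustain for this workload."""
    return min(cpu_jobs_per_s,
               mem_bw_mb_s / mem_mb_per_job,
               io_bw_mb_s / io_mb_per_job)

# A "hot" CPU starved for I/O loses to a modest CPU with ten times the
# I/O bandwidth, echoing the 4341-vs-VAX anecdote:
hot_cpu_thin_io = max_job_rate(100, 500, 0.2, 0.1, 0.05)   # ~2 jobs/s
modest_cpu_fat_io = max_job_rate(20, 500, 0.2, 1.0, 0.05)  # ~20 jobs/s
```

On an I/O-heavy workload the CPU benchmark never enters the result at all, which is why two machines with "about equal" performance sheets can differ tenfold in real work.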
Re: URGENT! really low performance. A related question...
Mark Darvodelsky wrote: But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed? The answer to your question has to do with how chip real estate is used. In a zSeries microprocessor the primary use of area is for large L1 caches and error detection/recovery hardware. Basically, increases in cache size result in decreases in clock rate. This is because there is more load on the critical signals. Secondly, to date the zSeries microprocessor pipeline does not do superscalar processing. That is, it finishes 1 instruction per cycle at best. This is because it takes considerably more work and hardware to do mainframe-style error recovery functions when more than 1 instruction can complete in a cycle. While superscalar execution does not help with clock speed, it does help with CPU-intense measurements like SPECint. However, since the cache is larger, the zSeries will wait for memory less often than other machines. Metrics like SPECint and MHz ignore cache misses. So the question becomes: how much are the caches missing? The more they miss, the better the zSeries looks. This is very workload dependent. One driver of cache misses is context switches; another is I/O. If you attempt to make an Intel server very busy, the cache miss rate will climb, causing throughput to saturate, unless the work is very CPU intense and the cache working set per transaction or per user is very small. The reason Robert Nix's print server debacle occurred is that IBM made the mistake of treating Samba file/print as a single type of workload. We didn't understand at the time that a print server can behave like a network-to-network protocol server. These servers actually move very little data through the CPU. Such a machine has very little context switching, and the I/O is network to network, which will actually drive very little data through the caches.
The combination makes the workload CPU intense and, if busy, a bad candidate for Linux/z. By contrast, a Samba file server can be doing enough disk-to-network I/O, which pushes more data through the caches changing blocks to packets. This can cause distributed servers to get I/O- and cache-bound. Samba can be either CPU or I/O intense, and the single context makes the CPU-intense workloads unattractive for z, particularly if the machines are busy. So the answer to your question is that we could build a zSeries microprocessor which is as fast as any other processor, but to do so would cause us to lose the fundamental strengths in context switching, data caching and I/O. There is always a trade-off between speed and capacity. zSeries favors capacity; Intel favors speed. How much L1 cache should be given up to increase the clock rate? How much RAS and recovery function should be given up to improve SPECint? We have seen this situation improve over time, and IBM will continue to improve its microprocessor design, but zSeries cannot simply abandon strength in large-working-set workloads to crank up the clock speed and/or instruction rate for workloads with small working sets. This is particularly true when the virtualization and workload management which drive consolidation and mixed workloads depend on the very hardware capabilities that would have to be given up. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794
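Joe's cache-versus-clock trade-off can be put in rough numbers with the standard average-time-per-instruction model. All figures below are invented for illustration — they are not zSeries or Pentium measurements.

```python
# Average time per instruction = pipeline time + expected cache-miss
# stall.  Hypothetical figures only, to show the shape of the trade-off.
def avg_ns_per_instruction(cycle_ns, base_cpi, miss_rate, miss_penalty_ns):
    """cycle_ns * CPI for the core, plus misses-per-instruction times
    the memory penalty."""
    return cycle_ns * base_cpi + miss_rate * miss_penalty_ns

# A 2 GHz chip (0.5 ns cycle) with a small cache missing 5% of the time
# vs. a 667 MHz chip (1.5 ns cycle) whose big cache misses only 1%,
# both paying ~100 ns per miss to main memory:
fast_clock_small_cache = avg_ns_per_instruction(0.5, 1.0, 0.05, 100)  # ~5.5 ns
slow_clock_big_cache = avg_ns_per_instruction(1.5, 1.0, 0.01, 100)    # ~2.5 ns
```

On this (made-up) cache-hostile workload the "slow" machine finishes instructions more than twice as fast, which is exactly the large-working-set effect described above; on a cache-friendly workload with a tiny working set the inequality reverses and raw clock wins.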
Re: URGENT! really low performance. A related question...
Ryan Ware wrote: Basically there are a bunch of things that make up TCO. In a mainframe solution the hardware makes up more of the costs; people, network infrastructure, etc. make up less. In a PC server solution it is reversed. TCO is a very hard thing to define. I think the mainframe has the deck stacked against it from the standpoint of a lot of people only looking at the price of the hardware and thinking they can get by with a PC server. I think you really have to do your homework to convince people the mainframe is the better solution. I have more of a problem justifying migrating an existing group 38 system to an entry-level group 38 z800, or to a group 80 entry-level system on a z900, when my management compares the software licensing costs from various vendors we use to process what is essentially a static workload. Every time a new mainframe hardware platform is announced, the entry-level group is higher in performance and associated software costs than the previous generation. How many small to medium mainframe shops did IBM lose because of the zSeries software pricing differences? What about third-party vendors? How many of them have lost clients because of tiered pricing? Sure, z/VM is lower in cost on zSeries and Linux is virtually free, but what about those shops running CA or other vendor products looking at a two-or-more-tier jump in pricing to process the same workload on a new machine? Why not say the entry level is the lowest processor model and make it a group 10 no matter what the MIPS rating, and leave the software pricing alone? How many shops would keep or buy new mainframes if you only had to pay group 10 pricing for what is now a group 38 box? How many shops would look for new workloads to migrate to the mainframe to utilize the spare horsepower? The idea is to grow the market, not stunt it with short-term profits. An investment in any mainframe is for long-term processing requirements.
Those mainframe clients want to stay around and not have the data center viewed as a purveyor of the platform du jour or fad pushers. My $0.02USD... Bill Stermer ACS - City of Anaheim
Re: URGENT! really low performance. A related question...
This is one reason we are moving away from CA products. -Original Message- From: Bill Stermer [mailto:[EMAIL PROTECTED]] Sent: Tuesday, February 18, 2003 11:39 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question...
Re: URGENT! really low performance. A related question...
Bill, You make some good points. Phil Payne regularly posted on this issue on IBM-Main. If you look at the cost of hardware and how much it has gone down, and compare that to the price of the software, something is rotten. IBM isn't going to lose big insurance companies, big banks, and other large corporations; however, not many new customers, when looking at the price of z/OS, are going to buy mainframes and put z/OS on them. They did a good thing with z/OS lite - if you don't run Cobol or CICS and maybe a few other things, it costs about 10% of the regular z/OS. Why can't IBM price their software cheaper as the hardware costs come down? We had a 3090-600S until 4 years ago. When that machine first came out, the list price was $11,000,000. A little over a year ago, we got an MP3000-H50, whose purchase price is less than $200,000. Both machines are very close in performance. That's a 50-to-1 reduction in hardware cost. If IBM priced their software the same way, it would eventually become almost as popular as Windows. (Well, maybe not quite). Eric Bielefeld Sr. MVS Systems Programmer PH Mining Equipment Milwaukee, WI 414-671-7849 [EMAIL PROTECTED] [EMAIL PROTECTED] 02/18/03 01:38PM
Re: URGENT! really low performance. A related question...
Alan, Good for you! That is our goal, and Linux/Open Source products appear to have the potential to help us accomplish this task, not only for CA but for MS, Oracle, etc. as well! I have watched our software budget increase dramatically in order to process the same workload year after year using the same products from the same vendors. It's like trying to stop a bleeding artery with one of those little round Band-Aids in some cases. Bill Stermer ACS - City of Anaheim -Original Message- From: Fargusson.Alan [mailto:[EMAIL PROTECTED]] Sent: February 18, 2003 11:55 AM To: [EMAIL PROTECTED] Subject: Re: URGENT! really low performance. A related question... This is one reason we are moving away from CA products.
Re: URGENT! really low performance. A related question...
I just IPL'ed the S/390 Sunday 2/9/03 it was up since we installed our new MP3000 1/9/02 that's January 9, 2002. I IPLed to install Z/VM 4.3.0 (Scheduled Change) That tells me you weren't current with your maintenance;-) If you looked at the security advisories and decided they were not needed, that's fine. However, I suspect that many people who report how long *their* systems have been up have neglected their maintenance. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
On Tue, Feb 18, 2003 at 02:36:21AM +0200, Tzafrir Cohen wrote: Replace your faulty hardware. It's cheap. Or spend a bit more, and get a case without those cooling problems. Yes, of course it's cheap. 'S'why I bought it. And I'll buy a new machine eventually, at a similarly low price point, because I'm cheap. Point is, *most* PC hardware is cheap. Because it, you know, costs less that way. In my experience, most cheap PC hardware is somewhat better than that. My Athlon - I bought a collection of bits, pored over the assembly instructions and built it myself. My home server is running a Super Socket 7 mobo with a K6-2-500, and I built that too. I've had a couple of IBM drives fail; those were bad models. -- Cheers John Summerfield
Re: URGENT! really low performance. A related question...
On Sun, 2003-02-16 at 17:32, Mark Darvodelsky wrote: But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed? Perhaps someone with some hardware knowledge could explain it? Why can't the clock be cranked up to be the same speed as the latest Pentium? This has everything to do with heat dissipation and the media capacity of the processors themselves. Does IBM have the capability to make processors that will run faster? Yes. Will they? Not without due overcompensation. Look at the history: the AT was running at 6 MHz while every other AT-clone manufacturer was running at 8 and 12 - the same went for all of the IBM x86 boxes made. It's easier to go faster when you have newer technology. That said, the original PC and PC/XT were pretty feeble, using the 8088 when the 8086 was available first, and faster. Most of us mainframe guys understand its inherent advantages, but as someone has already commented, it often just doesn't wash with management if a cheap Pentium outperforms a million-dollar mainframe. Convert your favorite CICS app to the Windows world, connect 25,000 concurrent user sessions and watch the clock - then come back and tell us how long the Intel box(es) stayed alive under that realistic load. It boils down to this: at the end of the day the mainframe is still running when the Intel units have had to be rebooted multiple times. This goes without stating that the number of Intel machines it would take to replace that big chunk of iron would cost just as much in hardware and require at least 4 times the support layer to keep the monster alive. Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. They can be built to be very reliable indeed, and even the cheapest PC clones today are much more reliable than mainframes of years gone by. -- Cheers John Summerfield
Re: URGENT! really low performance. A related question...
On Mon, 2003-02-17 at 00:08, John Summerfield wrote: Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. The mention of Windows in this reply was only used as a fair example, since this is still the predominant OS installed on Intel gear; this mention was not offered as a comparison between OSes - no confusion here, sorry if I confused you.
Re: URGENT! really low performance. A related question...
On Sun, 16 Feb 2003, John Summerfield wrote: There's also the fact that your cheapo-cheapo PC has one processor and has to do all the I/O for itself. The PC's processor spends 90% of its time handling I/O, formatting data for some port or the screen, running a driver program, polling and waiting for a response from some peripheral and so on. I don't pretend that my Athlon-based system's overall design is anything like as good as the S370/168 I used to use so many years ago, but fair go. My PC has the on-board EIDE interfaces (EIDE{0,1}) and, additionally, an add-on PCI card providing two more EIDE ports. At one time I had three drives in the box, one on each of three interfaces. I was running dd to do a disk-to-disk copy, and while it was running, I used hdparm to test the speed of the third drive. It tested at 35 Mbytes/sec, pretty close to its rated speed. My graphics card has its own processor, and if I add a SCSI card, that too offloads a decent amount of work. Devices use interrupts to signal the end of operations, and many use DMA to provide direct access to system RAM. While IBM's mainframes do all these things better (except compute), if an IA32 system uses more than about five percent of the CPU power to drive devices, the OS is broken. On Linux, we use (mostly) the same software you do. It does not need lots of CPU power to drive most I/O devices. I understand the benchmark results, but does that mean that a current PC could support the same workload? At John Hancock in the early 1970s a 168 supported a fairly hefty batch workload and an online inquiry system for 400+ file clerks. If a current PC can't support that workload, what is the difference? Maybe benchmarks don't mean that much... john alvord
Re: URGENT! really low performance. A related question...
But the users never see, or even realize, that 24,999 other people are using the same box they are. They see their one task; the thing they want to get done. And, if a single-user Intel can do that task at twice the speed of the mainframe, and the task takes any noticeable amount of time, then they'll want the PC every time, and no amount of talking or explaining will do anything to talk them out of it. Sure it's less reliable. (But they're getting better) Sure it costs more overall. (But they're getting cheaper) Sure the opsys really sucks. (But now there are alternatives) We need to face the fact that Personal Computers have branched out and are coming of age. The mainframe is going to need to keep ahead of the curve if it is to continue to command the million-dollar price tag. I think that we, and IBM, have taken to resting on our laurels, and we all refuse to notice that these cheap, unreliable toys are catching up to the curve. Most of our excuses still work today, but in another year or two, I'm not so sure. And I'm finding it hard right now to stand in front of a group and tell them that they're better off serving web pages on a million-dollar server, when those same pages can be served by a $299 machine. It takes a whole lot of virtual Linux images to reach the TCO of a $299 machine. I have to go today to explain why we need to spend $100,000 for a web application server, when Tomcat is available for free. It's getting to be a hard sell to stay with IBM.
Robert P. Nix  internet: [EMAIL PROTECTED]  Mayo Clinic  phone: 507-284-0844  RO-CE-8-857  page: 507-270-1182  200 First St. SW  Rochester, MN 55905  Codito, Ergo Sum  In theory, theory and practice are the same, but in practice, theory and practice are different.
-Original Message-
From: Steven A. Adams [SMTP:[EMAIL PROTECTED]]
Sent: Sunday, February 16, 2003 8:36 PM
To: [EMAIL PROTECTED]
Subject: Re: URGENT! really low performance. A related question...
On Sun, 2003-02-16 at 17:32, Mark Darvodelsky wrote: Convert your favorite CICS app to the Windows world, connect 25000 concurrent user sessions and watch the clock - then come back and tell us how long the Intel box(ES) stayed alive under that realistic load. It boils down to this: at the end of the day the mainframe is still running when the Intel units have had to be rebooted multiple times. This goes without stating that the number of Intel machines it would take to replace that big chunk of iron would cost just as much in hardware and require at least 4 times the support layer to keep the monster alive. TCO rules here. All of this from someone that has spent most of the last 20 years on micro and mid-range machines; interesting perspective, huh?
Re: URGENT! really low performance. A related question...
Although, I must add, Windows is improving. Our payroll app runs on Win2k and has an uptime of just over 100 days. Much better than we ever achieved when the same app ran on Windows NT. It is still short of our Unix performance, and from what I am reading, far short of mainframe reliability. Guess I've got a case of mainframe envy ;) I mainly lurk here to learn about mainframes and Linux. I was on a local LUG list, but it was too often going down the path of flames and "tastes great, less filling" type exchanges.
-Original Message-
From: Steven A. Adams [SMTP:[EMAIL PROTECTED]]
Sent: Monday, February 17, 2003 10:18 AM
To: [EMAIL PROTECTED]
Subject: Re: URGENT! really low performance. A related question...
On Mon, 2003-02-17 at 00:08, John Summerfield wrote:
> > Convert your favorite CICS app to the Windows world, connect 25000 concurrent user sessions and watch the clock - then come back and tell us how long the Intel box(ES) stayed alive under that realistic load. It boils down to this: at the end of the day the mainframe is still running when the Intel units have had to be rebooted multiple times. This goes without stating that the number of Intel machines it would take to replace that big chunk of iron would cost just as much in hardware and require at least 4 times the support layer to keep the monster alive.
> Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. They can be built to be very reliable indeed, and even the cheapest PC clones today are much more reliable than mainframes of years gone by.
The mention of Windows in this reply was only used as a fair example, since it is still the predominant OS installed on Intel gear; the mention was not offered as a comparison between OSes - no confusion here, sorry if I confused you.
Re: URGENT! really low performance. A related question...
On Mon, 2003-02-17 at 16:43, John Alvord wrote: I understand the benchmark results, but does that mean that a current PC could support the same workload? At John Hancock in the early 1970s a 168 supported a fairly hefty batch workload and an online inquiry system for 400+ file clerks. If a current PC can't support that workload, what is the difference? Maybe benchmarks don't mean that much... I deal with at least one organisation who recently hit problems getting over 100 thin-client desktop users running on one PC. Not a system limit, but a scaling issue with one resource. A modern PC can deliver a lot of CPU grunt and, with decent I/O cards, quite a bit of throughput. OTOH if you get bad RAM, it falls over. If the CPU cache begins to fail you probably get bad data. Alan
Re: URGENT! really low performance. A related question...
On Mon, Feb 17, 2003 at 04:08:48PM +0800, John Summerfield wrote: Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. They can be built to be very reliable indeed, and even the cheapest PC clones today are much more reliable than mainframes of years gone by Yeah, but gone *way* by. Reliability in consumer-grade machines is pretty dreadful. Especially when you start running them at reasonable loads 24/7, instead of in a desktop situation, where they're usually 99%+ idle, or even in a typical server situation, where average utilization is between 5 and 10 percent. Modern high-capacity drives, in particular, generate a LOT of heat; combine that with power supplies that don't come anywhere close to meeting spec, CPUs that generate somewhere near 100W of waste heat, and crappy (and often hideously underspecced) fans/cooling systems, and thermally-induced failure becomes awfully common. Add this to an inadequately-cooled environment (like most corner-cutting machine rooms), and you're looking for trouble. Adam
Re: URGENT! really low performance. A related question...
I deal with at least one organisation who recently hit problems getting over 100 thin client desktop users running on one PC. Not a system limit, but a scaling issue with one resource. A modern PC can deliver a lot of CPU grunt and with decent I/O cards quite a bit of throughput. OTOH if you get bad ram, it falls over. If the cpu cache begins to fail you probably get bad data. It's surprising how many PCI cards don't propagate parity. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
This has everything to do with heat dissipation and the media capacity of the processors themselves. Does IBM have the capability to make processors that will run faster? Yes. Will they? Not without due overcompensation. Look at the history: the AT was running at 6 MHz while every other AT clone manufacturer was running at 8 and 12 - the same went for all of the IBM x86 boxes made. It's easier to go faster when you have newer technology. That said, the original PC and PC/XT were pretty feeble, using the 8088 when the 8086 was available first, and faster. Let's be fair here -- the 8086 also required double the decoding logic and memory chips, which would have driven the cost of the PC and XT even higher than they were. Those things were *expensive* in those days (16K of DRAM (9 chips, 8+1 parity) was easily $500), and would have easily made the PC uncompetitive. Also, we didn't have the wide acceptance of the personal computer in those days -- it was a rare bird that would even consider it. IBM has made a business out of guaranteeing reliability, availability, and serviceability -- which is usually fundamentally incompatible with having the latest and greatest speeds and feeds (reliable/fast/cheap -- pick two). I buy IBM equipment for the instrumentation and ease of service, not for performance. That's always been the tradeoff -- buy HP/Compaq if you want the raw speed, but buy IBM if you want it to be manageable and reliable. I think the same applies here. -- db
Re: URGENT! really low performance. A related question...
On Monday 17 February 2003 01:58 pm, Phil Payne wrote: It's surprising how many PCI cards don't propagate parity. And parity isn't even all that good for detecting errors. Worse, what do you do if you detect a parity error on a RAM location? You have no correction code with parity -- you need Hamming or some other ECC code for that. So with the parity situation, we have a system that's intrinsically not all that good, and then we compound the problem by not fully implementing it, as Phil points out. I wonder if that would impact reliability? Scott (Now removing tongue from cheek) -- - Scott D. Courtney, Senior Engineer Sine Nomine Associates [EMAIL PROTECTED] http://www.sinenomine.net/
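[Editor's note: Scott's parity-versus-ECC point can be made concrete. Simple parity can flag an odd number of flipped bits but cannot say which bit flipped, so nothing can be corrected; a single-error-correcting code such as Hamming can both detect and repair one bad bit. The sketch below is an illustrative Hamming(7,4) encoder/corrector, not anything from the thread.]

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits with 3 check bits.
    Codeword positions (1-based): p1 p2 d1 p4 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2
    p2 = (d1 + d3 + d4) % 2
    p4 = (d2 + d3 + d4) % 2
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Locate and fix a single flipped bit; returns the corrected codeword."""
    c = list(c)
    s1 = (c[0] + c[2] + c[4] + c[6]) % 2   # re-checks positions 1,3,5,7
    s2 = (c[1] + c[2] + c[5] + c[6]) % 2   # re-checks positions 2,3,6,7
    s4 = (c[3] + c[4] + c[5] + c[6]) % 2   # re-checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4        # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1               # syndrome names the bad position
    return c

# Plain parity over the same 4 bits would only report "something is wrong";
# the Hamming syndrome pinpoints the bit, so the fault is repairable.
data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                               # simulate a one-bit RAM fault
assert hamming74_correct(word) == hamming74_encode(data)
```

This is the difference Scott is pointing at: parity memory stops at detection, while ECC memory (Hamming-style SEC or stronger) silently repairs the common single-bit case.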
Re: URGENT! really low performance. A related question...
Greetings; The first thing to do upon encountering an error is to stop. Anything beyond that is icing on the cake, a product enhancement. Halting is the bare minimum and an absolute requirement. Stopping the process/system/machine at least prevents trashing the data and allows the failed component to be located and repaired or replaced. Been there, had it not happen. That is, nothing stopped. Ouch! Extreme understatement! Good Luck! Dennis
Scott Courtney <scourtney@sinenomine.net> wrote on 02/17/2003 03:14 PM:
On Monday 17 February 2003 01:58 pm, Phil Payne wrote: It's surprising how many PCI cards don't propagate parity. And parity isn't even all that good for detecting errors. Worse, what do you do if you detect a parity error on a RAM location? You have no correction code with parity -- you need Hamming or some other ECC code for that. So with the parity situation, we have a system that's intrinsically not all that good, and then we compound the problem by not fully implementing it, as Phil points out. I wonder if that would impact reliability? Scott (Now removing tongue from cheek) -- - Scott D. Courtney, Senior Engineer Sine Nomine Associates [EMAIL PROTECTED] http://www.sinenomine.net/
Re: URGENT! really low performance. A related question...
This has everything to do with heat dissipation and the media capacity of the processors themselves. Does IBM have the capability to make processors that will run faster? Yes. Will they? Not without due overcompensation. Look at the history: the AT was running at 6 MHz while every other AT clone manufacturer was running at 8 and 12 - the same went for all of the IBM x86 boxes made. It's easier to go faster when you have newer technology. That said, the original PC and PC/XT were pretty feeble, using the 8088 when the 8086 was available first, and faster. Let's be fair here -- the 8086 also required double the decoding logic and memory chips, which would have driven the cost of the PC and XT even higher than they were. Those things were *expensive* in those days (16K of DRAM (9 chips, 8+1 parity) was easily $500), and would have easily made the PC uncompetitive. Also, we didn't have the wide acceptance of the personal computer in those days -- it was a rare bird that would even consider it. Yeah, right. I had a NEC APC with an 8086 at 4.9-something MHz before the PC arrived in Oz with its 8088 at 4.77 MHz. I had dual 960K floppies, 64K RAM. It was cheaper than the IBM PC when it did arrive, and a PC with equivalent storage would have required a fixed disk and been twice the price. Serial ports on the APC could do async and sync comms. The standard display was _far_ better than IBM's CGA. IBM has made a business out of guaranteeing reliability, availability, and serviceability -- which is usually fundamentally incompatible with having the latest and greatest speeds and feeds (reliable/fast/cheap -- pick two). I buy IBM equipment for the instrumentation and ease of service, not for performance. There were problems with the PC floppies (10% failure rate if I recall correctly), and later with the AT 20 Mbyte fixed disks which I believe were recalled.
That's always been the tradeoff -- buy HP/Compaq if you want the raw speed, but buy IBM if you want it to be manageable and reliable. I think the same applies here. Actually, the best PC I've seen, to work on, is from Dell;-). But then, I've not yet seen new kit from IBM or the others. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
Re: URGENT! really low performance. A related question...
On Tue, Feb 18, 2003 at 06:47:25AM +0800, John Summerfield wrote: My point is you should not confuse the reliability of the software with the reliability of the hardware. PC crashes are rarely caused by hardware. I beg to differ. Unless you mean Crashes of computers running Windows are rarely caused by hardware. My desktop machine *usually* crashes, when it crashes, because of the hardware. Sometimes it's a cooling problem (luckily my system senses overtemperature and shuts itself down), sometimes it's an insufficiently conditioned power supply. Before that it was the flaky NVidia video card, which I eventually replaced. Adam
Re: URGENT! really low performance. A related question...
On Mon, 2003-02-17 at 16:33, Adam Thornton wrote: I beg to differ. Unless you mean "Crashes of computers running Windows are rarely caused by hardware." My desktop machine *usually* crashes, when it crashes, because of the hardware. Sometimes it's a cooling problem (luckily my system senses overtemperature and shuts itself down), sometimes it's an insufficiently conditioned power supply. Before that it was the flaky NVidia video card, which I eventually replaced. Another point of interest (albeit an old point) is the case of the Pentium Pro. Intel billed and sold this processor as scalable to 4-way when the processor bus saturated at 2-way. When OSes, mainly Windows, could not scale above 2-way, the blame was laid firmly upon companies like Microsoft. Here's another one: remember the "Aproximatium" (catchy little phrase for the nasty math error that Intel shipped)? Recent history documents more hardware errors than we could ever account for here. Each revision of processor, and its respective implementation, has its warts; they get fixed, they get bad press, and they cause crashes at the desktop and in the data center.
Re: URGENT! really low performance. A related question...
On Mon, 17 Feb 2003, Adam Thornton wrote: On Tue, Feb 18, 2003 at 06:47:25AM +0800, John Summerfield wrote: My point is you should not confuse the reliability of the software with the reliability of the hardware. PC crashes are rarely caused by hardware. I beg to differ. Unless you mean Crashes of computers running Windows are rarely caused by hardware. My desktop machine *usually* crashes, when it crashes, because of the hardware. Sometimes it's a cooling problem (luckily my system senses overtemperature and shuts itself down), sometimes it's an insufficiently conditioned power supply. Before that it was the flaky NVidia video card, which I eventually replaced. Replace your faulty hardware. It's cheap. Or spend a bit more, and get a case without those cooling problems. -- Tzafrir Cohen mailto:[EMAIL PROTECTED] http://www.technion.ac.il/~tzafrir
Re: URGENT! really low performance. A related question...
On Monday 17 February 2003 05:38 pm, John Summerfield wrote: Let's be fair here -- the 8086 also required double the decoding logic and memory chips, which would have driven the cost of the PC and XT even higher than they were. Not double decoding logic, just an extra buffer for the other eight data bits. Addressing is still the same, essentially, and was 20 bits. Decoding logic for the 8086/88 was quirky but not overly complex. I'll grant your point on the RAM chip count, if you assume the machine would have had only one bank of eight chips. However, if the assumption is two banks of 8 bits versus one bank of 16 bits, then the chip count is the same except for one more 8-bit data bus buffer. Scott -- - Scott D. Courtney, Senior Engineer Sine Nomine Associates [EMAIL PROTECTED] http://www.sinenomine.net/
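[Editor's note: Scott's chip-count arithmetic can be sketched out. Assuming 1-bit-wide DRAM packages and one parity chip per 8-bit byte lane (the PC's actual 8+1 scheme), the back-of-envelope totals he describes look like this; the function name and array shapes are hypothetical illustrations, not anything from the thread.]

```python
# Back-of-envelope DRAM package counts for an 8-bit (8088-style) versus
# a 16-bit (8086-style) memory array, with parity per 8-bit byte lane.
CHIPS_PER_LANE = 8 + 1                 # 8 data chips + 1 parity chip

def chip_count(banks, byte_lanes_per_bank):
    """Total DRAM packages for a memory array of the given shape."""
    return banks * byte_lanes_per_bank * CHIPS_PER_LANE

minimal_8088  = chip_count(banks=1, byte_lanes_per_bank=1)  # one 8-bit bank
one_wide_8086 = chip_count(banks=1, byte_lanes_per_bank=2)  # one 16-bit bank
two_narrow    = chip_count(banks=2, byte_lanes_per_bank=1)  # two 8-bit banks

assert one_wide_8086 == two_narrow == 18  # Scott's point: same count at equal capacity
assert minimal_8088 == 9                  # but the 8088's minimum configuration is half
```

At equal capacity the 16-bit bus costs no extra RAM chips, exactly as Scott argues; the 8088's advantage was that its cheapest entry configuration needed only half as many.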
Re: URGENT! really low performance. A related question...
On Monday 17 February 2003 01:08 pm, Adam Thornton wrote: On Mon, Feb 17, 2003 at 04:08:48PM +0800, John Summerfield wrote: Linux is Linux. Don't confuse Windows' reliability with the reliability of IA32-based boxes. They can be built to be very reliable indeed, and even the cheapest PC clones today are much more reliable than mainframes of years gone by Yeah, but gone *way* by. Reliability in consumer-grade machines is pretty dreadful. Even high-end (e.g., Compaq and Dell server-grade machines) Intel boxen don't seem to have the kind of quality in connectors and cables that mainframes do. And for some reason the Intel world seems to like to put cheap cooling fans with poor bearings into even (for this arena) expensive machines. I've also seen less attention paid, in the Intel world, to issues like circuit board mounting rigidity, which can allow slight flexing of the board during initial assembly or component replacement. Chips rarely wear out, but board failures still happen. Why? Mechanical and thermal problems with the boards and the chassis environment. Microcracks in solder connections on boards. Vibration- induced failures of IC bondout pad welds. Static. And so on. I think it would be entirely possible to build an Intel machine that is as reliable as a zSeries. It would end up costing just about the same, because most of the cost isn't in the CPU chip itself. Sometimes you do, in fact, get what you pay for. Scott -- - Scott D. Courtney, Senior Engineer Sine Nomine Associates [EMAIL PROTECTED] http://www.sinenomine.net/
Re: URGENT! really low performance. A related question...
On Tue, Feb 18, 2003 at 02:36:21AM +0200, Tzafrir Cohen wrote: Replace your faulty hardware. It's cheap. Or spend a bit more, and get a case without those cooling problems. Yes, of course it's cheap. 'S'why I bought it. And I'll buy a new machine eventually, at a similarly low price point, because I'm cheap. Point is, *most* PC hardware is cheap. Because it, you know, costs less that way. Adam
Re: URGENT! really low performance. A related question...
There's also the fact that your cheapo-cheapo PC has one processor and has to do all the I/O for itself. The PC's processor spends 90% of its time handli ng I/O, formatting data for some port or the screen, running a driver program , polling and waiting for a response from some peripheral and so on. I don't pretend that my Athlon-based system's overall design is anything like as good as the S370/168 I used to use so many years ago, but fair go. My PC has the on-board EIDE interfaces (EIDE{0,1}) and additionally, an add-on PCI card providing two more EIDE ports. At one time I had three drives in the box on each of three interfaces. I was running DD to do a disk-to-disk copy, and while it was running, I used hdparm to test the speed of the third drive. It tested at 35 Mbytes/sec, pretty close to its rated speed. My graphics card has its own processor, and if I add a SCSI card that too offloads a decent amount of work. Devices use interrupts to signal the end of operations, and many use DMA devices to provide direct access to system RAM. While IBM's mainframes do all these things better (except compute), if an IA32 system uses more than about five percent of the CPU power to drive devices, the OS is broken. On Linux, we use (mostly) the same software you do. It does not need lots of CPU power to drive most I/O devices. -- Cheers John Summerfield Microsoft's most solid OS: http://www.geocities.com/rcwoolley/ Note: mail delivered to me is deemed to be intended for me, for my disposition. == If you don't like being told you're wrong, be right!
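[Editor's note: John's dd-plus-hdparm measurement can be approximated in code. The sketch below (hypothetical, not from the thread) times a sequential block-wise read, roughly what `hdparm -t` reports for a device; it demos against a scratch file, since reproducing his 35 Mbytes/sec figure would require pointing it at a real disk node.]

```python
import os
import tempfile
import time

def read_throughput(path, block_size=1 << 20):
    """Read a file sequentially in fixed-size blocks and return MB/s,
    in the spirit of hdparm's sequential read timing."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)

# Demo against a scratch file; a real drive test would target the device
# node itself (e.g. /dev/hda) and take steps to defeat the page cache,
# as hdparm does, or the cache will inflate the number.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * 1024 * 1024))   # 8 MB of data
rate = read_throughput(tmp.name)
os.unlink(tmp.name)
print(f"sequential read: {rate:.0f} MB/s")
```

Run against a scratch file this mostly measures the page cache; run against a raw device while a `dd` copy saturates another interface, it shows the same thing John observed, that per-device bandwidth survives concurrent I/O on separate channels.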
Re: URGENT! really low performance. A related question...
But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed? Perhaps someone with some hardware knowledge could explain it? Why can't the clock be cranked up to be the same speed as the latest Pentium? Most of us mainframe guys understand its inherent advantages, but as someone has already commented, it often just doesn't wash with management if a cheap Pentium outperforms a million-dollar mainframe. Regards. Mark Darvodelsky Data Centre - Mainframe Facilities Royal SunAlliance Australia Phone: +61-2-99789081 Email: [EMAIL PROTECTED] CAUTION - This message is intended for the addressee named above It may contain privileged or confidential information. If you are not the intended recipient of this message you must not use, copy, distribute or disclose it to anyone other than the addressee. If you have received this message in error please return the message to the sender by replying to it and then delete the message from your computer. Internet emails are not necessarily secure. Royal SunAlliance does not accept responsibility for changes made to this message after it was sent
Re: URGENT! really low performance. A related question...
On Sun, 2003-02-16 at 17:32, Mark Darvodelsky wrote: But the question still does not appear to be answered - why does the mainframe have to run at such a low clock speed? Perhaps someone with some hardware knowledge could explain it? Why can't the clock be cranked up to be the same speed as the latest Pentium? This has everything to do with heat dissipation and the media capacity of the processors themselves. Does IBM have the capability to make processors that will run faster? Yes. Will they? Not without due overcompensation. Look at the history: the AT was running at 6 MHz while every other AT clone manufacturer was running at 8 and 12 - the same went for all of the IBM x86 boxes made. Most of us mainframe guys understand its inherent advantages, but as someone has already commented, it often just doesn't wash with management if a cheap Pentium outperforms a million-dollar mainframe. Convert your favorite CICS app to the Windows world, connect 25000 concurrent user sessions and watch the clock - then come back and tell us how long the Intel box(ES) stayed alive under that realistic load. It boils down to this: at the end of the day the mainframe is still running when the Intel units have had to be rebooted multiple times. This goes without stating that the number of Intel machines it would take to replace that big chunk of iron would cost just as much in hardware and require at least 4 times the support layer to keep the monster alive. TCO rules here. All of this from someone that has spent most of the last 20 years on micro and mid-range machines; interesting perspective, huh?
Re: URGENT! really low performance. A related question...
Thank You Steven. 35 yrs here, and I have worked on too many platforms to dismiss any of them for their suited needs--but management has been duped for so long, and they have been blinded by their training and have not--most--worked in any other sector of the industry, nor do they have REAL experience. Most are $ minded and then again are not... If you catch the meaning, and I gather you will.
Re: URGENT! really low performance. A related question...
On Sun, 2003-02-16 at 19:17, Ronald Wells wrote: Thank You Steven. 35 yrs here, and I have worked on too many platforms to dismiss any of them for their suited needs--but management has been duped for so long, and they have been blinded by their training and have not--most--worked in any other sector of the industry, nor do they have REAL experience. Most are $ minded and then again are not... If you catch the meaning, and I gather you will. We used to call that tripping over dollars to pick up dimes.
Re: URGENT! really low performance. A related question...
A related question for IBM about all this: I can go to WalMart and buy a 2GHz processor for under $500. Or I can spend hundreds of thousands of dollars for a mainframe with several processors... But why are the mainframe processors so bloody slow??? If Intel can push up the speed, from 700MHz only about 3 years ago to 2000MHz, is there any reason why a 9672 or z-series processor has to be sooo slow? Speeding up the mainframe machines to at least match the toy machines would really make our jobs a lot easier when we're trying to sell the mainframe concept. And maybe we wouldn't need a five-engine box if the engines shuffled along at a bit faster pace... What's a CPU cost for a z-series? And it can't keep up with the toy on my desk? Something's not quite right with that concept...
Robert P. Nix  internet: [EMAIL PROTECTED]  Mayo Clinic  phone: 507-284-0844  RO-CE-8-857  page: 507-270-1182  200 First St. SW  Rochester, MN 55905  Codito, Ergo Sum  In theory, theory and practice are the same, but in practice, theory and practice are different.
-Original Message-
From: Alex Leyva [SMTP:[EMAIL PROTECTED]]
Sent: Friday, February 14, 2003 9:41 AM
To: [EMAIL PROTECTED]
Subject: Re: URGENT! really low performance.
I've heard about vector facilities; I really don't know much about them, only that they are designed to provide help with arithmetic operations and things like that. Maybe that could help with CPU-bound tasks? On the other hand, the idea of clustering mainframes with Intels could help with those tasks, or maybe it's only my brain telling me that I need to sleep :-( On Fri, 14 Feb 2003, Joseph Temple wrote: Robert Nix wrote: But, if one image starts doing compiles or compression of large quantities of data, or any other CPU bound task, everyone will suffer. Actually you have a choice. If the compiles, etc.
are relegated to a compute server you can make it suffer rather than everyone else; also, if you cap the CPU given the guests you can minimize the intensity of the suffering when CPU-heavy tasks occur, but it will go on for a longer period of time. It's a matter of priorities and how you distribute work among virtual machines. The beauty of Linux is that the compute-intense server can be a virtual or real machine, but it is still Linux. In the past such a scheme using real machines would split the work between z/OS and Windows, which is a lot more complex. We need to start thinking about things like Grids of virtual and real servers. Joe Temple [EMAIL PROTECTED] 845-435-6301
Nix, Robert P. <Nix.Robert@mayo.edu> wrote on 02/13/2003 04:01 PM:
Mainframes do I/O exceptionally well, but when it comes to compute-bound tasks, they do very poorly. If you think about a tar operation, the compression is a fairly compute-intensive operation. We're running a 9672-R56 w/ one IFL. During our initial trial, we found the IFL to be about the same as a 300 or 400MHz PC for compute-bound tasks. The strength of the mainframe comes in for burst-type execution and I/O throughput. Things like multiple web servers running in individual Linux images. File serving. Anything where: A) The CPU isn't expected to be taxed a great deal, and B) the CPU isn't going to be utilized for long periods of time. This allows the CPU to be shared among a larger quantity of images, giving all of them the impression of a dedicated box. But, if one image starts doing compiles or compression of large quantities of data, or any other CPU bound task, everyone will suffer.
Robert P. Nix  internet: [EMAIL PROTECTED]  Mayo Clinic  phone: 507-284-0844  RO-CE-8-857  page: 507-270-1182  200 First St. SW  Rochester, MN 55905  Codito, Ergo Sum  In theory, theory and practice are the same, but in practice, theory and practice are different.
-Original Message-
From: Alex Leyva [SMTP:[EMAIL PROTECTED]]
Sent: Thursday, February 13, 2003 3:10 PM
To: [EMAIL PROTECTED]
Subject: URGENT! really low performance.
Hi all, I have a problem. We have a z800; the configuration is:
1 CP (80 MIPS)
1 IFL
8 GB storage
3 partitions: OS/390 2.6, OS/390 2.6, z/VM 4.3
840 GB (Shark)
The CP is dedicated.
Re: URGENT! really low performance. A related question...
Speeding up the mainframe machines to at least match the toy machines would really make our jobs a lot easier when we're trying to sell the mainframe concept. I think you're trying to sell the wrong thing. The first time I hit this was back in the mid-1970s. We'd designed a mainframe IMS database to run FORTRAN transactions against time series economic data for financial modelling, and justified a 370/158 as the host. Our capacity plan gave us a staged growth pattern and upgrades were planned. All of a sudden our curve died and CPU usage plummeted - so we convened a meeting. It turned out they'd bought a raft of Hewlett-Packard technical calculators and were running their what-ifs on those. When they got close, they'd go back to the mainframe. They could each load their personal 3KB or so of data and play for hours. These were the early LED display devices, so you HAD to have the mains power plugged in! It's always been the way. Mainframes have NEVER stacked up as cheap sources of compute power, and were only used for that purpose when the problem was too big for any other approach. You have to concentrate on the mainframe's unique selling propositions. In the Linux world, for instance, the speed with which a new server can be created and the ease with which it can be managed. Show that as a cost-of-ownership advantage, and the comparatively huge extra cost of mainframe MIPS is so small as an absolute quantity that it almost gets lost in the rounding errors. But get yourself cornered into instructions-per-transaction or some other wholly artificial benchmark and you've lost before you begin. -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: URGENT! really low performance. A related question...
There's also the fact that your cheapo-cheapo PC has one processor and has to do all the I/O for itself. The PC's processor spends 90% of its time handling I/O: formatting data for some port or the screen, running a driver program, polling and waiting for a response from some peripheral, and so on. Mainframes hand the I/O off to the I/O subsystem processor, which hands it off to the channel processors (last I heard, an ESCON channel used the same processor chip as the Macintosh, but that's been a while), which hand it off to the controller for the device. You've got a lot of processors working for you, and everything's cached along the way, so you may not even be doing any real I/O half the time.

The point is, the central processor has very little to do with any I/O processing. Someone once told me that my 9672-R36, with three processors at 117 MIPS each, should, with all the I/O processors, actually be rated at around 30,000 MIPS. But that 30,000 is for I/O only; the other 351 MIPS are for computing only. Use the right tool for the job at hand. Don't try to use a pair of pliers as a wrench.

They say there are three signs of stress in your life: you eat too much junk food, you drive too fast, and you veg out in front of the TV. Who are they kidding? That sounds like a perfect day to me!

Gordon Wolfe, Ph.D. (425)865-5940
VM Linux Servers and Storage, The Boeing Company

--
From: Phil Payne
Reply To: Linux on 390 Port
Sent: Friday, February 14, 2003 7:59 AM
To: [EMAIL PROTECTED]
Subject: Re: URGENT! really low performance. A related question...

> Speeding up the mainframe machines to at least match the toy machines
> would really make our jobs a lot easier when we're trying to sell the
> mainframe concept.

I think you're trying to sell the wrong thing.

The first time I hit this was back in the mid-1970s. We'd designed a mainframe IMS database to run FORTRAN transactions against time-series economic data for financial modelling, and justified a 370/158 as the host. Our capacity plan gave us a staged growth pattern and upgrades were planned. All of a sudden our curve died and CPU usage plummeted, so we convened a meeting. It turned out they'd bought a raft of Hewlett-Packard technical calculators and were running their what-ifs on those. When they got close, they'd go back to the mainframe. They could each load their personal 3KB or so of data and play for hours. These were the early LED display devices, so you HAD to have the mains power plugged in!

It's always been the way. Mainframes have NEVER stacked up as cheap sources of compute power, and were only used for that purpose when the problem was too big for any other approach. You have to concentrate on the mainframe's unique selling propositions. In the Linux world, for instance, the speed with which a new server can be created and the ease with which it can be managed. Show that as a cost-of-ownership advantage, and the comparatively huge extra cost of mainframe MIPS is so small as an absolute quantity that it almost gets lost in the rounding errors. But get yourself cornered into instructions-per-transaction or some other wholly artificial benchmark and you've lost before you begin.

--
Phil Payne
http://www.isham-research.com
+44 7785 302 803
+49 173 6242039
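Gordon's picture of the channel subsystem (the central processor hands an I/O request off and keeps computing while other processors complete the transfer) can be sketched as a rough analogy in Python, with a thread pool standing in for the channel processors. This is only an illustration of the overlap idea, not of how real channel programs or the S/390 I/O subsystem work; the function and names here are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def simulated_write(block):
    # Stand-in for a device write that a "channel processor" handles
    # on the CPU's behalf.
    return f"wrote block {block}"

def run_with_offload(blocks):
    # The "CPU" hands each I/O request to the pool (the "channel
    # subsystem") and keeps computing while the transfers proceed.
    with ThreadPoolExecutor(max_workers=4) as channels:
        pending = [channels.submit(simulated_write, b) for b in blocks]
        cpu_work = sum(i * i for i in range(1000))  # overlapped computation
        io_done = [f.result() for f in pending]     # akin to I/O completion interrupts
    return io_done, cpu_work

io_done, cpu_work = run_with_offload(range(3))
```

The point of the analogy is only that the "CPU" line runs concurrently with the submitted I/O, which is why Gordon's 90%-on-I/O figure for a single-processor PC doesn't apply to an architecture that delegates the transfers.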
Re: URGENT! really low performance. A related question...
Thanks for this, Gordon. And thanks to each person participating in this discussion. I appreciate the GPL-ish open-source approach everyone here takes in dispensing knowledge.

I've known about the differences between s390 arch and pc arch task-wise (there is a RedPiece or paper on the subject, I believe), and have always been a little leery when in sell mode, because I felt that a rack of blade servers could handle graphics rendering and the like better than the s390. Couple that with a belief that web apps are generally headed for more and more complex visuals, and you can picture my worry. But I have no problem touting the strong points of the s390/VM/Linux combo that others have already mentioned; it's just that in the back of my mind I wonder if we will still be competitive 3 or 5 years down the road.

Anyway, thanks for the intelligent talk, lively discussion and brilliant summarized synopses in the realm of s390 vs. others.

Matt Lashley
Idaho State Controller's Office

From: Wolfe, Gordon W <gordon.w.wolfe@boeing.com>
Sent by: Linux on 390 Port [EMAIL PROTECTED]
Date: 02/14/2003 10:16 AM
To: [EMAIL PROTECTED]
Subject: Re: URGENT! really low performance. A related question...
Please respond to Linux on 390 Port

> There's also the fact that your cheapo-cheapo PC has one processor and
> has to do all the I/O for itself. [snip]
Re: URGENT! really low performance. A related question...
> I've known about the differences between s390 arch and pc arch task wise
> (there is a RedPiece or paper on the subject I believe), and have always
> been a little leery when in sell mode because I felt that a rack of blade
> servers could handle graphics rendering and the like better than the s390.

And you are absolutely correct on that feeling. That's one of the reasons why grid computing is so important: the idea of applications residing only on one platform is one that limits a lot of interesting applications.

> Couple that with a belief that web apps are generally headed for more and
> more complex visuals and you can picture my worry.

See above.

> But, I have no problem touting the strong points of the s390/VM/Linux
> combo that others have already mentioned, it's just that in the back of my
> mind I wonder if we will still be competitive 3 or 5 years down the road.

For rendering, it's already a lost cause. 390 is not in that market; price per MIPS is just not good enough. However, focusing the application on right tool, right job (raw MIPS on cheap Intel; storage management and I/O optimization on 390 or similar environments) and writing deliberately for that environment is where we have something to say. Best of both worlds. That's the story we have to tell.

-- db
Re: URGENT! really low performance. A related question...
When IBM first approached us about Linux/390 and an IFL, one of the first applications mentioned was print serving. Should be a fairly I/O-bound task with lots of free time, right? Well, we found out that on our print servers, serving our 15,000 printers, there's very little idle time to be had, making print serving a completely compute-bound task. So the comparison between the current print servers and Linux/390 was a disaster, and the Unix people here never went any further. The whole trial died on the vine, at least for them, right at the first print server test.

In any case, my point is: why do the mainframe CPUs *have* to be so slow? Why can't they be beefed up to the point that they're at least ballpark competitive, so that things like our trial don't happen? Why can't they be beefed up so that instead of having to buy a five-way processor to do our work, we could get a two- or three-way, and spend less cash? If the separation of CPU and I/O computing is so great, then wouldn't it just be greater if the CPU portion could keep up with a PC? Or even see the PC's tail at the end of the race?

Is separation of CPU and I/O processing really that important, when the PC toys can do both computing and I/O in their single CPU, faster than we can on our separated computing and I/O CPUs? I'm having a really hard time selling the concept to people here. You say that the PC spends 90% of its CPU time on I/O tasks. If that's really true, then we're really in trouble, because it spends only 10% of its CPU power on the task at hand, and still has double the throughput of a single-IFL mainframe when both are dedicated to serving printers. And that is the statistic that we're trying to fight against here. I know the whole "I/O is separated" story; but I'm just tired of being laughed at by the Intel-minded people in the Unix and NT world here.

Robert P. Nix        internet: [EMAIL PROTECTED]
Mayo Clinic          phone: 507-284-0844
RO-CE-8-857          page: 507-270-1182
200 First St. SW
Rochester, MN 55905

"Codito, Ergo Sum" -- In theory, theory and practice are the same, but in practice, theory and practice are different.

-----Original Message-----
From: Wolfe, Gordon W [SMTP:[EMAIL PROTECTED]]
Sent: Friday, February 14, 2003 11:16 AM
To: [EMAIL PROTECTED]
Subject: Re: URGENT! really low performance. A related question...

> There's also the fact that your cheapo-cheapo PC has one processor and
> has to do all the I/O for itself. [snip]
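Robert's print-server surprise (a workload everyone assumed was I/O-bound turning out compute-bound) is the kind of thing worth measuring before a platform trial. A crude way to check on a Unix-like system is to compare a process's CPU time against wall-clock time over a run. This is a hypothetical helper written for illustration (the `resource` module is Unix-only, and the thresholds are rules of thumb, not from the thread):

```python
import time
import resource  # Unix-only: per-process CPU accounting

def cpu_fraction(fn):
    """Run fn() and return the share of wall time spent on CPU.
    Near 1.0 suggests compute-bound; near 0.0 suggests I/O- or wait-bound."""
    t0 = time.perf_counter()
    r0 = resource.getrusage(resource.RUSAGE_SELF)
    fn()
    r1 = resource.getrusage(resource.RUSAGE_SELF)
    wall = time.perf_counter() - t0
    # User + system CPU consumed during the run.
    cpu = (r1.ru_utime - r0.ru_utime) + (r1.ru_stime - r0.ru_stime)
    return cpu / wall if wall > 0 else 0.0
```

For example, a pure computation loop scores near 1.0, while a sleep (standing in for waiting on a printer) scores near 0.0; a real print-spooling workload landing near 1.0 would reproduce Robert's finding that there is no idle time for an I/O-oriented machine to exploit.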
Re: URGENT! really low performance. A related question...
Robert, you are buying into the thought that the PC is man's best friend in the business world. Let me ask you: how many times during the day do your PC users reboot? Then tell us how many times you IPLed your S/390 system in the last year because it crashed and burned, not because of scheduled changes. And then tell us how many users your S/390 supports daily, without a single complaint. No blue screens of death.

Sorry, the PC folks just don't get it, and until they experience a live, well-run mainframe in person, they won't get it; in their minds the PC runs circles around the mainframe. And in some cases it does! But for the other 99.99% of the work, the mainframe is the Energizer Bunny!

Just had to get that off my chest, since it is Friday.

Ken Dreger

At 03:32 PM 2/14/2003 -0600, you wrote:
> When IBM first approached us about Linux/390 and an IFL, one of the first
> applications mentioned was print serving. Should be a fairly I/O bound
> task with lots of free time, right? [snip]