Re: Perpetuating Myths about the zSeries
On Thu, 6 Nov 2003, Jim Sibley wrote:
> Well, IBM has painted the zSeries black, put a cool copper reflective strip on it, and changed the door locks! The external cables are orange for ESCON and bright yellow for FICON. The only problem is that they now look exactly like the new pSeries boxes!
I was at an IBM show a few months ago, and when I lamented the lack of a zBox I was assured, "They look just like that pBox there." -- Cheers, John. Join the Linux Support by Small Businesses list at http://mail.computerdatasafe.com.au/mailman/listinfo/lssb Copyright John Summerfield. Reproduction prohibited.
Re: Perpetuating Myths about the zSeries
What I loved about War Games was that the hacker's computer was somehow autodialling with an ACOUSTIC-coupled modem. I distinctly remember seeing the handset of the phone in the acoustic modem. Oh, and the phone was PULSE dialled, not TONE dialled, to boot. -- John McKown Senior Systems Programmer UICI Insurance Center Applications Solutions Team +1.817.255.3225 This message (including any attachments) contains confidential information intended for a specific individual and purpose, and its content is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this transmission, or taking any action based on it, is strictly prohibited.
-Original Message- From: Edwin Handschuh [mailto:[EMAIL PROTECTED] Sent: Wednesday, November 05, 2003 5:18 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries ---SNIP--- I know this is a bit off track, but it does remind me of the movie War Games. ---SNIP---
Re: Perpetuating Myths about the zSeries
A number of years ago I worked on a project that involved donating computers and software to police departments, so we had a lot of press coverage. The computer was an old mini that was one of the first ones without a front panel. Just a plain beige metal box with a couple of big clunky disk drives attached. No tape drives. We did backups disk-to-disk. The TV crews were beside themselves that there was nothing interesting to show. One crew came into our office early in the process, and ended up filming a couple of pieces that had nothing to do with the project, but had blinking lights and spinning tapes. For a while we considered replacing the plain metal panels on the mini with smoked plexiglass in chrome frames to show the diagnostic lights inside, but nobody would go for the idea. I think IBM is aware of this to a degree. The new CMOS boxes are a lot cooler looking than, say, a 4381, but maybe something like a lava-lamp-style system activity display on an LCD panel on the front would be nice? Actually, there's an X app that displays Linux system activity as lava-lamp-like blobs, so this wouldn't be that hard to do.
-Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of Edwin Handschuh Sent: Wednesday, November 05, 2003 6:18 PM To: [EMAIL PROTECTED] Subject: Re: [LINUX-390] Perpetuating Myths about the zSeries ---SNIP--- front; all *real* high-powered computers have glowing bubble columns.
Re: Perpetuating Myths about the zSeries
John: Touché. I somehow missed that one. I do remember the acoustic coupler, but I wasn't quick enough to pick up on the tone dial and put two and two together. It's all part of the Hollywood fantasy. As one great actor said: "Movies is magic." ETH ---SNIP--- autodialling with an ACOUSTIC-coupled modem. I distinctly remember seeing the handset of the phone in the acoustic modem. Oh, and the phone was PULSE dialled, not TONE dialled, to boot.
Re: Perpetuating Myths about the zSeries
Wasn't there someone who twiddled 'doom' so that the 'monsters' were processes and used it to manage the system? I vaguely recall hearing about that.
Hall, Ken (IDS ECCS), sent by Linux on 390 Port, 11/06/2003 07:16 AM, Subject: Re: Perpetuating Myths about the zSeries: ---SNIP--- front; all *real* high-powered computers have glowing bubble columns.
Re: Perpetuating Myths about the zSeries
Yes, I just ran across this a couple of weeks back. Here be the link: http://www.cs.unm.edu/~dlchao/flake/doom/
-Original Message- From: James Melin [mailto:[EMAIL PROTECTED] Sent: Thursday, November 06, 2003 8:46 AM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries Wasn't there someone who twiddled 'doom' so that the 'monsters' were processes and used it to manage the system? I vaguely recall hearing about that. ---SNIP--- front; all *real* high-powered computers have glowing bubble columns.
Re: Perpetuating Myths about the zSeries
On Thu, 2003-11-06 at 07:44, McKown, John wrote:
> What I loved about war games was that the hacker's computer was somehow autodialling with a ACOUSTIC coupled modem. I distinctly remember seeing the handset of the phone in the acoustic modem. Oh, and the phone was PULSE dialled, not TONE dialed, to boot.
Just to make sure the point is clear: my first modem *could* do autodialling, without tone capability. It obviously was not acoustic-coupled, but it *did* do pulse dialling. Still does, in fact. It's right over here: a Hayes Micromodem 300. It's a Hayes modem that long predates the Hayes command set. Adam
Re: Perpetuating Myths about the zSeries
On Thu, 2003-11-06 at 08:45, James Melin wrote:
> Wasn't there someone who twiddled 'doom' so that the 'monsters' were processes and used it to manage the system? I vaguely recall hearing about that.
Yeah. It makes process killing pretty splendidly interactive. Adam
Re: Perpetuating Myths about the zSeries
For the curious - http://www.cs.unm.edu/~dlchao/flake/doom/ f
At 09:10 AM 11/6/2003 -0600, you wrote:
> On Thu, 2003-11-06 at 08:45, James Melin wrote: Wasnt there somone who twiddle 'doom' so that the 'monsters' were processes and used it to manage the system? I vaguely recall hearing about that.
> Yeah. It makes process killing pretty splendidly interactive. Adam
Cole Software LLC www.colesoft.com Phone: 540.456.6164 Fax: 540.456.6658 Email: [EMAIL PROTECTED]
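The hack at that link is psDooM, which maps each running process to a monster in the game; "shooting" a monster sends its process a signal. The sketch below shows only the underlying process-to-signal mechanism, not psDooM's actual code; it assumes a POSIX system with a `sleep` binary on the PATH.

```python
# Rough sketch of the mechanism behind psDooM: treat a process as a
# "monster" and "shoot" it by sending a signal. The game part is
# omitted; this is NOT psDooM's code, just the idea underneath it.
# Assumes POSIX with a `sleep` binary on the PATH.
import os
import signal
import subprocess

# Spawn a harmless child process to stand in for a monster.
monster = subprocess.Popen(["sleep", "60"])
print("monster spawned, pid", monster.pid)

# Signal 0 delivers nothing but verifies the process still exists --
# useful for keeping a monster list in sync with the process table.
os.kill(monster.pid, 0)

# "Shooting" the monster: send SIGTERM, then reap the child.
os.kill(monster.pid, signal.SIGTERM)
monster.wait()
print("monster slain")
```

On POSIX, `monster.returncode` comes back as the negative of the signal number, so a wrapper can tell a slain monster from one that exited on its own.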
Re: Perpetuating Myths about the zSeries
On Wednesday, 11/05/2003 at 04:53 CST, Adam Thornton [EMAIL PROTECTED] wrote:
> Finally, everyone *knows* it's not a high-performance machine unless it's liquid-cooled. Get with the program, guys!
Go check your z990 specs! It has on-board refrigeration with fan back-up. :-) (The z900 may have it, too... I don't remember.) Alan Altmark Sr. Software Engineer IBM z/VM Development
Perpetuating Myths about the zSeries
Ref: Your note of 6 November 2003, 12:18:28 -0500 (attached) The z900 does also, and the G6. I'm not sure about the G5. The z800 is all air-cooled. (My lawnmower is liquid-cooled. Does that mean it is high performance too? ;-) Bruce Hayden IBM Global Services
- Note follows -- Date: 6 November 2003, 12:18:28 -0500 From: Alan Altmark Subject: Re: Perpetuating Myths about the zSeries ---SNIP---
Re: Perpetuating Myths about the zSeries
On Thursday, 11/06/2003 at 12:22 EST, Bruce Hayden [EMAIL PROTECTED] wrote: The z900 does also, and the G6. I'm not sure about the G5.. The z800 is all air cooled. (My lawnmower is liquid cooled. Does that mean it is high performance too? ;-) I've seen your lawnmower. That would be a big No. Alan Altmark Sr. Software Engineer IBM z/VM Development
Re: Perpetuating Myths about the zSeries
Well, IBM has painted the zSeries black, put a cool copper reflective strip on it, and changed the door locks! The external cables are orange for ESCON and bright yellow for FICON. The only problem is that they now look exactly like the new pSeries boxes! = Jim Sibley Implementor of Linux on zSeries in the beautiful Silicon Valley "Computers are useless. They can only give answers." Pablo Picasso __ Do you Yahoo!? Protect your identity with Yahoo! Mail AddressGuard http://antispam.yahoo.com/whatsnewfree
Re: Perpetuating Myths about the zSeries
Okay, now add a slave display from the Service Element to the front door, and put up a cool screensaver, such as [EMAIL PROTECTED], and you've got something. Besides, it would give the operator/technician something to look at when things go wrong, so he doesn't have to open the back door. Especially important if he doesn't have one of those new keys for the new locks.
-Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of Jim Sibley Sent: Thursday, November 06, 2003 1:53 PM To: [EMAIL PROTECTED] Subject: Re: [LINUX-390] Perpetuating Myths about the zSeries ---SNIP---
Re: Perpetuating Myths about the zSeries
Hello (again) from Gregg C Levine. Hmm.. What I loved about the film was the kid's computer. It was supposed to be an S-100 based unit, one of the IMSAI jobs. And naturally the modem as well. Come to think of it, that annoying habit of dialing for modems started around then. But you're right, John. --- Gregg C Levine [EMAIL PROTECTED] "The Force will be with you...Always." Obi-Wan Kenobi "Use the Force, Luke." Obi-Wan Kenobi (This company dedicates this E-Mail to General Obi-Wan Kenobi) (This company dedicates this E-Mail to Master Yoda)
-Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John Sent: Thursday, November 06, 2003 8:44 AM To: [EMAIL PROTECTED] Subject: Re: [LINUX-390] Perpetuating Myths about the zSeries ---SNIP---
Re: Perpetuating Myths about the zSeries
On Thu, Nov 06, 2003 at 02:00:27PM -0500, Gregg C Levine wrote:
> Hmm.. What I loved about the film, was the kid's computer. It was supposed to be an S-100 based unit, one of the IMSAI jobs. And naturally the modem as well.
Smile when you say that, pardner... my first computer (which I still own) started out in life as an IMSAI (though without the front panel; even though it was problematic, I still regret the decision not to get one), and I own a Hayes Micromodem 100 board for it. I believe that you could get an acoustic coupler to plug into that. The Micromodem 100 would do pulse (but not tone) dialing, as well, if it were hooked up to the phone line directly.
Re: Perpetuating Myths about the zSeries
On Thu, 2003-11-06 at 19:53, Jim Sibley wrote: The only problem is that they now look exactly like the new pSeries boxes! A few tri-color fans and some neon tubes should fit in any budget for a new zSeries machine. Anyone ready for a web site devoted to zSeries casemodding then? ;-) The first S/390 casemod that I know of was done by Gary who put a see-through side on his P/390 because he was told not to operate it without a cover. Rob
Re: Perpetuating Myths about the zSeries
Hello from Gregg C Levine. And why wouldn't I? Thank you for bringing that up. --- Gregg C Levine [EMAIL PROTECTED]
-Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Jay Maynard Sent: Thursday, November 06, 2003 2:03 PM To: [EMAIL PROTECTED] Subject: Re: [LINUX-390] Perpetuating Myths about the zSeries ---SNIP---
Re: Perpetuating Myths about the zSeries
> Finally, everyone *knows* it's not a high-performance machine unless it's liquid-cooled. Get with the program, guys!
> Go check your z990 specs! It has on-board refrigeration with fan back-up. :-) (z900 may have it, too... I don't remember.)
Yeah, but you don't get a neat clear panel so you can *see* the PFCL splash around, and there's no place to put filters, oxygenation equipment, and fish in the system. (No joke -- there is a special model of the Cray J90 with a window into the PFCL processor coolant system that allows using the visible reservoir as a fish tank. It has special filters and oxygenation equipment to keep the fish aerated. It was manufactured for a Japanese company.) HW RPQ, anyone? -- db
Re: Perpetuating Myths about the zSeries
How about George Madl's new mower? Given his grief with it, it *better* perform, as well as make julienne fries... 8-) -- db David Boyes Sine Nomine Associates
> The z900 does also, and the G6. I'm not sure about the G5.. The z800 is all air cooled. (My lawnmower is liquid cooled. Does that mean it is high performance too? ;-)
> I've seen your lawnmower. That would be a big No. Alan Altmark Sr. Software Engineer IBM z/VM Development
Re: Perpetuating Myths about the zSeries
On Thu, 2003-11-06 at 13:03, Rob van der Heij wrote: A few tri-color fans and some neon tubes should fit in any budget for a new zSeries machine. Anyone ready for a web site devoted to zSeries casemodding then? ;-) The first S/390 casemod that I know of was done by Gary who put a see-through side on his P/390 because he was told not to operate it without a cover. If I didn't *like* my PC Server 325's case so much. (of course, that *is* a casemod. Just not an exciting one) Adam
Re: Perpetuating Myths about the zSeries
On Thu, 2003-11-06 at 13:57, Rob van der Heij wrote: [Yelling] Adam! David is after your cough syrup again! Too late, I think. Adam
Re: Perpetuating Myths about the zSeries
I have a gutted G-3 in my garage. Anyone want the CF cards? Or the power supply parts? I use it as a toolbox, so I really LIKE the old door locks. Means some schmoe won't be able to figure out how to get into it if they break into the garage.
Hall, Ken (IDS ECCS), sent by Linux on 390 Port, 11/06/2003 12:57 PM, Subject: Re: Perpetuating Myths about the zSeries: Okay, now add a slave display from the Service Element to the front door, and put up a cool screensaver, such as [EMAIL PROTECTED], and you've got something. ---SNIP---
Re: Perpetuating Myths about the zSeries
Hello (again) from Gregg C Levine. It's a thought. Yes, sounds like an interesting idea. --- Gregg C Levine [EMAIL PROTECTED]
-Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of James Melin Sent: Thursday, November 06, 2003 3:26 PM To: [EMAIL PROTECTED] Subject: Re: [LINUX-390] Perpetuating Myths about the zSeries ---SNIP---
Re: Perpetuating Myths about the zSeries
Well, I think we're all pretty much agreed that it's the *perception* of poor CPU performance, rather than the reality, that's the big Linux/zSeries problem. Therefore, may I present the following humble suggestion, aimed at correcting this misperception among the, well, young and impressionable?

The *next* machine in the line should be named the zR0XxX0R and have a model number of 1337. IBM needs to replace one of the side panels of the zSeries frame with a transparent plexi window, and fill up the inside with cold-cathode fluorescent lights in a variety of different nauseating colors. Probably those should have dimmer switches wired to a mike so that your zSeries can flash in time to the beat. Also, round all the corners off. That makes it look cooler. And add some bubble columns to the front; all *real* high-powered computers have glowing bubble columns.

Adding some overclocking settings to the HMC would help its image a bunch too. They don't have to actually *do* anything; just add a bunch of poorly- (or for more eliteness, completely un-) documented options, almost all of which result in the machine failing to IPL or halting in a disabled wait state shortly into the IPL.

Finally, everyone *knows* it's not a high-performance machine unless it's liquid-cooled. Get with the program, guys! Adam
Re: Perpetuating Myths about the zSeries
Adam: I like where you're going with this idea... People's perceptions of a computer are completely out of line with reality. I know this is a bit off track, but it does remind me of the movie War Games. I distinctly remember a chubby guy (computer geek) sticking his head in a 3420 tape drive (vacuum door opened) and saying "I've checked the computer and I can't find the bug anywhere." Well, maybe if you got your head out of the tape drive and logged on to the machine you might stand a chance! Clearly someone in Hollywood thought the 3420 looked more like a computer than the actual machine (not that the WOPR really looked like one either). Come to think of it, didn't the WOPR have a lot of blinking lights and possibly bubble tubes? ETH PS: Chubby did have a pocket protector with pencils/pens in it... clearly another stereotype of us computer geeks! ---SNIP--- front; all *real* high-powered computers have glowing bubble columns.
Re: Perpetuating Myths about the zSeries
> Any idea what the effort would be to write agents that work on NT, SUN, HP, AIX, Apple, and more?
Yes, probably as well as you do. That doesn't change the fact that at least one reasonably common tool doesn't use SNMP at this time.
> The SNMP agents ARE FREE. Well supported. Cheap to utilize. And if you want additional free software, there are products that gather snmp data and display it - for free...
Irrelevant if the product collecting the data doesn't understand them.
> SNMP is a very accepted standard. The HOST mibs are very well defined and widely implemented. What fits the Linux model better, free snmp agents or vendor provided/Proprietary agents?
If the world was perfect, all tools WOULD use SNMP. The world isn't perfect, and all tools don't use SNMP. Do I think it's right? No. Have I recommended a better tool? Yes. Does the customer want to change tools? No. Do I get to change the tool? No. Was I asked to find out if there were RMF-PM agents for other platforms? Yes. If you'd like to continue this discussion off-list, I'd be happy to pursue it there. -- db
Re: Perpetuating Myths about the zSeries
> I don't understand the need for each and every tool to have its own client, when the same information is available -- usually with much less system overhead -- using SNMP. IMHO, monitors in the Linux space need to support technologies like SNMP. To not do so, and to insist on your own monitoring client, is to perpetuate the same vendor lock-in mechanisms that customers are switching to Open Source to avoid.
I agree. That doesn't change the fact that some poor sods have invested in tools that don't do SNMP and there is inertia to initiating such a change. If the tool they choose to use doesn't speak SNMP, then it doesn't much matter whether SNMP is better or not -- it's not in the picture.
> I don't know if an RMF collector for Linux is going to be able to tell you much more than you'd get via SNMP anyway. So those folks used to seeing anything and everything about their MVS systems in RMF could get very disappointed in the amount of information they get about their Linux systems.
See above. The RMF-PM collector exists for Linux on 390 and Z; there are tools that understand and accept such data directly (FCON/ESA is one of them). RMF-PM isn't perfect, and it's not the best available tool, but if the choice is between some data and no data, then at least some data is usually better than none. In consulting gigs, one can recommend, but not mandate -- if the customer has the tool they want to use, then that's what they're going to use, no matter how wonderful some other tool may be. I like (and recommended) ESALPS; the customer doesn't (for mostly non-technical reasons). At the end of the day, you have to work with what you have. -- db
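For readers outside the debate, the appeal of the HOST-RESOURCES MIB is that one parser can consume identical data from any platform's agent. A minimal sketch in Python: the OID names below are real HOST-RESOURCES-MIB (RFC 2790) objects, but the sample values are made up for illustration.

```python
# Minimal sketch: parse snmpwalk-style output of the HOST-RESOURCES
# MIB (RFC 2790) into a dict, the way a cross-platform monitor could
# consume the same data from NT, Solaris, HP-UX, AIX, or Linux
# agents. The sample text is hard-coded and hypothetical; a real
# monitor would capture live output from something like
#   snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrSystem

SAMPLE_WALK = """\
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (123456) 0:20:34.56
HOST-RESOURCES-MIB::hrSystemNumUsers.0 = Gauge32: 4
HOST-RESOURCES-MIB::hrSystemProcesses.0 = Gauge32: 131
HOST-RESOURCES-MIB::hrMemorySize.0 = INTEGER: 2097152 KBytes
"""

def parse_walk(text):
    """Turn 'OID = TYPE: value' lines into {oid: (type, value)}."""
    table = {}
    for line in text.splitlines():
        if " = " not in line:
            continue
        oid, rhs = line.split(" = ", 1)
        if ": " in rhs:
            vtype, value = rhs.split(": ", 1)
        else:
            # Some values carry no type prefix; keep the raw text.
            vtype, value = "RAW", rhs
        table[oid.strip()] = (vtype.strip(), value.strip())
    return table

metrics = parse_walk(SAMPLE_WALK)
# The same keys come back no matter which vendor's agent answered.
print(metrics["HOST-RESOURCES-MIB::hrSystemProcesses.0"])
```

The point of the sketch is the one David and Barton both concede: the keys and types are standardized, so the tool on top never needs platform-specific agent code.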
Re: Perpetuating Myths about the zSeries
Gee David, calling people names? So all of the customers that invested in a product that just can't do the job are poor sods? And now you want IBM to write a bunch of agents to protect these poor sod investments? Sorry, just could not resist... Any idea what the effort would be to write agents that work on NT, SUN, HP, AIX, Apple, and more? How many person years to implement something that has already been done via SNMP on all of these platforms? It would be unlike IBM to make the probably VERY VERY LARGE investment to re-invent something that is already there and totally free. The SNMP agents ARE FREE. Well supported. Cheap to utilize. And if you want additional free software, there are products that gather snmp data and display it - for free... The fact that ESALPS uses these agents was a result of a very expensive research project in writing agents. Writing agents is TRULY VERY expensive, they hard to maintain, do not keep up with new releases, and are rejected by customers when they see the cost of operations. The two tools that use RMFPM are Fcon/performance toolkit, and RMF. I'm unaware of any others. At one point i thought i would support it as well - but having one measurement facility for one platform and another on all of the other platforms did not make sense. The same thing applies to having Linux on VM provide Monitor records using the interface that Neale provided. Would make sense if there was only one platform - Linux under z/VM. But that is not today's environment nor tomorrow's. It is just not a valid long term strategy. SNMP is a very accepted standard. The HOST mibs are very well defined and widely implemented. What fits the Linux model better, free snmp agents or vendor provided/Proprietary agents? 
For those interested, our website does have additional information about this in the presentations area: http://velocitysoftware.com/present.html. Also, the IBM Redbooks Linux on IBM eServer zSeries and S/390: ISP/ASP Solutions (SG24-6299) and Linux on IBM eServer zSeries and S/390 Performance Measurement and Tuning (SG24-6926) show uses of SNMP-derived data. Use of RMFPM data is noticeably absent... From: David Boyes [EMAIL PROTECTED] -snip- If you can't measure it, I'm Just NOT interested!(tm) // Barton Robinson - CBW Internet: [EMAIL PROTECTED] Velocity Software, Inc., 196-D Castro Street, Mountain View, CA 94041 Mailing Address: P.O. Box 390640, Mountain View, CA 94039-0640 VM Performance Hotline: 650-964-8867 Fax: 650-964-9012 Web Page: WWW.VELOCITY-SOFTWARE.COM //
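Barton's point about the HOST MIBs being well defined is easy to demonstrate: the per-processor load most monitors display is a single table walk away. A minimal sketch, assuming the Net-SNMP command-line tools are installed and an agent is reachable (the hostname and community string are placeholders, not from the thread); the parsing helper is exercised against canned output:

```python
import re
import subprocess

def parse_hr_processor_load(snmpwalk_output):
    """Parse per-CPU load values (0-100) from snmpwalk output of
    HOST-RESOURCES-MIB::hrProcessorLoad, one line per processor."""
    loads = []
    for line in snmpwalk_output.splitlines():
        # Typical line: HOST-RESOURCES-MIB::hrProcessorLoad.768 = INTEGER: 12
        m = re.search(r"hrProcessorLoad\.\d+ = INTEGER: (\d+)", line)
        if m:
            loads.append(int(m.group(1)))
    return loads

def query_processor_load(host, community="public"):
    """Ask a remote agent for its CPU load via Net-SNMP's snmpwalk.
    Requires the net-snmp tools and an SNMP agent listening on 'host'."""
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", community, host,
         "HOST-RESOURCES-MIB::hrProcessorLoad"],
        capture_output=True, text=True, check=True).stdout
    return parse_hr_processor_load(out)

# Canned output, so the parser can be shown without a live agent:
sample = (
    "HOST-RESOURCES-MIB::hrProcessorLoad.768 = INTEGER: 12\n"
    "HOST-RESOURCES-MIB::hrProcessorLoad.769 = INTEGER: 87\n"
)
print(parse_hr_processor_load(sample))  # -> [12, 87]
```

hrProcessorLoad reports each processor's average load over the last minute — exactly the kind of platform-neutral data Barton argues monitors should consume instead of shipping their own agents.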
Re: Perpetuating Myths about the zSeries
YES, ABSOLUTELY! (TCO used to be called economy of scale.) TCO includes quite a bit more than that - investment lifetime (amortisation period), for one. -- Phil Payne http://www.isham-research.com +44 7785 302 803
Re: Perpetuating Myths about the zSeries
On Fri, 31 Oct 2003, David Boyes wrote: Advertising aside, that's nice (and I knew that), but there are people who don't have, and can't/won't get ESALPS, and/or are already using another performance tool on another platform that understands RMF-PM input. I don't understand the need for each and every tool to have its own client, when the same information is available -- usually with much less system overhead -- using SNMP. IMHO, monitors in the Linux space need to support technologies like SNMP. To not do so, and to insist on your own monitoring client, is to perpetuate the same vendor lock-in mechanisms that customers are switching to Open Source to avoid. I don't know if an RMF collector for Linux is going to be able to tell you much more than you'd get via SNMP anyway. So those folks used to seeing anything and everything about their MVS systems in RMF could get very disappointed in the amount of information they get about their Linux systems. Hoo-roo, Vic
AW: Perpetuating Myths about the zSeries
Well, CPU performance is not everything. It's always a question of the whole system (and application behavior). If you need pure CPU power, then the IBM zSeries would be the wrong choice. But most systems don't have a permanently high CPU load. If you need availability and scalability (on-demand features), then you're more on the right path with the zSeries. But for that Linux needs dynamic reconfiguration tools! (like dynamic memory and CPU attach and detach) On the other hand, my experience is that many of the projects look only at the cost side (with a very partial focus) and nothing else... and that's the fault of the sales people. They sell systems, and later on a very clever guy comes along with the consolidation idea. But in the meantime every little server is so special in its application behavior and SLAs that nobody will bear the cost of the consolidation. In theory consolidation is a big word; in practice it's a lot of work and cost-intensive, too. Clever people may think about a long-term strategy, and then the zSeries would be more cost-effective. -Original Message- From: Post, Mark K [mailto:[EMAIL PROTECTED] Sent: Thursday, October 30, 2003 10:49 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries My answer was, and still is (and likely always will be) avoid any application that is CPU intensive. Yes, the zSeries has gotten faster, but so has Intel. The price-performance curve for CPU intensive work still favors Intel. I've seen nothing in the IBM announcements that would lead me to change any of the recommendations I've been making for the last 3 years. Unless and until the price-performance curve for zSeries matches that of Intel (or comes a couple of orders of magnitude closer), I will continue to make the same recommendations.
Mark Post -Original Message- From: Jim Sibley [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 29, 2003 7:31 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries -snip- Linux on all sorts of platforms was just a gleam in someone's eye 5 years ago. It started getting pushed on the zSeries 3 years ago and the software and hardware have made great strides in the last 3 years. So CGI may not be appropriate today. So what is there we said was not appropriate 2 or 3 years ago that may be appropriate today on Linux zSeries? = Jim Sibley Implementor of Linux on zSeries in the beautiful Silicon Valley "Computers are useless. They can only give answers." - Pablo Picasso __ Do you Yahoo!? Exclusive Video Premiere - Britney Spears http://launch.yahoo.com/promos/britneyspears/
Re: Perpetuating Myths about the zSeries
What about memory intensive? And how do you gauge the CPU intensive applications? For example, we are planning to migrate some of our Solaris (SPARC) applications off of SPARC and into the z/VM Linux world. If I am looking at candidates for this migration I see systems (SPARC) with 10 - 30 percent utilization. What happens when I decide these workloads are good candidates with their low CPU usage on the SPARC platform, but then install them into the Z environment and find out that they now have a CPU usage of 80 - 90 percent? Is this possible? Is there a good way to judge what applications on a given platform might be best suited for migration? Right now I am recommending that any candidate first do a QA of their application in the Z environment prior to doing the full and final migration. thanks! Eric Sammons (804)697-3925 FRIT - Infrastructure Engineering Post, Mark K [EMAIL PROTECTED] Sent by: Linux on 390 Port [EMAIL PROTECTED] 10/30/2003 04:49 PM Please respond to Linux on 390 Port To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries -snip-
Re: Perpetuating Myths about the zSeries
The best way to understand is to take measurements of the running production systems. There are many tools for doing this, and you may already be gathering at least the data that you would need. The way to look at utilization is to plot the utilization on intervals for a peak period, day, or week, depending on the data you get. You should be looking for intervals around 15 minutes in length. (Over 30 minutes smooths things too much, and under 5 minutes tends to visually (and mathematically) hide the troughs.) This is because we use the peak to do our sizings, and as intervals get shorter the peak approaches 100% regardless of the utilization. (Utilization is a statistic; at the cycle level the machine is either busy or it's not. For short enough intervals the utilization data will be either zero or 100%.) Anyway, you then stack these graphs in a stacked bar or area chart and note the composite peak. This will tell you what your aggregate utilization is. If all the servers peak at the same time, then the peak utilization on each one has to be low to get favorable consolidation effects. The second thing to do is to look at the saturation curve for the servers in question. Gather throughput data on the same intervals as your utilization data. (This can be network data or packet rates if you don't have anything else.) Plot the throughput vs. utilization for each server. You are looking to see if the curve bends over or saturates at higher utilization. I like to plot linear, power, and logarithmic trends through the data. Usually the power curve has the best fit, but the linear and logarithmic curves provide bounds with which to compare it. The more linear the data is, the more CPU intense the application is, and therefore the lower the utilization has to be to get a good conversion ratio. If the curve is bent over, the average utilization is low, and the workload peaks at an off time, you have an ideal candidate. You can also start gathering I/O and context switch rates.
High rates here usually indicate non-CPU-intense or mixed applications. IBM has people who can help you get the data and analyze it. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794 Eric Sammons [EMAIL PROTECTED] Sent by: Linux on 390 Port [EMAIL PROTECTED] 10/31/2003 07:55 AM Please respond to Linux on 390 Port To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries -snip-
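Joe's stacking step can be sketched in a few lines: the number that matters for sizing is the composite peak across servers per interval, not the sum of the individual peaks. A minimal sketch with invented server names and utilization data, for illustration only:

```python
def composite_peak(per_server_util):
    """Given {server: [utilization fraction per 15-min interval]},
    sum across servers in each interval and return the composite peak
    and the interval in which it occurs. The composite peak, not the
    sum of individual peaks, is what sizes the consolidation target."""
    series = list(per_server_util.values())
    n = len(series[0])
    totals = [sum(s[i] for s in series) for i in range(n)]
    peak = max(totals)
    return peak, totals.index(peak)

# Hypothetical data: two servers that peak in different intervals.
util = {
    "web1": [0.10, 0.80, 0.15, 0.10],
    "db1":  [0.70, 0.20, 0.10, 0.65],
}
peak, when = composite_peak(util)
print(peak, when)  # composite peak ~1.0 engine's worth, in interval 1
```

Because the two peaks land in different intervals, the composite peak (about 1.0 engine's worth) is well below the 1.5 you would get by adding the individual peaks — exactly the favorable consolidation effect Joe describes when servers peak at different times.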
Re: Perpetuating Myths about the zSeries
What about memory intensive? And how do you gage the CPU intensive applications? For example we are planning to migrate some of our Solaris (SPARC) applications off of SPARC and into the z/VM Linux world. Something that occurred to me (and since Joe Temple is kindly answering questions): are there any plans to make the RMF-PM data collection agent available on platforms other than zLinux? While it's not the best tool available, it'd be handy to be able to do before/after comparisons with the same tools reporting to FCON/Perf Toolkit. Right now I am recommending that any candidate first do a QA of their application in the Z environment prior to do doing the full and final migration. Never a bad idea, particularly given your employer...8-) -- db
Re: Perpetuating Myths about the zSeries
Memory intensive applications are not automatically disqualified, but need to be looked at individually, and in terms of what the impact will be on the rest of the systems/z/VM. As Joe and Barton both recommend (and you say you do), measurement is the only way to get a handle on whether a particular piece of work will be a good candidate or not, and whether it will fit within your existing capacity. A particular CPU utilization on a RISC box is _likely_ to result in a higher CPU utilization on the mainframe. That's not guaranteed by any means, which just reinforces the need to actually measure on both platforms. Your current practice of having people actually install their application on Linux/390 and measure the results is absolutely the best way to go, in my mind. No guessing involved, real numbers gathered, etc. Mark Post -Original Message- From: Eric Sammons [mailto:[EMAIL PROTECTED] Sent: Friday, October 31, 2003 7:55 AM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries -snip-
Re: Perpetuating Myths about the zSeries
Eric, I've published a methodology for doing this kind of migration planning. The presentation can be found at http://velocitysoftware.com/present/ConsTECH. Probably for what you want, start at http://velocitysoftware.com/present/ConsTECH/sld015.html. One day I'll add some notes to make this a little less cryptic. I know there are people that try to get their predictions to 3 significant figures in terms of what your end z requirements will be, but considering all the variables, it's just not possible. But you can use a simple estimator - if it is off by 5-10%, that I think is very acceptable. At the rate that Linux on z is improving, this methodology will prove to be conservative. For moving Linux workloads from Intel to z, I use a number of 4 for MHz to MIPS; thus if your application has a requirement for a one-minute period of 1 GHz, then you could assume it would have a 250 MIPS requirement for a minute. If this must run on a single processor, your choice of z processors becomes limited; you would not do this on a G5 or G6. For a proof of concept that I worked on for Windows apps on Intel to Linux on z, the number seemed closer to 6. If you would like to have some help getting the numbers for a SUN application, I would be delighted to help. It would make for a very publishable case study. From: Eric Sammons [EMAIL PROTECTED] -snip-
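Barton's MHz-to-MIPS rule of thumb is simple enough to state directly in code. The divisor of 4 (or ~6 for workloads coming from Windows) is his estimate from the post, not a measured constant, and this is a coarse estimator rather than a sizing:

```python
def mips_needed(mhz_demand, mhz_per_mips=4.0):
    """Barton's rule of thumb: divide the Intel MHz actually consumed
    during an interval by ~4 to estimate the zSeries MIPS requirement
    for that interval (he saw ~6 for Windows-origin workloads)."""
    return mhz_demand / mhz_per_mips

# The example from the post: a one-minute demand of 1 GHz (1000 MHz).
print(mips_needed(1000))      # -> 250.0 MIPS for that minute
print(mips_needed(1000, 6))   # roughly 167 MIPS with the factor of 6
```

As Barton notes, anything within 5-10% of the eventual requirement is acceptable for this kind of planning, so a one-line estimator fits the method.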
Re: Perpetuating Myths about the zSeries
But David, this is now TOTALLY possible using ESALPS. ESALPS collects data from SUN, HP, WinNT, Linux, and anything else that either NET-SNMP supports or that has its own native SNMP implementation... (but thanks for asking) And of course when you get to Linux on z/VM, ESALPS even provides correct numbers... From: David Boyes [EMAIL PROTECTED] -snip- Something that occurred to me (and since Joe Temple is kindly answering questions): are there any plans to make the RMF-PM data collection agent available on platforms other than zLinux? While it's not the best tool available, it'd be handy to be able to do before/after comparisons with the same tools reporting to FCON/Perf Toolkit. -- db
Re: Perpetuating Myths about the zSeries
I don't know of any plans to make RMF-PM available on other platforms. I will look around, but it will be a week or so; others may be able to help sooner. Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794 David Boyes [EMAIL PROTECTED] Sent by: Linux on 390 Port [EMAIL PROTECTED] 10/31/2003 11:48 AM Please respond to Linux on 390 Port To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries -snip- Something that occurred to me (and since Joe Temple is kindly answering questions): are there any plans to make the RMF-PM data collection agent available on platforms other than zLinux? -- db
Re: Perpetuating Myths about the zSeries
My answer was, and still is (and likely always will be) avoid any application that is CPU intensive. Yes, the zSeries has gotten faster, but so has Intel. The price-performance curve for CPU intensive work still favors Intel. I've seen nothing in the IBM announcements that would lead me to change any of the recommendations I've been making for the last 3 years. Unless and until the price-performance curve for zSeries matches that of Intel (or comes a couple of orders of magnitude closer), I will continue to make the same recommendations. Mark Post -Original Message- From: Jim Sibley [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 29, 2003 7:31 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries -snip- Linux on all sorts of platforms was just a gleam in someone's eye 5 years ago. It started getting pushed on the zSeries 3 years ago and the software and hardware have made great strides in the last 3 years. So CGI may not be appropriate today. So what is there we said was not appropriate 2 or 3 years ago that may be appropriate today on Linux zSeries?
Re: Perpetuating Myths about the zSeries
I have enjoyed the responses to these questions I posed regarding what is a transaction. So to continue, I was asked, what are the JVMs doing? In most cases our JVMs are little utilized; they all run in the wonderful world of WebSphere, thus they all communicate with a database and an LDAP (SecureWay most likely). In the end they all perform different functions; some go through the process of managing users and their web access and web privileges, some deal with business centric stuff. Again, though, they are generally under utilized in the wonderful world of Solaris systems. But then again it is hard to over utilize a SunFire 480 with 4+ GB of Memory. So I guess I can't really answer this question, but what I can say is our developers need to be more talented when it comes to developing reliable java code. It seems there are many JVMs in our environment that will exceed the initial HEAP size of 128 MB, thus at ~129, the JVM pulls another 128 MB of memory into its little world. What impact if any would this behavior have in an environment where you have z/VM with 25 guests, each running some number of JVMs, and most of these JVMs start grabbing memory? Will one guest's JVM growth impact another system's attempt or ability to grab more memory for its JVMs? thanks Eric Sammons (804)697-3925 FRIT - Infrastructure Engineering David Boyes [EMAIL PROTECTED] Sent by: Linux on 390 Port [EMAIL PROTECTED] 10/28/2003 02:05 PM Please respond to Linux on 390 Port To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries Well my question then is, what is a transaction? A very good question, and exactly why the how many PCs can I consolidate? question is basically a useless one. The answer has to include what the PCs are doing and how they do it. It's comparing apples and pumpkins. So what is a transaction?
The best definition I've come up with is a sequence of operations that accomplishes a single unique unit of business typical to an application, defined by the type of problem and type of application. Would I want to run 100 systems in a given z/VM, each with some number of JVMs, yes WebSphere? What do the applications running in the JVMs do? 8-) On the topic of availability I am not sure I buy the whole MF is better than 80x86 or Intel / AMD 64 server hardware. Today, everything is redundant and everything is hot swappable. But not to the point of being able to intercept failed instructions and re-dispatch on pre-installed spare hardware, unless you've bought a real Tandem or some such system, at which point you're not paying much less than the equivalent zSeries. Correcting failure in flight isn't yet possible in Intel hardware systems, and even with the Opteron and Itanium, it won't be easy. None of the Intel systems share instruction pipelines yet. Now if some of the rumors about using a PowerPC core for the next gen zSeries processors are true, or that IBM licenses some of the PowerPC or zSeries multicore fab technology to AMD or Intel to make MCM-style platters of Intel engines, that might change the picture drastically. I don't see that happening, but it'd be a very interesting change in the landscape. What is different is that to get a new stick or replacement stick of memory for the MF could cost you an automobile, and you won't find that stick of memory at the local computer store. No, you'll find your IBM CE showing up at your door with the correct replacement in his hand before you even know it failed...8-) -- db
Re: Perpetuating Myths about the zSeries
I have enjoyed the responses to these questions I posed regarding what is a transaction. So to continue, I was asked, what are the JVMs doing? Not the JVMs, what are the *applications* doing? The JVMs respond in a predictable way; it's the applications that make things messy. But then again it is hard to over utilize a SunFire 480 with 4+ GB of Memory. Not really..8-). I've got a couple apps that you could use those spare cycles on if you're not using them for anything else... So I guess I can't really answer this question, but what I can say is our developers need to be more talented when it comes to developing reliable java code. It seems there are many JVMs in our environment that will exceed the initial HEAP size of 128 MB, thus at ~129, the JVM pulls another 128 MB of memory into its little world. What impact if any would this behavior have in an environment where you have z/VM with 25 guests, each running some number of JVMs, and most of these JVMs start grabbing memory? Will one guest's JVM growth impact another system's attempt or ability to grab more memory for its JVMs? There is a physical limit, of course -- the sum of physical memory available to the VM system plus all the VM paging areas, plus any Linux paging areas on real DASD. The behavior you describe will cause the individual virtual machine working set (the amount of physical memory actually mapped to virtual pages) to increase, up to the virtual machine size defined in the CP directory. There must be enough physical memory in the zSeries to accommodate the sum of all the virtual machines sharing the physical resources if you want to avoid paging (not necessarily a bad thing, but you need to plan ahead for it). If the JVMs inside a single instance continue to increase demands for resources, then the Linux system will begin to page.
If your Linux paging areas are defined as CP VDISK, then that does have an impact on VM behavior, as pages used for VDISK aren't available for use for virtual machines (although CP does a very good job of shuffling things around, thus my comment about VM paging not necessarily being a bad thing). Linux paging is generally not desirable if it's avoidable, but a certain amount is acceptable. So, it depends. If all your JVMs expand at the same time in all your virtual machines, CP will have to do a lot of work to cope with that, and if you exhaust all your physical resources, then you could affect the other instances. On the positive side, you can control that expansion by a combination of virtual machine size and use of VDISK size limits, and it just works, along with having excellent utilization measurements to prove to the apps people that their code sucks. The only way you're going to find out if that is a possibility is to look at application utilization and performance data and see how often your JVMs are expanding now and what the applications are doing to cause that. You can then go back to the programmers and apply the appropriate corrective actions. -- db
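David's memory arithmetic can be sketched as a quick feasibility check: the defined guest sizes may overcommit real storage, but their sum has to fit within real memory plus paging capacity if growth in every guest at once is to be survivable. The LPAR and guest numbers below are hypothetical, chosen to match the 25-guest scenario in the question:

```python
def memory_headroom(physical_mb, paging_mb, guest_sizes_mb):
    """Compare the sum of defined virtual machine sizes against real
    storage plus CP paging space. Guests can overcommit real memory,
    but per David's rule the total must fit in memory plus paging."""
    demand = sum(guest_sizes_mb)
    return {
        "demand_mb": demand,
        "overcommit_ratio": demand / physical_mb,
        "fits": demand <= physical_mb + paging_mb,
    }

# Hypothetical LPAR: 8 GB real storage, 16 GB of CP paging space,
# 25 guests each defined at 512 MB in the CP directory.
r = memory_headroom(8192, 16384, [512] * 25)
print(r)  # 12800 MB demanded: overcommitted vs real memory, but it fits
```

The interesting case David describes is when working sets, not just defined sizes, all grow at once; the same check applied to measured working sets tells you how much CP will have to page to cope.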
Re: Perpetuating Myths about the zSeries
And here we agree. Bringing BogusMIPS into the discussion was like throwing a mouse in front of a cat. You distracted us from your real point: that things are better now than they used to be. Sorry, I've noticed that you're so easy to distract. I'll keep that in mind! ;-) = Jim Sibley
Re: Perpetuating Myths about the zSeries
Barton wrote: There were two redbooks this year that looked at many performance issues. If anything, they were productive in finding performance issues that needed to be addressed. I'm not addressing tuning, but rather taking issue with the fact that there is very little recent performance and capacity information. A lot of measurements were done early, S/390 Linux was categorized, and then a series of assumptions became the mythology of Linux on zSeries. Since then, zSeries has changed a lot. One of the things that happens when machines become faster and bigger is that they enable more applications on that platform. Until the G6, CMOS technology was even slower than the bipolar technology that it replaced. Now the zSeries has caught up with and exceeds its previous capabilities, spurred on by the competition of Intel and Unix processors. We need to evaluate some of the assumptions we made early on (3 years ago!). For example: August, 2003, SHARE http://www.linuxvm.org/present/index.html Thoss's report on performance is about the latest there is: z900 z216, F20 Shark, ESCON/FICON, gigabit ethernet, 2.4.7 or 2.4.17 kernel, 31 bit. Currently available to the customer: z990, 800 Shark, FICON, 2.4.21 kernel, hipersockets, 64 bit. (Some of the tests in Thoss's report were memory constrained in a 31-bit environment.) Also: - the redbooks emphasize overall VM tuning with really very little information about tuning Linux - especially if you have to run in an LPAR - a dearth of information on things like DB2, WebSphere - lots of reports on how zSeries scaled up to an equivalent Intel, but not really very much of what happens when zSeries scales BEYOND Intel (both in CPs and I/O) - large transaction volumes and rates. = Jim Sibley
Re: Perpetuating Myths about the zSeries
David wrote: I'm not convinced it's even valid there, if there is any type of virtualization (LPAR or VM) active and there are shared resources. Alan's right - bogomips is a red flag!

And the assumption that all people take their numbers for VM instances in production environments is interesting (and knee jerk). I think I should change my handle to Lonely_Lpar_in_a_controlled_lab_guy. All the numbers I quoted come from test systems IPL'd in LPARs with very little or no load on any LPAR when the IPL took place, so the numbers are generally repeatable within 5-10%, which is a decent indication of the relative capability of a single engine within the s/390-zSeries.

So, rather than grousing about bogomips, what standard measure do you have that can measure the relative speed of the processors?

= Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
"Computers are useless. They can only give answers." - Pablo Picasso
Re: Perpetuating Myths about the zSeries
On Wed, 2003-10-29 at 10:43, Jim Sibley wrote: So, rather than grousing about bogomips, what standard measure do you have that can measure the relative speed of the processors? Quake. (Mostly just kidding.) Adam
Re: Perpetuating Myths about the zSeries
On Wednesday, 10/29/2003 at 08:43 PST, Jim Sibley [EMAIL PROTECTED] wrote: So, rather than grousing about bogomips, what standard measure do you have that can measure the relative speed of the processors? I keep trying to say that the speed of the processor is not the measure. It is the throughput of the workload that you are running that is important. If your application is I/O bound, faster processors just mean you have more free time to wait for the I/O to complete. On the other hand, if you have more than one process/virtual machine/LPAR, it means you can get more work done while waiting for the I/O to complete. (Assumption: I/O wait is independent of processor speed.) Of course, OSA and FCP QDIO (DMA) change that picture a bit, since all of a sudden the amount of data moving in/out is proportional to the CPU's ability to process the queues. Or is it? What if I have two CPUs operating a single DMA queue? Three CPUs? Gaaack! #include LSPR.txt Alan Altmark Sr. Software Engineer IBM z/VM Development
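Alan's throughput point can be sketched with a toy model (mine, not anything from IBM): the CPU portion of a job shrinks with processor speed, while the I/O wait - by his stated assumption - does not, so an I/O-bound job barely benefits from a faster engine.

```python
def elapsed_time(cpu_work_s, io_wait_s, speedup=1.0):
    """Elapsed seconds for one job: the CPU part scales with processor
    speed; the I/O wait (per Alan's assumption) does not."""
    return cpu_work_s / speedup + io_wait_s

# I/O-bound job: 1 s of CPU work, 9 s of I/O wait.
# A 4x faster CPU shaves only 0.75 s off a 10 s job (7.5%).
io_bound_gain = elapsed_time(1.0, 9.0) / elapsed_time(1.0, 9.0, speedup=4.0)

# CPU-bound job: 9 s of CPU work, 1 s of I/O wait.
# The same 4x CPU cuts the job from 10 s to 3.25 s (about 3x).
cpu_bound_gain = elapsed_time(9.0, 1.0) / elapsed_time(9.0, 1.0, speedup=4.0)
```

With more than one virtual machine sharing the engine, the idle I/O wait can of course be overlapped with other guests' work, which is Alan's second point.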
Re: Perpetuating Myths about the zSeries
I recall someone from QNX (Quantum Computing) who wanted some Dhoomstone ratings.

John R. Campbell, Speaker to Machines (GNUrd) {813-356|697}-5322
Adsumo ergo raptus sum
MacOS X: Because making Unix user-friendly was easier than debugging Windows.
IBM Certified: IBM AIX 4.3 System Administration, System Support

----- Forwarded by John Campbell/Tampa/IBM on 10/29/2003 12:35 PM -----
From: Adam Thornton [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [LINUX-390] Perpetuating Myths about the zSeries
Date: 10/29/2003 11:53 AM

On Wed, 2003-10-29 at 10:43, Jim Sibley wrote: So, rather than grousing about bogomips, what standard measure do you have that can measure the relative speed of the processors? Quake. (Mostly just kidding.) Adam
Re: Perpetuating Myths about the zSeries
Jim, might I suggest you apply for the next redbook???

As for your LPAR, that probably explains a lot. In the real world, I can't figure out why a customer who has to pay for it would dedicate an LPAR to Linux that is at best today 1.3 GHz (z990), at 1-2 orders of magnitude more in price than, say, a 3 GHz Intel chip (with memory that is a lot less $$). And, oh, by the way, I recently lost a proof of concept because until we get complete FCP, the I/O on the ix platform was faster.

With VM, we can share the resource and utilize resources much more effectively and at higher utilization, and the price points are very different. So, there are proper platforms for each workload, and though we would like lots of it on z, some just runs better elsewhere. What z does better is not single big things, but lots and lots of little things. So we're probably in total agreement: some successful PUBLISHED analysis of current capacity would be really nice, but comparing LPAR to any other platform realistically would likely make the LPAR look expensive. (In which case you couldn't publish it, right?) One POC I've been looking at is replacing several boxes with virtual machines, with very detailed analysis. It would, I think, be better to get this published than other benchmarketed numbers.

And I absolutely LOVE this quote from Dale: (Another, unrelated truth I always liked: "A little experience can ruin a lot of good theory." A quote often attributed to Einstein.)

From: Jim Sibley [EMAIL PROTECTED] Barton wrote: There were two redbooks this year that looked at many performance issues. If anything, they were productive in finding performance issues that needed to be addressed. I'm not addressing tuning, but rather taking issue with the fact that there is very little recent performance and capacity information. A lot of measurements were done early, s/390 Linux was categorized, and then a series of assumptions became the mythology of Linux on zSeries. Since then, zSeries has changed a lot.
One of the things that happens when machines become faster and bigger is that they enable more applications on that platform. Until the G6, CMOS technology was even slower than the bipolar technology that it replaced. Now the zSeries has caught up with and exceeds its previous capabilities, spurred on by the competition of Intel and Unix processors. We need to evaluate some of the assumptions we made early on (3 years ago!). For example: August 2003, SHARE, http://www.linuxvm.org/present/index.html - Thoss's report on performance is about the latest there is: z900 z216, F20 Shark, ESCON/FICON, gigabit ethernet, 2.4.7 or 2.4.17 kernel, 31 bit. Currently available to the customer: z990, 800 Shark, FICON, 2.4.21 kernel, HiperSockets, 64 bit. (Some of the tests in Thoss's report were memory constrained in a 31-bit environment.) Also: - the redbooks emphasize overall VM tuning with really very little information about tuning Linux - especially if you have to run in an LPAR - a dearth of information on things like DB2, WebSphere - lots of reports on how zSeries scaled up to an equivalent Intel, but not really very much on what happens when zSeries scales BEYOND Intel (both in CPs and I/O) - large transaction volumes and rates.

"If you can't measure it, I'm Just NOT interested!"(tm)

Barton Robinson - CBW Internet: [EMAIL PROTECTED]
Velocity Software, Inc
196-D Castro Street, Mountain View, CA 94041
Mailing Address: P.O. Box 390640, Mountain View, CA 94039-0640
VM Performance Hotline: 650-964-8867 Fax: 650-964-9012
Web Page: WWW.VELOCITY-SOFTWARE.COM
Re: Perpetuating Myths about the zSeries
Alan wrote: Of course, OSA and FCP QDIO (DMA) changes that picture a bit since all of a sudden the amount of data moving in/out is proportional to the CPU's ability to process the queues. Or is it? What if I have two CPUs operating a single DMA queue? Three CPUs? Gaaack!

What about a 64-bit Linux DB2 server with several databases of several hundred gigabytes accepting queries from numerous Linux guests over HiperSockets on a TREX? (31-bit Linux is limited by memory size in this scenario, not CP or I/O speed to the databases - though I/O speed to the swap devices is!)

I keep trying to say that the speed of the processor is not the measure. It is the throughput of the workload that you are running that is important. If your application is I/O bound, faster processors just mean you have more free time to wait for the I/O to complete. On the other hand, if you have more than one process/virtual machine/LPAR, it means you can get more work done while waiting for the I/O to complete. (Assumption: I/O wait is independent of processor speed.)

I disagree that I/O wait is independent of processor speed, because that is only one component of response time. The faster processor can initiate I/Os faster and can service interrupts faster, thus reducing internal queue wait times. (And if you put Linux on a 3390-9 image, that's what you're going to get - a lot of queue wait time on your databases!) As far as transaction transit time goes, that's a combination of CP processor speed, memory speed (whether or not the transaction or its data is paged), and I/O response. The faster CPU reduces the CP part of the equation, the memory component is reduced by increasing the amount of memory (64-bit addressability) and the CP memory speed, and I/O speed is improved by faster I/O devices plus the spread of I/O over the channels and devices (i.e., tuning 101). So increasing processor speed has a significant effect on transaction time.
As to transaction volume or rate, that also is a function of CP speed, the number of CPs, the memory speed, and the I/O rates. However, what tends to happen in a shop is that they put something like a Shark in, with its 16 logical control units, and concentrate their data in a single LCU on a 3390-9 image! So who cares how fast your CPU is - it's waiting on the LCU, which is REALLY a physical CU - 1 RAID array, and a 3390-9 image is behind the same single subchannel as a 3390-3 image in Linux (unless you use Thoss's trick with VM managing the PAVs). Et voila, an I/O bottleneck, and a faster CP does in fact wait faster (or slower?)!

So, let's assume that the I/O subsystem, the CPs, and memory are tuned properly for the workload rather than imposing a supposed workload on the model. (I know that there is no ideal workload - what one has to deal with is the workload of the people trying to run it on a given OS with given hardware.) Then the question still is: are there workloads that we measured in the past and wrote off as inappropriate for a zSeries that are now enabled by the latest zSeries technology? Some questions come to mind:
- Has the CP speed of the TREX (2084) enabled applications that were not acceptable on a previous machine?
- Have the latest improvements in Linux 2.4.19 and 2.4.21 and the IBM drivers enabled applications that were not acceptable on a previous release?
- Have the latest HiperSockets implementations reduced CPU time and thus reduced I/O wait on the HiperSockets?
- Has the greater backend bandwidth of the later zSeries (2 GByte vs 1 GByte) reduced the I/O wait to memory and the devices?
- Has a properly tuned TotalStorage Server 800 (Shark) with FICON reduced the I/O wait to the devices?
- Have the improvements to Java, DB2, WebSphere, etc., or has the processor speed of the TREX, enabled more applications?

Or, as Alan so rightly points out: How many transactions per second can I handle, and at what cost?
(Reducing TCO is only acceptable as a measure if I can get as much or more work done as fast or faster.)

By the by, as to bogomips repeatability: as with any other measure, if there is a competing load, then you expect the numbers to vary, sometimes greatly. In a controlled lab, the bogomips number is repeatable and consistent between single processors. For example - environment: 4 shared CPs on a TREX 16-CP processor, with 5 active LPARs; the HMC shows an average of 26% total CP utilization.
3 IPLs in an LPAR: 2398.61 bogomips, 2398.61, 2398.61
3 IPLs under VM: 2398.61 bogomips, 2398.61, 2398.61
0% error, 100% repeatable! The 5-10% error I mentioned in an earlier note is with a heavier load on the LPARs. The error under a loaded VM for bogomips can be as much as a factor of 9 or more!

= Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
"Computers are useless. They can only give answers." - Pablo Picasso
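Jim's repeatability claim is easy to quantify. A hypothetical helper (mine, not part of any standard tool) that reduces repeated BogoMIPS readings to a relative spread:

```python
def bogomips_spread(samples):
    """Relative spread, (max - min) / mean, of repeated BogoMIPS readings.
    0.0 means perfectly repeatable; a value near 0.9 would match the
    'factor of 9' swing reported under a heavily loaded VM."""
    mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / mean

quiet_lpar = [2398.61, 2398.61, 2398.61]   # Jim's quiet-lab numbers
assert bogomips_spread(quiet_lpar) == 0.0  # 0% error, 100% repeatable
```

On a shared, loaded machine the spread mainly measures how much CPU the other partitions were stealing during the calibration loop, not anything about the hardware.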
Re: Perpetuating Myths about the zSeries
It breaks down to religious shouting because the differences (the VM features) become too numerous to count and wind up as fundamental to the VMer's zSeries experience. Or like two queens at a dress ball arguing over who has the best costume when both costumes do what they're supposed to do - cover the naughty bits and keep you warm (z/OS vs z/VM) - while the youngster (Linux) is trying to find out what new things they can do, making the two old queens irrelevant. (Of course, a lot of people don't need all the VM features, but that's a side issue.)

Some observations:
- VM does not sell as many zSeries as z/OS does, and the business people want to maximize profit.
- Windows, z/OS, and z/VM are expensive, proprietary OSes, and IBM and others would like to replace them at all levels with Linux.

So, the real question is: how can we leverage Linux on a zSeries, in both LPAR mode and under VM, to compete with other OSes and platforms? What have we discarded in the past 3 years because of the slow zSeries and limited memory that we can resurrect on a TREX/FICON/ES800 to increase the zSeries repertoire of competitive applications?

= Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
"Computers are useless. They can only give answers." - Pablo Picasso
Re: Perpetuating Myths about the zSeries
I'm not convinced it's even valid there, if there is any type of virtualization (LPAR or VM) active and there are shared resources. Alan's right - bogomips is a red flag! And the assumption that all people take their numbers for VM instances in production environments is interesting (and knee jerk). I think I should change my handle to Lonely_Lpar_in_a_controlled_lab_guy.

Jim, read the comment again. "If" signals a conditional expression. *If* you have your LPARs defined with shared processors, or *if* you have VM present, and *if* there happens to be a demand for CPU from another LPAR or virtual machine that is sharing those processors, THEN you will not get repeatable results for the bogomips calculation, and the numbers WILL vary widely. *If* you can afford to dedicate physical processors to single Linux instances, then you are a) considerably more fortunate than most of us, b) have a very unusual workload, and/or c) in a lab situation. You've argued your case for running in LPARs here before -- no need to rehash it again.

Perhaps we don't agree on terminology: I consider CPU sharing between LPARs a form of virtualization -- note I did not say VM, I said virtualization -- because in that case there is not a one-to-one correspondence between physical hardware and logical hardware at all times. Do you agree? If not, then that might be the disconnect here.

So, rather than grousing about bogomips, what standard measure do you have that can measure the relative speed of the processors? What kind of instruction mix do you want to sample? You can use the standard benchmarks, but they're pretty much just as useless on zSeries as they are on every other platform. You could use LSPR numbers, but they're pretty much measuring z/OS workloads, which don't tell you much of anything useful about Linux.
You could use a set of representative applications (which is what we do when we do performance comparisons), but without a large database of performance data on other platforms, nobody agrees with you or pays any attention to your tests. The person you want to ask about zSeries processor performance is Bob Rogers in POK. He designed the bloody things -- ask him what he uses to measure them. You may also find the IBM Systems Journal issue on the z900 helpful; there was some detailed discussion of CPU internals in those articles. -- db
Re: Perpetuating Myths about the zSeries
Barton wrote: Jim, might I suggest you apply for the next redbook??? Love to, but the Boss won't let me ('nuff said). He wants me to go back and do z/OS. My forays into performance testing for Linux are during slack times and weekends, and when I can get on the hardware. I got on the TREX by promising to do a beta for POK.

As for your LPAR, that probably explains a lot. In the real world, I can't figure out why a customer who has to pay for it would dedicate an LPAR to Linux that is at best today 1.3 GHz (z990), at 1-2 orders of magnitude more in price than, say, a 3 GHz Intel chip (with memory that is a lot less $$). And, oh, by the way, I recently lost a proof of concept because until we get complete FCP, the I/O on the ix platform was faster. (But it had to be distributed!)

Question: How many transactions can I do in a given time, for what cost? You're still thinking the PC model - with, among other things, a lot of CPU idle, a lot of memory idle, and databases dispersed over a lot of disk drives that can only be reached via the net. On the other end of the scale, think of this:
- 16 x 1.3 GHz CPs running fairly busy
- several terabytes of data, any byte available within a few ms
- 64 GB of shared memory, so any processor can work on any workload, thus balancing resources
- the ability to run all kinds of workloads simultaneously (interactive, transaction server, batch) in a single image
- a single OS image, so that administrative support is cheaper (security, data management, etc).

TCO used to be called economy of scale.

= Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
"Computers are useless. They can only give answers." - Pablo Picasso
Re: Perpetuating Myths about the zSeries
Actually, while I/O is the classic example of why processor speed is not everything, you don't have to move far beyond the processor itself to show this. Note that the various types of servers have different sizes and structures of L1, L2, and L3 caches, and different memory interfaces. Also note that memory latency and bandwidth vary from machine to machine. Finally, note that the faster the processor, the higher the latencies are in terms of number of cycles. Benchmarks are sensitive to this, but not uniformly. For example, TPC-C got a big boost from 8 MB L2 caches, but SPECint is almost totally insensitive to L2 size. Real work has a tendency to show even more variability. That is, there are more daemons, etc. chewing up cache space, causing more misses, and the code is not as extensively tuned. This tide will float all boats (even if you run TPC-C for a living, don't expect the tuned rates if you are also running security, monitoring, accounting, etc.). However, the differences in memory hierarchy will cause some machines to be impacted more than others. There is really no way except experience to tell how an application treats the memory hierarchy in this regard. As a result, defining relative capacity with a single metric is not possible, and any benchmark or metric that is suggested for this will not match the real world, which will exhibit more dynamic variability both with time and workload. One thing is for sure, though: if a cache is blown, during the miss time the processor is 100% busy as measured by normal means, and the throughput is zero. You can see if this is happening to a significant extent by plotting throughput vs. processor utilization. (Throughput can come from the application or be estimated from the network data rate.) Draw linear, power, and logarithmic trends through the data. If the best trend is linear and the line intercepts the vertical axis near or below the origin, then there is little or no saturation and little pressure on the memory hierarchy.
If the best trend is logarithmic, then there is heavy saturation, and chances are that the workload is blowing the caches as load is applied. If the power curve fits best, the answer is somewhere in the middle. As the exponent of the power curve approaches 1, the workload is exhibiting less saturation, because the trend becomes linear. The more saturation exhibited, the higher the utilization at which zLinux consolidation is viable. This is because the raw conversion factor is typically better for more saturated workloads.

My point is that there is no such thing as a metric which defines the relative capacity of various servers, because of the differences in their memory hierarchies. Processor speed is an indicator, but it is insufficient to do any real comparisons. The processor speeds are what they are. You can measure them with MHz, BogoMIPS, SPECint, or "hello world" and a stopwatch. You still will not understand the relative capacity for any particular piece of work once you do. This is because work always causes the CPU speed differences to be mitigated by other bottlenecks, such as waiting for memory or I/O, and the impact varies with workload, time, and machine architecture. The rules developed early on were based on an intuitive understanding that heavy computational work is less impacted by non-processor bottlenecks, whereas transactional workloads with lots of locking and data sharing are more impacted. Since most other machines come from a heritage which emphasizes the former, and the zSeries heritage is firmly in the latter, the intuitive choices are generally correct and don't change with normal evolutionary changes in processor speed.
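Joe's plotting procedure can be sketched as follows (my sketch, using ordinary least squares; the function and threshold choices are mine, not part of his method):

```python
import math

def _lsq(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def _r2(ys, preds):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def fit_trends(util, tput):
    """Fit linear (t = a*u + b), power (t = c*u**k), and logarithmic
    (t = a*ln(u) + b) trends through throughput-vs-utilization samples
    and return the R^2 of each, plus the power-law exponent."""
    a, b = _lsq(util, tput)
    k, logc = _lsq([math.log(u) for u in util],
                   [math.log(t) for t in tput])   # power fit in log-log space
    a2, b2 = _lsq([math.log(u) for u in util], tput)
    return {
        "linear": _r2(tput, [a * u + b for u in util]),
        "power": _r2(tput, [math.exp(logc) * u ** k for u in util]),
        "log": _r2(tput, [a2 * math.log(u) + b2 for u in util]),
        "power_exponent": k,   # near 1.0 => trend is effectively linear
    }
```

Per Joe's reading of the curves: the best fit being linear with a near-zero intercept means little memory-hierarchy pressure, logarithmic means heavy saturation, and a power fit with an exponent well below 1 sits in between.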
Joe Temple [EMAIL PROTECTED] 845-435-6301 295/6301, cell 914-706-5211, home 845-338-8794

----- Forwarded message -----
From: Alan Altmark/Endicott/IBM
Sent by: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Perpetuating Myths about the zSeries
Date: 10/29/2003 02:00 PM

On Wednesday, 10/29/2003 at 10:08 PST, Jim Sibley [EMAIL PROTECTED] wrote: I disagree about your I/O wait is independent of processor speed because that is only one component of response time. The faster processor can initiate I/Os faster and can service interrupts faster, thus reducing internal queue wait times. The faster processor can start more I/Os per second than a slower processor. A faster I/O processor can move data off the channel into memory faster (and vice versa). The speed of the channel itself and the device does not change. Yes, you CAN change it, but it is a function (and price) that is independent of CPU selection. As far as transaction transit time, that's
Buffer overflows [Was: Perpetuating Myths about the zSeries]
On Tue, Oct 28, 2003 at 10:49:05AM -0800, Fargusson.Alan wrote:
| If I may ramble on a bit: one thing I have noticed is that all
| systems I have worked with have one common problem, which is
| programs that try to access memory regions outside of the
| allocated virtual memory for the process. On Windows this
| results in the famous general protection fault, on Unix it
| results in the famous segmentation fault, and on z/OS it is the
| famous SOC4. I wonder if there isn't a better way to deal with
| this problem than just aborting the program. Users find this
| problem really annoying.

I thought that was S0C4. Well, there's aborting the programmer :-) Seriously, lower level languages (those with pointers and addresses) should be left to those who have lower level experience (e.g. have done assembler). Higher level application languages (Java, C#, Pike, Python, and others) have built-in protection for this kind of problem, and in some cases specific non-failure behaviour (maybe good or bad). When programmers think too abstractly about a problem, they tend to forget the details. Then they need something to handle those details for them. OTOH, when programming, I tend to consider those details all the time. That lets me avoid this kind of crash. But it also takes me out of the running for programming higher level applications.
-- Phil Howard KA9WGN | http://linuxhomepage.com/ http://ham.org/ | (first name) at ipal.net | http://phil.ipal.org/ http://ka9wgn.ham.org/
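Phil's point about higher-level languages in one small example: the same out-of-bounds access that produces a S0C4 abend or segmentation fault in assembler or C becomes a catchable, in-language exception. A minimal Python sketch (the helper name is mine):

```python
def safe_get(seq, i, default=None):
    """Out-of-range access in Python raises IndexError -- an exception the
    program can catch and handle -- instead of killing the whole process
    the way a segmentation fault or S0C4 would."""
    try:
        return seq[i]
    except IndexError:
        return default

row = [10, 20, 30]
assert safe_get(row, 1) == 20       # in range: normal access
assert safe_get(row, 99) is None    # out of range: handled, no abort
```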
Re: Perpetuating Myths about the zSeries
On Tue, Oct 28, 2003 at 02:09:10PM -0500, David Boyes wrote: | This is programmer error -- the hardware is doing exactly what it should do, | methinks. Correcting the developers usually helps, although that's much | harder. I've yet to find a programming language or toolset that doesn't do | exactly what the programmer tells it to do, even if it's stupid...8-) OTOH, there are programming environments that change the behaviour, for better or for worse. What to do when you have an array of 100 elements numbered 0 to 99 and the code attempts to access element number 100? I've seen languages that define it to be modulo the size, so it's an access to element 0. That might be cute, but could be worse than an abrupt abort. The ability to isolate things so that aborting something doesn't put the rest at risk is the way to go about it. In Linux (or Unix) this is the process. Consider a pre-forking daemon and code that under certain conditions will crash. Let the process die and the master process can recover by logging the situation and starting a new process to keep the service level up. BTW, this is one of the reasons I avoid threads; there is the risk of cross-task damage. -- - | Phil Howard KA9WGN | http://linuxhomepage.com/ http://ham.org/ | | (first name) at ipal.net | http://phil.ipal.org/ http://ka9wgn.ham.org/ | -
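Phil's pre-forking recovery pattern can be sketched with Python's multiprocessing module (a stand-in for a real pre-forking daemon; the worker and its simulated crash are invented for illustration):

```python
import multiprocessing as mp

def worker(n, q):
    """Worker process: crashes on bad input, otherwise reports a result."""
    if n < 0:
        raise ValueError("simulated crash")   # kills only this process
    q.put(n * 2)

def run_isolated(n):
    """Master runs the work in a child process; if the child dies, the
    master notes the failure (here: returns None) and stays up to serve
    the next request."""
    q = mp.Queue()
    p = mp.Process(target=worker, args=(n, q))
    p.start()
    p.join()
    if p.exitcode != 0:
        return None        # child aborted; the master is unaffected
    return q.get()
```

A crashing request takes down one process, not the service - exactly the cross-task isolation that a thread-per-request design gives up.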
Re: Perpetuating Myths about the zSeries
Alan wrote: I would only ask that you complete the picture by factoring in costs. Changes in prices of energy, people, real estate, machines, etc., can bring on board workloads that were previously out of reach. This is the core of the TCO argument. Are you able to achieve acceptable results at a price you're willing to pay? And is that total cost (not just acquisition costs of s/w and h/w) to deploy the workload less than you pay now?

YES, ABSOLUTELY! (TCO used to be called economy of scale.) No argument there at all! The right tool for the right job for the best cost.

It's certainly easy to focus on just the technology, which can be used to make an initial cut (nope, probably still shouldn't use the mainframe for your next feature-length animated film), but let's not forget the other parts of the TCO equation. And it's just as easy to rely on old, outdated data (my original point). The technology is moving so fast. Linux on all sorts of platforms was just a gleam in someone's eye 5 years ago. It started getting pushed on the zSeries 3 years ago, and the software and hardware have made great strides in the last 3 years. So CGI may not be appropriate today - but what is there that we said was not appropriate 2 or 3 years ago that may be appropriate today on Linux zSeries?

= Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
"Computers are useless. They can only give answers." - Pablo Picasso
Perpetuating Myths about the zSeries
When Linux started on the s390 over 3 years ago, a lot of work was done to see what Linux on the mainframe was good for. But that was with lower levels of Linux (2.2.16, 2.4.7) and slower machines (MP2000, MP3000, G5, G6). Now that TREX is GA, has anyone gone back and re-examined the mythology?
- Is TREX capable of a wider range of applications, with more CPU-intensive workloads?
- How well do the current kernels (2.4.21 - RHEL 3 or SLES 8 SP3) scale, both in CPU workloads and I/O workloads?
- One of the presentations I noticed from IBM Germany indicated that a few Linux images were better than either many images or a single image. It turns out the single Linux was limited by memory. How does 64-bit affect Linux performance if given a lot of memory?

The reason I speculate is our old friend - bogomips. For various s/390-zSeries processors, they run something like this (SLES8 SP2, 2.4.19 kernel, except for the MP2000 at SLES7):
MP2000 - less than 200 bogomips
9672-zz7 (G6) - 630 bogomips
2064-116 (z1) - 820 bogomips
2084-b16 (TREX GA1) - 2400 bogomips!

The speed of the top-of-the-line zSeries has increased fourfold in the last 3-4 years. It seems that the literature is lagging what is now available in the field. Is zSeries more competitive now against other platforms than it was four years ago? (Caution: my 1749 MHz Intel registers 3538 bogomips, for whatever that is worth.)

= Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
"Computers are useless. They can only give answers." - Pablo Picasso
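For reference, the numbers Jim quotes come straight from the kernel: each CPU reports a bogomips line in /proc/cpuinfo (on s390 the key is "bogomips per cpu"). A small parser, my sketch:

```python
def parse_bogomips(cpuinfo_text):
    """Pull the BogoMIPS values out of /proc/cpuinfo content. Matches
    both the x86 key 'bogomips' and the s390 key 'bogomips per cpu'."""
    values = []
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower().startswith("bogomips"):
            values.append(float(value))
    return values

# usage: with open("/proc/cpuinfo") as f: parse_bogomips(f.read())
```

Note that this only reads out the kernel's calibration of its busy-wait loop; as the rest of the thread argues, it says nothing about workload capacity.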
Re: Perpetuating Myths about the zSeries
On Tuesday, 10/28/2003 at 08:57 PST, Jim Sibley [EMAIL PROTECTED] wrote: MP2000 - less than 200 bogomips; 9672-zz7 (G6) - 630 bogomips; 2064-116 (z1) - 820 bogomips; 2084-b16 (TREX GA1) - 2400 bogomips! The speed of the top of the line zSeries has increased fourfold in the last 3-4 years. It seems that the literature is lagging what is now available in the field. Is zSeries more competitive now against other platforms than it was four years ago? Why should anyone give a rat's behind about bogomips numbers? A four-fold increase in bogomips says only that bogomips runs 4 times as fast as it used to. Your question about comparisons of competitiveness is interesting, but not in the context of bogomips. I would ask if TCO has improved in the last 3-4 years. The CPU selection is, of course, only one variable in the equation. Alan Altmark Sr. Software Engineer IBM z/VM Development
Re: Perpetuating Myths about the zSeries
On Tue, 2003-10-28 at 10:57, Jim Sibley wrote: The speed of the top of the line zSeries has increased at four fold in the last 3-4 years. I'd be amazed if Intel hasn't done at least this well too. Adam
Re: Perpetuating Myths about the zSeries
On Tue, Oct 28, 2003 at 11:21:23AM -0600, Adam Thornton wrote: | On Tue, 2003-10-28 at 10:57, Jim Sibley wrote: | The speed of the top of the line zSeries has increased | at four fold in the last 3-4 years. | | I'd be amazed if Intel hasn't done at least this well too. It probably has. But CPU power isn't the whole story, either. I'd ask about how fast a machine built around an Intel/AMD CPU can deal with multiple devices concurrently transferring data for read or write I/O operations. If you need sheer computation power, zSeries is probably not right for you (how about PPC?). But if you need a large, high traffic, high uptime, database, you don't really want the kinds of machines typically built around Intel CPUs. -- - | Phil Howard KA9WGN | http://linuxhomepage.com/ http://ham.org/ | | (first name) at ipal.net | http://phil.ipal.org/ http://ka9wgn.ham.org/ | -
Re: Perpetuating Myths about the zSeries
Ignoring BogoMIPS arguments for the time being and returning to what I think Jim was really asking: our original recommendations as to what types of workloads were good matches for the 390 architecture were based on the G5/G6 boxes. Now that we have the z990 with its enhanced instruction pipelining, improved floating point performance, and faster cycle speed, do we need to revisit our assumptions about which workloads are now good candidates? I don't have a z990 (drats) so I can't answer this question. Actuarial calculations are probably still not good choices for zSeries, but what of the others we initially discarded?
Re: Perpetuating Myths about the zSeries
I have heard the story line, "If you have high transaction volume, then you don't want Big Blue IRON." Well, my question then is: what is a transaction? Is it a computation, is it prime number generation, is it a high-volume website, or is it a large database with a TByte of data running 1000s of SQL statements in a brief moment of time - for example, a web application adding users to a SecureWay LDAP (back end is DB2)? So what is a transaction? When is an instruction not a transaction, and if everything is in some way a transaction, what exactly is Linux on the MF good for, other than supporting a virtual environment of previously installed and largely underutilized distributed systems? Would I want to run 100 systems in a given z/VM, each with some number of JVMs - yes, WebSphere? We are talking about putting 6 JVMs onto a single Linux guest. I am looking forward to this, as during our POC (proof of concept) we never tested with more than 1, maybe 2. On the topic of availability, I am not sure I buy the whole "MF is better than 80x86 or Intel/AMD 64 server hardware" argument. Today, everything is redundant and everything is hot swappable. What is different is that getting a new or replacement stick of memory for the MF could cost you an automobile, and you won't find that stick of memory at the local computer store. Thoughts??? Eric Sammons (804)697-3925 FRIT - Infrastructure Engineering

Phil Howard [EMAIL PROTECTED] Sent by: Linux on 390 Port [EMAIL PROTECTED] 10/28/2003 01:14 PM Please respond to Linux on 390 Port To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries On Tue, Oct 28, 2003 at 11:21:23AM -0600, Adam Thornton wrote: | On Tue, 2003-10-28 at 10:57, Jim Sibley wrote: | The speed of the top of the line zSeries has increased | at four fold in the last 3-4 years. | I'd be amazed if Intel hasn't done at least this well too. It probably has. But CPU power isn't the whole story, either.
I'd ask about how fast a machine built around an Intel/AMD CPU can deal with multiple devices concurrently transferring data for read or write I/O operations. If you need sheer computation power, zSeries is probably not right for you (how about PPC?). But if you need a large, high-traffic, high-uptime database, you don't really want the kinds of machines typically built around Intel CPUs.

--
| Phil Howard KA9WGN | http://linuxhomepage.com/ http://ham.org/ |
| (first name) at ipal.net | http://phil.ipal.org/ http://ka9wgn.ham.org/ |
Re: Perpetuating Myths about the zSeries
Well, I have been told that some of the Intel servers are coming up to speed in the following areas, but in most other architectures you get an outage if you have a memory error. On a zArch box, you might not even see it, because the hardware will replace failing memory automagically. This is done strictly in the hardware, by sweeping through memory and testing during off periods to find these soft errors and fix them before they become a hard error.

And although it is not touted very much, the zArch implementations (and previous ones as well) are constantly doing internal cross-checking to verify correct results. That's one of the reasons that it is slower than other CPUs. On a zArch machine, you know you are getting the correct result (correct as in "the hardware did not cause the problem," not correct as in "your program was doing what you thought it was doing" grin). On some others, you are not sure, because a glitch could cause an undetected error. Sort of like running without parity on your memory. BTW, did you know that the data path on an Intel processor from main memory to the CPU does not have parity or ECC? So, even with ECC memory, errors can creep in if the error occurs during this movement of data. The zMachines do use ECC on these internal data paths.

Likewise, every zArch box has at least one extra CP that cannot be assigned. If a CP fails, this extra CP can usually take over operation of the failing CP without the underlying software needing to do any kind of recovery at all. The software will get an indication of a hardware error and the box will "call home." Again, on most other boxes, this would result in an outage. Perhaps just a re-IPL (uh, reboot), but you might be down until the server is fixed or replaced (most likely replaced, these days).
-- John McKown Senior Systems Programmer UICI Insurance Center Applications Solutions Team +1.817.255.3225

This message (including any attachments) contains confidential information intended for a specific individual and purpose, and its content is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this transmission, or taking any action based on it, is strictly prohibited.

-Original Message- From: Eric Sammons [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 28, 2003 12:27 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries

snip
Re: Perpetuating Myths about the zSeries
The level of redundancy is not the same in the Intel/AMD world as it is in the mainframe. In many cases this does not matter: you only need the redundancy if something goes wrong, and in many cases an Intel-based server is very reliable. It is hard to compare CPU power, but it seems to me that Intel has a big advantage in CPU speed right now.

Software is another matter. I am not a fan of Windows, and I find the number of hangs to be annoying. It does work well enough for most web applications, though. Linux seems to be very reliable, although I don't have much direct experience with it. z/OS has its problems: when we went from a weekly IPL to a bi-weekly IPL, our system crashed after about a week and a half because CSA was exhausted.

If I may ramble on a bit: one thing I have noticed is that all systems I have worked with have one common problem, which is programs that try to access memory regions outside of the allocated virtual memory for the process. On Windows this results in the famous general protection fault, on Unix it results in the famous segmentation fault, and on z/OS it is the famous SOC4. I wonder if there isn't a better way to deal with this problem than just aborting the program. Users find this problem really annoying.

-Original Message- From: Eric Sammons [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 28, 2003 10:27 AM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries

snip
Re: Perpetuating Myths about the zSeries
On Tuesday, 10/28/2003 at 01:27 EST, Eric Sammons [EMAIL PROTECTED] wrote:

> Thoughts???

I don't think we're trying to compare (in this discussion, anyway) the relative merits of different platforms. The question at hand is whether the latest generation of zSeries hardware and software has improved the environment for hosting Linux to the point that you could consider it for things you might not have 3 years ago. That [presumably] consists of an evaluation of management, reliability, function, performance, speed, capacity, and any other measurement you consider appropriate. Further, consider the technological advances of the discrete solutions. Are they advancing at the same rate as zSeries? In all areas? In some areas? Does that have any effect on decision-making?

Please limit responses to 10 typewritten pages, double-spaced, pica. Unsigned submissions will not be accepted by the editors for publication. You have 2 hours. Please begin. (And no talking...) ;-) Where's a PhD candidate when you need one?

Alan Altmark Sr. Software Engineer IBM z/VM Development
Re: Perpetuating Myths about the zSeries
-Original Message- From: Fargusson.Alan [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 28, 2003 12:49 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries

snip

> If I may ramble on a bit: one thing I have noticed is that all systems I have worked with have one common problem, which is programs that try to access memory regions outside of the allocated virtual memory for the process. On Windows this results in the famous general protection fault, on Unix it results in the famous segmentation fault, and on z/OS it is the famous SOC4. I wonder if there isn't a better way to deal with this problem than just aborting the program. Users find this problem really annoying.

Well, with UNIX (sigaction()) and z/OS programs (ESTAE / ESPIE), the programmer can catch this error and attempt to recover. I am not very Windows literate, but I'd lay odds it has something similar. I don't know what the OS itself could do to automatically fix this. I'd say this problem can be laid at the feet of the application programmer.

-- John McKown Senior Systems Programmer UICI Insurance Center Applications Solutions Team +1.817.255.3225
Re: Perpetuating Myths about the zSeries
> Well my question then is, what is a transaction?

A very good question, and exactly why the "how many PCs can I consolidate?" question is basically a useless one. The answer has to include what the PCs are doing and how they do it. It's comparing apples and pumpkins.

> So what is a transaction?

The best definition I've come up with is: a sequence of operations that accomplishes a single unique unit of business typical to an application, defined by the type of problem and type of application.

> Would I want to run 100 systems in a given z/VM each with some number of JVMs, yes WebSphere?

What do the applications running in the JVMs do? 8-)

> On the topic of availability I am not sure I buy the whole "MF is better than 80x86 or Intel/AMD 64 server hardware" argument. Today, everything is redundant and everything is hot-swappable.

But not to the point of being able to intercept failed instructions and re-dispatch on pre-installed spare hardware, unless you've bought a real Tandem or some such system, at which point you're not paying much less than the equivalent zSeries. Correcting failure in flight isn't yet possible in Intel hardware systems, and even with the Opteron and Itanium, it won't be easy. None of the Intel systems share instruction pipelines yet.

Now if some of the rumors about using a PowerPC core for the next-gen zSeries processors are true, or if IBM licenses some of the PowerPC or zSeries multicore fab technology to AMD or Intel to make MCM-style platters of Intel engines, that might change the picture drastically. I don't see that happening, but it'd be a very interesting change in the landscape.

> What is different is that a new or replacement stick of memory for the MF could cost you an automobile, and you won't find that stick of memory at the local computer store.

No, you'll find your IBM CE showing up at your door with the correct replacement in his hand before you even know it failed...8-)

-- db
Re: Perpetuating Myths about the zSeries
> On Windows this results in the famous general protection fault, on Unix it results in the famous segmentation fault, and on z/OS it is the famous SOC4. I wonder if there isn't a better way to deal with this problem than just aborting the program. Users find this problem really annoying.

This is programmer error -- the hardware is doing exactly what it should do, methinks. Correcting the developers usually helps, although that's much harder. I've yet to find a programming language or toolset that doesn't do exactly what the programmer tells it to do, even if it's stupid...8-)

-- db
Re: Perpetuating Myths about the zSeries
On Tuesday, 10/28/2003 at 10:49 PST, Fargusson.Alan [EMAIL PROTECTED] wrote:

> If I may ramble on a bit: one thing I have noticed is that all systems I have worked with have one common problem, which is programs that try to access memory regions outside of the allocated virtual memory for the process. On Windows this results in the famous general protection fault, on Unix it results in the famous segmentation fault, and on z/OS it is the famous SOC4. I wonder if there isn't a better way to deal with this problem than just aborting the program. Users find this problem really annoying.

[0C5, addressing exception, is what you get when you try to access memory not defined to your address space. 0C4, protection exception, is the result of trying to read or write memory that *is* part of your address space but for which memory protection mechanisms are in effect, such as storage keys and segment protection.]

The problem isn't with the fault. The problem is in how the system handles it. If the system provides a way for the application to catch it (VM and MVS do), then you could let the application try to recover, but such exceptions usually mean the program is hosed in some fashion. Odds of a successful recovery ("Oops! I'll go back and use the *right* pointer this time!") are poor.

We separate the sheep from the goats when we look at how the system deals with such an error. It should, of course, be architecturally impossible for a wild address to destroy any part of the operating system, including any list of resources being used by the program. So the operating system needs to close files, close sockets, release memory, decrement shared object counters, release held locks, delete semaphores, and so on. As though the program had Never Been. I've seen less robust operating systems lock up during this phase, or not clean up everything belonging to the program, including other programs.

Alan Altmark Sr. Software Engineer IBM z/VM Development
Re: Perpetuating Myths about the zSeries
On Tue, 2003-10-28 at 13:09, David Boyes wrote: This is programmer error -- the hardware is doing exactly what it should do, methinks. Correcting the developers usually helps, although that's much harder. I've yet to find a programming language or toolset that doesn't do exactly what the programmer tells it to do, even if it's stupid...8-) I think you could make the case that PROLOG, when it's behaving nondeterministically, is *perhaps* not doing what the programmer tells it to. Oh, and there's Quantum INTERCAL--which, alas, lived on the late, lamented assurdo.com--which might, or might not, have been doing what you told it to do. Adam
Re: Perpetuating Myths about the zSeries
Of course this is a programmer error, and the hardware is doing the right thing. But is the OS doing the right thing? The programmer didn't ask the OS to abort the program.

-Original Message- From: David Boyes [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 28, 2003 11:09 AM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries

snip

This is programmer error -- the hardware is doing exactly what it should do, methinks. Correcting the developers usually helps, although that's much harder. I've yet to find a programming language or toolset that doesn't do exactly what the programmer tells it to do, even if it's stupid...8-)

-- db
Re: Perpetuating Myths about the zSeries
-Original Message- From: Fargusson.Alan [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 28, 2003 1:35 PM To: [EMAIL PROTECTED] Subject: Re: Perpetuating Myths about the zSeries

> Of course this is a programmer error, and the hardware is doing the right thing. But is the OS doing the right thing? The programmer didn't ask the OS to abort the program.

Sure he did! He said: "I'm too busy, stupid, or egotistical to handle this problem. You do something."

In a case like this I cannot think of a generic action which could address the problem. If the program is attempting to update a protected or non-existent location, should the OS just ignore the store? What if the information to be stored is critical to the future running of the program? What now? On reading a protected or non-existent location, what should be returned? Binary zeros? What about combining these cases: a program thinks it has saved a critical calculation (such as your bonus for the year) but did it wrong, and the OS said, "OK, I'll just ignore that store." It then tries to get your bonus amount from that location, sees that it is zero, and doesn't give you your bonus. Wouldn't you prefer that something terrible happen so that the end user will be forced to check into it? Granted, a somewhat silly example, but the "what to do" simply cannot be generally answered by the OS. Only the application programmer can do this. And they refused ("My programs are never in error, in error, in error, in error, ...").

Actually, believe it or not, on an old IBM DOS system, a data exception would cause a message similar to:

JOB TERMINATED DUE TO PROGRAM REQUEST

A programmer screamed at me that his program did NOT request that the job be terminated! He was royally angry at this accusation. Not a good choice of words. I would have preferred something like:

JOB TERMINATED DUE TO PROGRAMMER ERROR OR STUPIDITY!
BIG GRIN

-- John McKown Senior Systems Programmer UICI Insurance Center Applications Solutions Team +1.817.255.3225
Re: Perpetuating Myths about the zSeries
> I think you could make the case that PROLOG, when it's behaving nondeterministically, is *perhaps* not doing what the programmer tells it to.

Mmf. The argument over whether data-driven languages like Prolog or Standard ML are deterministic is a very fine line (and has nothing to do with Linux, so I won't go into it here). Since such languages *are* still rule-evaluation based, barring logical contradictions they do have a predictable end state; thus, at some frame of reference, the answer is a reflection of the logic the programmer coded. If there is not a deterministic end state, then you have a positive feedback loop and combinatorial explosion. Here be dragons indeed.

> Oh, and there's Quantum INTERCAL--which, alas, lived on the late, lamented assurdo.com--which might, or might not, have been doing what you told it to do.

It's not clear that INTERCAL ever did *anything* useful, so I'll concede that one.

-- db
Re: Perpetuating Myths about the zSeries
- Original Message - From: David Boyes [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, October 28, 2003 1:09 PM Subject: Re: Perpetuating Myths about the zSeries

snip

> I've yet to find a programming language or toolset that doesn't do exactly what the programmer tells it to do, even if it's stupid...8-)

So... you haven't used VBScript? Perhaps it barely qualifies as a programming language, but just today I wrote a script to delete a file, then create a new one with the same name. The new one has the same DateCreated as the file that was deleted. Unless you wait long enough between delete and create... then you get the current date/time. "Working as designed," says Microsoft. :-/

-- jcf
Re: Perpetuating Myths about the zSeries
> Of course this is a programmer error, and the hardware is doing the right thing. But is the OS doing the right thing? The programmer didn't ask the OS to abort the program.

Ostensibly, the reason the OS is limiting access is to enforce resource access or utilization controls. If the application is attempting to do something that violates the controls placed on a resource, then the OS has two choices: extend the controls according to a policy, or deny the request. In either case, the application has to cope with either a soft failure or a hard failure. If the application programmer doesn't deal with a hard failure, or the system programmer doesn't deal with it by having a soft retry exit like ESTAE active, then what's the OS supposed to do? Guess?

The Symbolics OS had a policy setting for this sort of thing (setq `sym-access-violation-policy `some-kind-of-bitmap-vector-that-I-can't-remember), but it did have a bunch of stability problems, so the default was to abort processes that violated their resource constraints. I don't know too many others that have tried to handle this beyond "give control back to the app, and default to kill if no app handler."
Re: Perpetuating Myths about the zSeries
Alan wrote:

> Why should anyone give a rat's behind about bogomips numbers? A four-fold increase in bogomips says only that bogomips runs 4 times as fast as it used to. Your question about comparisons of competitiveness is interesting, but not in the context of bogomips. I would ask if TCO has improved in the last 3-4 years. The CPU selection is, of course, only one variable in the equation.

As a relative measure between the zSeries platforms, it is an indication of relative speed. Between platforms, it's really not useful.

I totally agree that TCO is the main issue, but what I see is a bunch of myths developing around Linux on zSeries, while the zSeries and its Linux move ahead faster than the published measurements indicate. It's like the old "TSO is slow" myth vs CMS. In the early years of the S/360, TSO was slow, and a lot of products tried to replace it (ROSCOE, etc.). Once TSO got improved, the myth persisted. The performance literature for Linux is way behind what its real capabilities are today.

= Jim Sibley Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computers are useless. They can only give answers." - Pablo Picasso
Re: Perpetuating Myths about the zSeries
On Tuesday, 10/28/2003 at 01:30 PST, Jim Sibley [EMAIL PROTECTED] wrote:

> It's like the old "TSO is slow" myth vs CMS. In the early years of the S/360, TSO was slow, and a lot of products tried to replace it (ROSCOE, etc.). Once TSO got improved, the myth persisted.

Yes, but in this case Everyone Knows it's true! :-)

> The performance literature for Linux is way behind what its real capabilities are today.

And here we agree. Bringing BogusMIPS into the discussion was like throwing a mouse in front of a cat. You distracted us from your real point: that things are better now than they used to be.

Alan Altmark Sr. Software Engineer IBM z/VM Development
Re: Perpetuating Myths about the zSeries
First of all, it was not a myth that TSO was slow when compared to CMS. And I'm not religious about CMS vs TSO. Second, I'd really like a concrete example of what performance literature is way behind for Linux. There were two redbooks this year that looked at many performance issues. If anything, they were productive in finding performance issues that needed to be addressed.

From: Jim Sibley [EMAIL PROTECTED]

> It's like the old "TSO is slow" myth vs CMS. In the early years of the S/360, TSO was slow, and a lot of products tried to replace it (ROSCOE, etc.). Once TSO got improved, the myth persisted. The performance literature for Linux is way behind what its real capabilities are today.

"If you can't measure it, I'm Just NOT interested!"(tm)

// Barton Robinson - CBW Internet: [EMAIL PROTECTED]
Velocity Software, Inc, 196-D Castro Street, Mountain View, CA 94041
Mailing Address: P.O. Box 390640, Mountain View, CA 94039-0640
VM Performance Hotline: 650-964-8867 Fax: 650-964-9012
Web Page: WWW.VELOCITY-SOFTWARE.COM //
Re: Perpetuating Myths about the zSeries
Jim said:

> It's like the old "TSO is slow" myth vs CMS. In the early years of the S/360, TSO was slow, and a lot of products tried to replace it (ROSCOE, etc.). Once TSO got improved, the myth persisted.

In a shop with heavy use of both VM (CMS) and MVS, one could gather evidence from objective comparison... but not all of us live in such a shop. Shucks. So again, the myth persists, as do reports countering it.

I'll tell ya an amazing myth: that the mainframe cannot sustain a high interrupt rate. I am still annoyed at the byte-at-a-time nature of interactive Unix, but I had to confess a decade ago, when exposed to AIX/370, that it doesn't kill your system. Ahh... the Cornell days.

-- R;