On Wed, 2005-07-13 at 10:45 -0300, Robin Murray wrote:
> It's not that difficult to create a reliable box when it's only running
> one or two static applications. Under those conditions it *better* be
> reliable, otherwise it's a mere toy. A mainframe is a different beast,
> offering many, many diverse applications to thousands of users all on one
> box. I've yet to see unix or windows accomplish the same thing. You have
> to compare apples to apples when talking about reliability.
No, actually, I don't. I take your point that a single static application is likely to be more reliable than a diverse collection of evolving applications. I stipulated that my two-year uptime was for a limited application on a dedicated box. (It's worth mentioning that I'd stripped, hardened, chroot'ed and firewalled the becrap out of it, too.)

But I can run other single applications on ~their~ own dedicated boxes, growing my portfolio until I have a cluster of applications taking up a handful of 19-inch racks. If one server goes down, the rest stay lit, which reduces my business exposure in the event of a failure. To improve AVAILABILITY, I add failover servers. My costs increase dramatically, along with the complexity of the installation, but my users see the same uptime as they would have with Big Iron in place. (That might be "thousands of users". You are aware that Yahoo and Google run on *nix servers, right? Lots and lots of them.)

The *nix and commodity hardware crowd knows about RAID and network-attached storage. They know about failover and transaction recovery. They know about load balancing. They know about DBMSs and journaled filesystems. They don't have CPU recovery yet (so far as I know), and they have no equivalent to Parallel Sysplex. But they've made significant strides in the past few years, make no mistake. "Objects in mirror are closer than they appear."

Hardware costs will continue to decline for the upstarts as virtualization becomes more popular. VMware works well, and there are free hacks for Linux such as UML or Xen that allow you to run multiple kernel images on a single box. This all leads to laughably low startup costs for a new application prototype. Sure, by the time you build up to mainframe scale you'll have spent quite a chunk of lucre, not to mention the staff you need to manage all those virtual images and NAS devices and ancillary support structure. But you can start cheaply, and business loves that.
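The failover point above is easy to put numbers on. A back-of-the-envelope sketch, assuming server failures are independent (a simplification) and using illustrative figures not taken from the original post:

```python
def redundant_availability(per_server, n):
    """Probability that at least one of n independent servers is up,
    given each server's individual availability."""
    return 1 - (1 - per_server) ** n

# Two commodity boxes, each up 99% of the time, as a failover pair:
pair = redundant_availability(0.99, 2)
print(f"{pair:.4%}")  # 99.9900% -- "four nines" from two ordinary machines
```

The same formula shows why the commodity approach scales: each additional (independent) failover node multiplies the remaining downtime by the per-server failure rate.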
I might go to management and propose a million-dollar mainframe implementation for an important new division, and someone else might go to management with a hundred-thousand-dollar proposal for a *nix prototype that could be scaled up over time (at significant cost). Which proposal do you think is going to win? My cheaper-in-the-long-run solution, or my competitor's cheaper-to-start solution? Most pointy-haired bosses can't see past the current fiscal year.

My rambling point in all this is that you ~can~ build something approaching Big Iron availability with today's commodity iron and Free/Open Source operating systems. It will cost you big money to do it right, and it will cost you people (continuing expense for staffing, office space, training, retraining)... but it ~can~ be done, and it's falling-over-easy to get a project started.

Yeah, I can compare apples and oranges.

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

