The point people are trying to make here is that there is a level of hardware,
staff expertise, ease of configuration and operational management,
environmentals, et al. that all come together to make an IT infrastructure
"Enterprise Class". You can have most of the other ingredients and, if
you're lucky, get Enterprise Class computing on craptastically cheap hardware.
Most of the time you won't get lucky. In most IT arenas, even the
downtime of 'firing up the spare' is unacceptable. Many mission-critical
applications require 100% 24x7 uptime: no service disruption for hardware
changes or hardware failures, no service disruption for software maintenance
(be that upgrades, bug fixes, OS patching or malware prevention), and no
service disruption in the event of a power failure. How many institutions would
scream bloody murder if their sales processing system went offline for 10
minutes for any reason? Would that 10 minutes cost $100,000 in lost
revenue? It might cost more, depending on who you are.
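
To put rough numbers on that, here is a minimal sketch; the revenue rate,
availability targets and function names in it are assumptions for
illustration, not figures from any particular shop:

# Rough back-of-the-envelope sketch (illustrative numbers only): how much
# downtime a given availability target actually allows per year, and what
# an outage costs at an assumed revenue rate.

MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability):
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability)

def outage_cost(minutes_down, revenue_per_minute):
    """Lost revenue for an outage, assuming a flat revenue rate."""
    return minutes_down * revenue_per_minute

for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.3%} uptime allows {allowed_downtime_minutes(target):8.1f} min/year down")

# The 10-minute example above, at an assumed $10,000 per minute of sales:
print(f"10 minute outage: ${outage_cost(10, 10_000):,.0f} lost")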

At some point a corporate IT infrastructure will grow large enough, complex
enough, or have enough demand placed on it that it MUST address these
issues. At that point the $299 Celeron beige-won't-fit-in-a-rack box becomes
foolish. We're now at the point where there will be NO new servers that
aren't multi-processor, loaded with RAM and running enterprise-level
virtualization (be it on Intel or z/Series). Why? Because if we don't, we will
have to spend several million dollars on a new data center because we're out of
room. The way we have operated at our location - nobody shares a
server because they want application isolation - must give way to
virtualization on enterprise-class hardware, because the environmental
considerations alone DEMAND it.
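
To put some very rough numbers behind the environmental argument, here is a
minimal sketch; every figure in it (server counts, wattages, electricity rate)
is an assumption for illustration, not a measurement from our floor:

# Back-of-the-envelope consolidation estimate (all figures assumed for
# illustration): N one-application-per-box servers vs. the same workloads
# virtualized onto a handful of larger hosts.

standalone_servers        = 200    # "nobody shares a server" boxes (assumed)
watts_per_standalone      = 300    # average draw per beige box (assumed)
hosts_after_consolidation = 4      # big multi-processor virtualization hosts (assumed)
watts_per_host            = 2500   # average draw per host (assumed)
dollars_per_kwh           = 0.10   # electricity rate (assumed)

before_kw = standalone_servers * watts_per_standalone / 1000
after_kw  = hosts_after_consolidation * watts_per_host / 1000

hours_per_year = 24 * 365
savings = (before_kw - after_kw) * hours_per_year * dollars_per_kwh

print(f"Power draw before: {before_kw:.1f} kW, after: {after_kw:.1f} kW")
print(f"Rough power savings: ${savings:,.0f}/year, before cooling and floor space")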





Alan Cox <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[email protected]>
Date: 06/26/2006 01:08 PM
To: [email protected]
Subject: Re: OPM zLinux Experience
Please respond to: Linux on 390 Port <[email protected]>

On Mon, 2006-06-26 at 11:19 -0500, James Melin wrote:
> Yes. Big reason. At what point does the box get overwhelmed by the rate of
> data through the firewall and cause a network slowdown. At what point will
> a single drive failure kill the box. What is the maximum sustainable data
> rate for that 7200 RPM drive? There's a reason 10K and 15K RPM drives exist.

Who cares, you buy ten. That's the PC server mentality most businesses
operate with. "It blew up" is followed by "that's OK, there are six spare
ones in the cupboard".

Systems are essentially disposable. If you need ECC you pay a little bit
extra for the RAM, and "serious" data lives on the corporate file
servers not the PC systems.

Alan

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
