Here we were back in 2001-2003 buying up cheap 1U dual-proc ~4GB RAM 
servers with 1-4 drives in them and putting them all over the datacenter. 
It all made a lot of sense in the push toward lots of smaller, cheaper 
components.

Now with virtualization it seems we're back to buying big iron again.  I'm 
seeing a lot more 128GB RAM, 8-core servers with 4 GigE drops and 
FC-attached storage on SANs.

Has anyone really costed this out to figure out what makes sense here?

An awful lot of our virtualization needs at work *could* be solved simply 
by taking some moderately sized servers (we have lots of 16GB 4-core 
servers lying around) and chopping them up into virts and running all the 
apps that we have 4 copies of that do *nothing at all*.  Lots of the apps 
we have require 1% CPU, 1% I/O, 1% network bandwidth and maybe an image 
with 512MB-1GB of RAM (and *never* peak above that) -- and I thought the 
idea behind virtualization was to take existing hardware and just be more 
efficient with it.
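
Back-of-envelope, one of those existing boxes works out something like 
this (the per-guest RAM, host overhead, and CPU numbers below are my 
assumptions from the figures above, not measurements):

# Rough consolidation math for one of our existing 16GB / 4-core boxes.
# The overhead and per-guest figures are assumptions, not measurements.
host_ram_gb      = 16    # existing box
host_overhead_gb = 2     # hypervisor / service console, assumed
guest_ram_gb     = 1.0   # worst case of the 512MB-1GB images above
guest_cpu_pct    = 1     # ~1% of one core at steady state, assumed

guests_by_ram = int((host_ram_gb - host_overhead_gb) / guest_ram_gb)
guests_by_cpu = (4 * 100) // guest_cpu_pct   # 4 cores at ~1% per guest

print(guests_by_ram, guests_by_cpu)   # 14 by RAM, 400 by CPU
# RAM runs out an order of magnitude before CPU does, so the existing
# 16GB boxes can already soak up a dozen-plus of these do-nothing apps each.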

Instead I'm seeing those apps moved onto BigIron and BigStorage with an 
awful lot of new capex and licensing spend on VMware.  So where, exactly, 
are the cost savings?

So, did I just turn into a dinosaur in the past 5 years and IT has moved 
entirely back to large servers and expensive storage -- or can someone 
make sense of the current state of commodity vs. BigIron for me?

It definitely seems absurd that the most efficient way to buy CPU these 
days is 8-core servers when so many of the apps we have to support only 
use about 1% of a core.  Without virtualization that becomes a huge 
waste.  In order to utilize that CPU efficiently, you need to run 
many smaller images.  Because of software bloat, you need big RAM servers 
now.  Then when you look at the potentially bursty I/O needs of the 
server, you go with expensive storage arrays to get the IOPS and require 
fibre channel, and now that you have the I/O to drive the network, you 
need 4+ GigE drops per box.
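
To make the capex question concrete, here's the sort of per-guest 
comparison I have in mind -- every dollar figure is a made-up placeholder, 
not a quote:

# Per-guest acquisition cost: reusing sunk-cost boxes vs. new BigIron + SAN.
# Every dollar figure is a placeholder assumption -- substitute real quotes.

def cost_per_guest(hardware, storage, licensing, guests):
    """Spread the up-front spend across the guests one host can carry."""
    return (hardware + storage + licensing) / guests

# Existing 16GB/4-core box: hardware is already sunk, free hypervisor assumed.
existing = cost_per_guest(hardware=0, storage=0, licensing=0, guests=14)

# New 128GB/8-core box with FC HBAs, a slice of SAN storage, and VMware
# licensing, packing guests much more densely (all figures assumed).
big_iron = cost_per_guest(hardware=20000, storage=15000, licensing=7000,
                          guests=100)

print(existing, big_iron)   # 0.0 vs. 420.0 per guest under these assumptions
# The new gear only starts to pay off once power, space, admin time, and
# features like live migration get folded in -- which is the analysis I'd
# like to see someone actually do.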

At the same time, throwing away all the 2006-vintage 2-core 16GB servers 
and spending all this new capex on BigIron to replace them seems like it 
isn't saving much money...  Has anyone done a careful *independent* performance 
analysis of what their applications are actually doing (for a large 
web-services oriented company with ~100 different app farms or so) and 
looked at the different costing models and what performance you can get 
out of virtualization?