Lamont Granquist <[email protected]> writes:

> Here we were back in 2001-2003 buying up cheap 1U dual-proc ~4GB RAM
> servers with 1-4 drives in them and putting them all over the
> datacenter. It all made a lot of sense in the push to having lots of
> smaller, cheaper components.
>
> Now with virtualization it seems we're back to buying big iron again.
> I'm seeing a lot more 128GB RAM 8-core servers with 4 GigE drops and
> FC-attached storage to SANs.
>
> Has anyone really costed this out to figure out what makes sense here?
I have. And my competitors have as well. I use 32GiB RAM / 8-core
servers: $1500 in parts, if you cheap out on local disk. As far as I can
tell, this is also what my competition uses.

> Instead I'm seeing those apps moved onto BigIron and BigStorage with an
> awful lot of new capex and licensing spending on VMware. So where,
> exactly are the cost savings?
>
> So, did I just turn into a dinosaur in the past 5 years and IT has moved
> entirely back to large servers and expensive storage -- or can someone
> make sense of the current state of commodity vs. BigIron for me?

Uh, talk to an 'enterprise' salesman. My guess is that it is just another
way for the sales guys to soak large companies.

Shared storage is really, really nice, but local storage is pretty good,
and it is a small fraction of the cost.

> It definitely seems absurd that the most efficient way to buy CPU these
> days is 8-core servers when there's so many apps that only use about 1%
> of a core that we have to support. Without virtualization that becomes
> a huge waste. In order to utilize that CPU efficiently, you need to run
> many smaller images. Because of software bloat, you need big RAM
> servers now. Then when you look at the potentially bursty I/O needs of
> the server, you go with expensive storage arrays to get the IOPS and
> require fibre channel, and now that you have the I/O to drive the
> network, you need 4+ GigE drops per box.

The most efficient way to run servers now is to buy 32GiB/64GiB of RAM,
pack it in a box with two quad-core Opterons, and add a bunch of local
disk. Shared storage makes things a lot easier, yes, but it's not
required. If you need the IOPS, buy some decent SAS local disk. If you
are like me / Linode / Slicehost, buy SATA and tell your customers to
buy enough RAM to cache the data they really need.

> At the same time, throwing away all the 2006-vintage 2-core 16GB servers
> and replacing them with all this capex on BigIron seems like it isn't
> saving much money... Has anyone done a careful *independent* performance
> analysis of what their applications are actually doing (for a large
> web-services oriented company with ~100 different app farms or so) and
> looked at the different costing models and what performance you can get
> out of virtualization?

Well, I can tell you what I'm doing, and I'm pretty sure Linode does
something fairly similar (though Linode uses Intel CPUs, I'm pretty sure
they still go 8-core/32GiB RAM; I'm basing the 32GiB on an old post of
Caker's and the Intel CPUs on /proc/cpuinfo on a client's box).

My personal cutoff is 8GiB: if a box has less than that, move it into a
virtual image on one of my super-cheap 32GiB/8-core boxes and you'll
save a bundle in power costs.

As for your 2-core, 16GiB RAM servers: if you virtualize, I /strongly
recommend/ dedicating a core to the master control system for doing I/O
(the dom0 in Xen, which is what I'm familiar with; there's a sketch of
the knobs I mean at the end of this message), but that becomes harder
when it takes you down to one CPU.

How much power do the things eat? With low-power CPUs, my 8-core Opteron
systems eat around 2A at 120V. Over the 3-year life of a piece of
hardware, power costs end up being more than hardware costs, so I'm
pretty quick to get rid of my old garbage. My power/cooling/rackspace
costs for one of these servers are about $75/month (back-of-the-envelope
math at the end of this message).

(Of course, if you are paying 'enterprise salesman' rates for hardware,
an 8-core, 32GiB RAM rig is probably going to set you back $4-$6K, and
while it probably comes with better disk and RAM that is 20% faster,
that changes the balance. Power is expensive, but not _that_ expensive;
in that case it often makes sense to run hardware until it won't run any
more. Personally, I'd shoot the salesman and get on with buying cheap
hardware, but apparently 'enterprises' don't like doing that sort of
thing.)
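To be concrete about dedicating a core to dom0: a minimal sketch of the
knobs I mean, assuming Xen 3.x booted from GRUB. The dom0_max_vcpus and
dom0_vcpus_pin hypervisor options and the per-guest cpus setting are
standard Xen; the file paths and kernel names below are just examples,
not necessarily what you'll have.

    # /boot/grub/menu.lst -- boot the hypervisor with a single dom0
    # vCPU, pinned to one physical CPU
    kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin
    module /boot/vmlinuz-2.6.18-xen ro root=/dev/sda1
    module /boot/initrd-2.6.18-xen.img

    # in each domU config file, keep guests off CPU 0 so dom0 has it
    # to itself for I/O (assumes an 8-core box)
    cpus = "1-7"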
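And the back-of-the-envelope power math. The 2A draw, the $75/month colo
figure, and the $1500 parts cost are my numbers from above; the
$0.10/kWh electricity rate is an assumption for illustration, so plug in
your own.

    # 3-year running cost vs. purchase price for one 8-core/32GiB box.
    # Draw, colo, and parts figures are from this message; the $/kWh
    # rate is assumed -- substitute your own.
    watts = 2 * 120             # ~2A at 120V, measured at the plug
    rate_per_kwh = 0.10         # assumed electricity rate, $/kWh
    months = 36                 # 3-year hardware life

    kwh_per_month = watts / 1000.0 * 24 * 30
    electricity = kwh_per_month * rate_per_kwh * months
    all_in = 75 * months        # power + cooling + rackspace

    print("electricity alone: $%.0f" % electricity)                # ~$622
    print("all-in colo:       $%.0f vs. $1500 in parts" % all_in)  # $2700

    # consolidating four sub-8GiB boxes onto one such host saves
    # roughly three boxes' worth of colo cost:
    print("3-year savings:    $%.0f" % (3 * 75 * months))          # $8100

At $1500 a box the recurring costs dominate, so replacing old hardware
pays for itself quickly; at $4-$6K a box they don't, and the balance
flips toward running it until it dies.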
--
Luke S. Crawford
http://prgmr.com/xen/ - Hosting for the technically adept
http://nostarch.com/xen.htm - We don't assume you are stupid.
