I think, like all things, there's what the vendor wants you to do and what makes sense, and they don't always match exactly. We're getting more "big iron" boxes w/ virtualization nowadays, but by big iron I'm referring to 8-core (Intel or AMD) boxes w/ 32-64GB of RAM that we pay $5-10k each for, running (mostly) free Xen, and using to launch new apps or to consolidate and save on existing costs like power, space, etc. (people/administration/management costs, too). We still use 1U servers with fewer cores and less RAM; we just have more capacity now and can grow more easily. We also use virtualization for dev, QA, etc. in a lot of cases, because it's easy to spin up a new "server" when you can do it virtually, let people mess it up, and then just spin up a new one when they need it clean.
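To make that concrete, a throwaway dev guest for us is basically just a small Xen domU config plus a logical volume. A minimal sketch (names, paths, and sizes are all hypothetical):

    # /etc/xen/dev-guest-01.cfg -- hypothetical example config for xm
    name    = "dev-guest-01"
    memory  = 1024                                  # MB; dev images stay small
    vcpus   = 1
    kernel  = "/boot/vmlinuz-2.6.18-xen"            # dom0-provided kernel
    ramdisk = "/boot/initrd-2.6.18-xen.img"
    disk    = ['phy:/dev/vg0/dev-guest-01,xvda,w']  # LV carved off the big box
    vif     = ['bridge=xenbr0']
    root    = "/dev/xvda1 ro"
    extra   = "console=xvc0"

"xm create /etc/xen/dev-guest-01.cfg" brings it up; when somebody has wrecked it, you "xm destroy" it, re-image the logical volume from a golden image, and create it again. That's why the "mess it up, get a clean one" workflow is so cheap for us.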
In our case, we did end up tossing some of our old machines, but that wasn't about virtualization but data center costs. We're in NYC, and power and space are at a premium, to the point that high-5- and low-6-figure hardware purchases can often pay for themselves if they achieve better bang for the buck (computations per amp, per U, etc.). For instance, we just reduced our cage from ~35 racks to ~15 racks, while increasing capacity, by replacing servers that were only 2-3 years old with brand-new low-voltage servers. 95% of that was new hardware; 5% was virtualization onto "big iron" (and again, note the prices above).

Most of the time, when you have well-utilized servers, virtualization as a consolidation play doesn't make a lot of sense. We fit into the over-100-server range you mentioned, and we need those machines for capacity, so trying to virtualize when many of our servers are close to 100% utilization at peak would be silly. It didn't make sense for most of our servers, but we're happy with it where we're using it.

(We do use VMware, but in a more limited fashion (mostly Windows-only), and again, our "big iron" boxes cost less than $10k, so it's not a huge hit. The concept of running VMware on hundreds of physical boxes and doing minimal consolidation seems silly to me in most cases. The one place we use virtualization without much consolidation is when we want to be able to fail over a live running box from one piece of hardware to another - in some cases, we've added an extra layer of reliability by virtualizing previously physical boxes. But even in those cases, we often keep one or two members of the cluster physical, just in case something happens to our shared storage, or there are weird networking issues, or whatever.)

Nicholas

On Mon, Sep 14, 2009 at 3:12 PM, Lamont Granquist <[email protected]> wrote:

> Here we were back in 2001-2003 buying up cheap 1U dual-proc ~4GB RAM
> servers with 1-4 drives in them and putting them all over the datacenter.
> It all made a lot of sense in the push to having lots of smaller, cheaper
> components.
>
> Now with virtualization it seems we're back to buying Big Iron again. I'm
> seeing a lot more 128GB RAM 8-core servers with 4 GigE drops and
> FC-attached storage to SANs.
>
> Has anyone really costed this out to figure out what makes sense here?
>
> An awful lot of our virtualization needs at work *could* be solved simply
> by taking some moderately sized servers (we have lots of 16GB 4-core
> servers lying around) and chopping them up into virts and running all the
> apps that we have 4 copies of that do *nothing at all*. Lots of the apps
> we have require 1% CPU, 1% I/O, 1% network bandwidth, and maybe an image
> with 512MB-1GB of RAM (and *never* peak above that) -- and I thought the
> idea behind virtualization was to take existing hardware and just be more
> efficient with it.
>
> Instead I'm seeing those apps moved onto BigIron and BigStorage with an
> awful lot of new capex and licensing spending on VMware. So where,
> exactly, are the cost savings?
>
> So, did I just turn into a dinosaur in the past 5 years and IT has moved
> entirely back to large servers and expensive storage -- or can someone
> make sense of the current state of commodity vs. BigIron for me?
>
> It definitely seems absurd that the most efficient way to buy CPU these
> days is 8-core servers when there are so many apps that only use about 1%
> of a core that we have to support. Without virtualization that becomes a
> huge waste.
> In order to utilize that CPU efficiently, you need to run many smaller
> images. Because of software bloat, you need big-RAM servers now. Then when
> you look at the potentially bursty I/O needs of the server, you go with
> expensive storage arrays to get the IOPS and require fibre channel, and now
> that you have the I/O to drive the network, you need 4+ GigE drops per box.
>
> At the same time, throwing away all the 2006-vintage 2-core 16GB servers
> and replacing them with all this capex on BigIron seems like it isn't
> saving much money... Has anyone done a careful *independent* performance
> analysis of what their applications are actually doing (for a large
> web-services-oriented company with ~100 different app farms or so) and
> looked at the different costing models and what performance you can get
> out of virtualization?
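P.S. On the "has anyone really costed this out" question: we did a crude version of this math before shrinking the cage. A back-of-envelope sketch in Python - every number below is invented for illustration, so plug in your own capex, amp, and rack rates:

    # Hypothetical $/core-month comparison: old 1U box vs. new "big iron".
    # All inputs are made up; substitute your own datacenter's numbers.

    def monthly_cost(capex, life_months, amps, ru, per_amp, per_ru):
        """Amortized capex plus power and rack space, per month."""
        return capex / life_months + amps * per_amp + ru * per_ru

    # Old 1U dual-core: $2k, ~3A draw, 1U, 36-month depreciation
    old = monthly_cost(capex=2000.0, life_months=36, amps=3.0, ru=1,
                       per_amp=25.0, per_ru=50.0)

    # New 8-core "big iron": $9k, ~5A draw, 2U, 36-month depreciation
    new = monthly_cost(capex=9000.0, life_months=36, amps=5.0, ru=2,
                       per_amp=25.0, per_ru=50.0)

    print("old: $%.2f/month, $%.2f per core-month" % (old, old / 2))
    print("new: $%.2f/month, $%.2f per core-month" % (new, new / 8))

With those (made-up) rates the big box comes out well ahead per core-month - but only if you can actually fill the cores. Against servers that are already near 100% utilization at peak, consolidation buys you nothing, and once you add VMware licensing the math can flip the other way. That's the whole argument above in two print statements.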
_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
