On Mon, Sep 14, 2009 at 1:49 PM, <[email protected]> wrote:

> On Mon, 14 Sep 2009, Lamont Granquist wrote:
>
> > Here we were back in 2001-2003 buying up cheap 1U dual-proc ~4GB RAM
> > servers with 1-4 drives in them and putting them all over the
> > datacenter. It all made a lot of sense in the push to having lots of
> > smaller, cheaper components.
> >
> > Now with virtualization it seems we're back to buying big iron again.
> > I'm seeing a lot more 128GB RAM 8-core servers with 4 GigE drops and
> > FC-attached storage to SANs.
> >
> > Has anyone really costed this out to figure out what makes sense here?
>
> mostly no ;-) a lot of virtualization is done 'because you can'
>
> but keep in mind that an 8-core server is only 2 sockets nowadays, the same
> as those dual-proc systems were.
>
> your 128G 8-core server can still be a 1U box, and its cost is very
> similar to the 4G 2-core system you purchased in 2001. the FC drops vs
> local disks are different, but that boils down to virtualizing the
> drives.
>
> don't forget that 6-core per chip processors are out now, and 8-core per
> chip processors are expected within a year.
>
> so at that point your cheap 1U box will probably be a 16-core box.
>
> > An awful lot of our virtualization needs at work *could* be solved simply
> > by taking some moderately sized servers (we have lots of 16GB 4-core
> > servers lying around) and chopping them up into virts and running all the
> > apps that we have 4 copies of that do *nothing at all*.  Lots of the apps
> > we have require 1% CPU, 1% I/O, 1% network bandwidth and maybe an image
> > with 512MB-1GB of RAM (and *never* peak above that) -- and I thought the
> > idea behind virtualization was to take existing hardware and just be more
> > efficient with it.
> >
> > Instead I'm seeing those apps moved onto BigIron and BigStorage with an
> > awful lot of new capex and licensing spending on VMware.  So where,
> > exactly, are the cost savings?
>
> many of those systems are due for retirement anyway. also the new systems
> are more efficient in terms of space and power, so cost less in the
> datacenter to run (for many datacenters, the cost of the datacenter space,
> power, and cooling is as much or more than the cost of the servers you are
> running there. so if you can shrink those requirements you can save cash)
>
> > So, did I just turn into a dinosaur in the past 5 years and IT has moved
> > entirely back to large servers and expensive storage -- or can someone
> > make sense of the current state of commodity vs. BigIron for me?
>
> what used to be big iron is today's commodity hardware.
>
> today's big iron is 64+ cores with corresponding amounts of ram
>
> > It definitely seems absurd that the most efficient way to buy CPU these
> > days is 8-core servers when there are so many apps that only use about
> > 1% of a core that we have to support.  Without virtualization that becomes a
> > huge waste.  In order to utilize that CPU efficiently, you need to run
> > many smaller images.  Because of software bloat, you need big RAM servers
> > now.  Then when you look at the potentially bursty I/O needs of the
> > server, you go with expensive storage arrays to get the IOPS and require
> > fibre channel, and now that you have the I/O to drive the network, you
> > need 4+ GigE drops per box.
>
> and this is the justification for virtualization
>
> > At the same time, throwing away all the 2006-vintage 2-core 16GB servers
> > and replacing them with all this capex on BigIron seems like it isn't
> > saving much money...  Has anyone done a careful *independent* performance
> > analysis of what their applications are actually doing (for a large
> > web-services oriented company with ~100 different app farms or so) and
> > looked at the different costing models and what performance you can get
> > out of virtualization?
>
> the thing to remember is that the benefits (or lack of benefits) of
> virtualization are going to vary greatly depending on your workload and
> applications.
>
> there are a lot of studies out there, but they are almost always from the
> virtualization companies.
>
> I've seen a lot of IT managers jump on the virtualization bandwagon, but
> then when forced to defend their numbers have been unable to do so.
>
> one thing that a lot of people initially see is 'hey, we are reducing the
> number of servers that we run, so we will also save a huge amount of
> sysadmin time'. what they don't recognize is that you have the same number
> of systems to patch (plus the new host systems that you didn't have
> before), so unless you implement management software at the same time your
> time requirements go up, not down.
>
> David Lang
>


I'm not sure anyone is running VMware on "Big Iron" servers.  It does not
run on mainframes and minis.  It runs on commodity Intel/AMD servers.  It
happens that these "commodity" servers are getting larger, but that has been
the case throughout the history of the PC.  I run 6GB of RAM on my personal
computer at home and make use of that RAM on a regular basis, as well as the
4 cores in the box.  5 years ago the same computing horsepower was almost
exclusively in datacenter environments and more than likely would have fit
what I perceive as your definition of Big Iron.

Part of the savings comes from putting 15-30 virtual servers in the space
where you would fit 4 pizza-box physical servers.  You get even more density
by going to a blade environment for your virtual cluster.  We are getting
several hundred "servers" per 8U of rack space.  Once we have fully
populated our blade farm I expect to see a couple of thousand "servers" in a
16U pair of blade enclosures.
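
Rough back-of-envelope on that density claim (every number below is an
assumption I'm plugging in for illustration, not a measurement from our
farm):

    # back-of-envelope consolidation density -- all figures are assumptions
    guests_per_host = 20           # assume ~15-30 guests per virtualization host
    hosts_per_8u_enclosure = 16    # assume a 16-blade enclosure in 8U

    guests_per_8u = guests_per_host * hosts_per_8u_enclosure
    rack_units_if_physical = guests_per_8u * 1   # each guest as its own 1U pizza box
    print("guests per 8U enclosure:", guests_per_8u)                    # ~320
    print("rack units if each were physical:", rack_units_if_physical)  # ~320U, roughly 8 racks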

Now the power and cooling are a little harder to quantify, but at a 15-to-1
ratio there is going to be a fair amount of savings on both cooling and
power, even with the considerably larger server.

If you're strictly talking about 1% utilized servers you should see many more
than 15 guests on a 4x4-core VMware server with 128GB+ of RAM.
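
A quick sanity check on that (the guest sizes and hypervisor overhead below
are assumptions, not a sizing guide) shows that RAM, not CPU, is what caps
the guest count:

    # crude capacity estimate for lightly loaded guests on one host
    host_cores = 4 * 4            # 4 sockets x 4 cores each
    host_ram_gb = 128
    hypervisor_overhead_gb = 8    # assumed reservation for the hypervisor itself
    guest_cpu_fraction = 0.01     # ~1% of one core per guest
    guest_ram_gb = 1.0            # ~1GB per guest, never bursting above it

    cpu_limited = int(host_cores / guest_cpu_fraction)
    ram_limited = int((host_ram_gb - hypervisor_overhead_gb) / guest_ram_gb)
    print("CPU-limited guest count:", cpu_limited)    # 1600
    print("RAM-limited guest count:", ram_limited)    # 120
    print("practical ceiling:", min(cpu_limited, ram_limited), "guests")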

There are other benefits that are not frequently discussed.  If the majority
of your systems are SAN-attached you more than likely have at least a pair
of HBAs in each one.  Migrate that to VMware and you now have 4-8 HBAs for
those 15 servers rather than 30+ HBAs for the equivalent physical systems.
Also, not everyone is running their VMware farms on the highest-end EMC DMX
or Hitachi SAN backend.  There are a lot of successful smaller installations
running with NFS or iSCSI as the backend storage.  If you have low disk I/O
requirements and a stable, well-run network infrastructure, that can be
pretty close to, and sometimes cheaper than, DAS drives.


VMware is not cheap until you reach a decent density per host.  But when
each new row of racks costs a million, it becomes VERY cheap VERY fast.  And
VMware is no longer the only viable virtualization option in the commodity
hardware realm.  It is just the most mature, with the largest third-party
tool support and install base.
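
To make the break-even point concrete (the prices below are purely
hypothetical placeholders, not quotes for any real hardware or licensing),
per-guest cost falls below the price of a dedicated box once you pack enough
guests onto each host, before even counting rack space, power, and cooling:

    # hypothetical per-guest cost vs. consolidation density
    host_hw_cost = 15000.0        # hypothetical price of one big virtualization host
    per_host_license = 7000.0     # hypothetical per-host hypervisor licensing
    dedicated_box_cost = 3000.0   # hypothetical price of one dedicated 1U server

    for guests in (5, 10, 15, 30, 60):
        per_guest = (host_hw_cost + per_host_license) / guests
        verdict = "cheaper than dedicated" if per_guest < dedicated_box_cost else "still pricier"
        print(f"{guests:3d} guests/host -> ${per_guest:8.2f} per guest ({verdict})")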

Charlie


-- 

Marie von Ebner-Eschenbach - "Even a stopped clock is right twice a day."
<http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac.html>
_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
