Lamont Granquist <[email protected]> writes:

> Here we were back in 2001-2003 buying up cheap 1U dual proc ~4GB RAM server
> with a 1-4 drives in them and putting them all over the datacenter.  It all
> made a lot of sense in the push to having lots of smaller, cheaper
> components.
>
> Now with virtualization it seems we're back to buying big iron again.  I'm
> seeing a lot more 128GB RAM 8-core servers with 4 GigE drops and FC
> attached storage to SANs.
>
> Has anyone really costed this out to figure out what makes sense here?
>
> An awful lot of our virtualization needs at work *could* be solved simply by
> taking some moderately sized servers (we have lots of 16GB 4-core servers
> lying around) and chopping them up into virts and running all the apps that
> we have 4 copies of that do *nothing at all*.  Lots of the apps we have
> require 1% CPU, 1% I/O, 1% network bandwidth and maybe an image with
> 512MB-1GB of RAM (and *never* peak above that) -- and I thought the idea
> behind virtualization was to take existing hardware and just be more
> efficient with it.

For what it is worth, I have found that "container" solutions often scale to
the actual workload better than "pretend hardware" virtualization does,
because they share RAM between the containers much more easily for these
tiny, almost-nothing applications.
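To make the RAM-sharing point concrete: Linux cgroups (one of the mechanisms
container tools are built on) cap memory per group without preallocating it,
so a 512MB limit costs nothing until the app actually uses the pages.  A
minimal sketch, assuming a root shell on a host with the cgroup v1 memory
controller mounted at /sys/fs/cgroup/memory; the app name, path, and limit
are illustrative:

```shell
# Create a group for one tiny app and cap it at 512 MB.  Unlike a VM's
# memory allocation, this is a ceiling, not a reservation -- RAM the app
# never touches remains available to every other group on the host.
mkdir /sys/fs/cgroup/memory/tinyapp
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/tinyapp/memory.limit_in_bytes

# Move this shell into the group, then exec the app so it (and any
# children) are accounted against the 512 MB limit.
echo $$ > /sys/fs/cgroup/memory/tinyapp/cgroup.procs
exec /usr/local/bin/tinyapp   # hypothetical app binary
```

Tools like OpenVZ and LXC layer namespaces on top of this same accounting,
which is why dozens of 1%-utilization apps pack onto one moderately sized box.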

        Daniel

-- 
✣ Daniel Pittman            ✉ [email protected]            ☎ +61 401 155 707
               ♽ made with 100 percent post-consumer electrons
   Looking for work?  Love Perl?  In Melbourne, Australia?  We are hiring.

_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
