On Sunday, November 20, 2016 09:06:26 AM Rich Freeman wrote:
> On Sun, Nov 20, 2016 at 8:45 AM, Harry Putnam <[email protected]> wrote:
> > "J. Roeleveld" <[email protected]> writes:
> >> Also, overcommitting CPUs has a bad influence on performance,
> >> especially if the host wants to use all cores as well.
> > 
> > That is what I asked advice about.  What do you call
> > `overcommitting'.  For example with only 1 Vbox vm started and no
> > serious work being done by the Windows 10 OS.  On an HP xw8600 with
> > older 2x Xeon 5.60 3.00GHz with 32 GB RAM
> 
> IMO over-committing CPU isn't actually THAT bad.  The CPU obviously
> gets divided n ways, but that's as far as it goes.  There isn't that
> much overhead switching between VMs (though there certainly is some).

True, it's not too bad. The problem, however, is that the host OS is special: 
it has to manage access to all the shared resources. For this reason, I tend 
to dedicate specific CPU cores to the host.
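
With VirtualBox you can do that from the host side with taskset; this is only 
a sketch (the core numbers assume a 2x quad-core box and the VM name is a 
placeholder):

  # keep cores 0-1 free for the host, pin the VM to cores 2-7
  taskset -c 2-7 VBoxHeadless --startvm "windows-10"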

> Over-committing RAM on the other hand can definitely cause more
> serious issues, because then you're dealing with swap.  Dividing 1 CPU
> 3 ways gives you 1/3rd of a CPU (but collectively the 3 VMs are
> putting out close to a full CPU's worth of work).  If you're
> over-committing RAM and you go into swap, then the performance of all
> your hosts might drop considerably, adding up to WAY less than the
> total your box is capable of.

VMware actually made this worse with their desktop version a few years ago. 
Not sure if they still do.
What they did was synchronize guest RAM with an on-disk copy. The only way to 
disable that was to edit the VM config file using undocumented flags. I 
stopped using VMware shortly after that.

> If your host is windows then this isn't an option for you (seriously,
> you should re-consider that), but if you could use a linux host
> another solution is containers.  In general they are FAR more flexible
> around RAM use and of course RAM tends to be the most precious
> commodity when you're running guests of any kind.  With a container
> you don't have to pre-allocate the RAM, so if Gentoo needs 8GB of RAM
> right now it isn't as big a deal because in 15min when you're done
> building it will go back down to 100MB or whatever it actually needs
> to run.

Containers are nice, I agree, although I tend to see them more as a much 
better implementation of a chroot jail.
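
The RAM flexibility Rich describes follows from that: memory is only capped 
if you ask for it. With LXC, for example, a cap is a single line in the 
container's config (the 8G value is just an example):

  # optional memory cap for the container (cgroup v1 style)
  lxc.cgroup.memory.limit_in_bytes = 8G

Leave it out and the container simply uses whatever the host can spare.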

> Back when I was running Gentoo VMs I would typically set the RAM use
> to something fairly minimal (think ~1GB or less).  Then when I was
> doing updates I'd up the setting to basically all the free RAM on my
> host and allocate multiple CPU cores to it, then mount a tmpfs on
> /var/tmp.  When I was done building I'd shrink it back down to a
> normal config.  And I wouldn't be doing builds on multiple hosts at
> once.
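
For reference, the tmpfs Rich mentions would be a line like this in 
/etc/fstab (the size is only an example and has to fit in RAM):

  tmpfs   /var/tmp   tmpfs   size=16G,mode=1777   0 0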

Or use a dedicated build server/VM and only install binary packages on the 
VMs. This also has the benefit of speeding up updates on the VMs themselves.
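
Roughly, and only as a sketch (the URL is a placeholder): have the build host 
create binary packages and let the VMs pull them in:

  # build host: /etc/portage/make.conf
  FEATURES="buildpkg"

  # client VMs: /etc/portage/make.conf
  PORTAGE_BINHOST="http://buildhost.example/packages"

  # client VMs: update from the binhost
  emerge --ask --update --deep --newuse --getbinpkg @world

The packages directory on the build host still needs to be exported somehow 
(a web server or NFS share, for instance).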

> These days with containers I just run emerge on a few at a time and I
> don't worry about it (still with /var/tmp on a tmpfs in each).  Now, I
> wouldn't go building chromium and libreoffice in multiple containers
> at once that way, but for typical server-like guests very few packages
> use THAT much RAM.

That would be a good way to stress-test hardware though. :)
Especially to check if the cooling is working as expected.

--
Joost
