Re: [PVE-User] cores and threads Re: pve-user Digest, Vol 109, Issue 13

2017-04-18 Thread Jeremy McCoy
Former VMware employee here. Most of the concepts should carry over to KVM:

Contrary to what you might think, in most cases, giving each VM fewer cores on an
over-provisioned system will lead to an increase in performance. This is because,
for a VM with more than one core to process anything (even a single-core task),
the guest has to wait for a clock cycle to become available on all of its cores
at once. Many guests will panic or behave strangely if their vCPUs are allowed to
run out of step with one another. Furthermore, if one or more other guests with
fewer cores are also queuing up tasks and can fit into a smaller number of cores,
they may get priority and force the higher-core guest to wait longer, since they
are a more efficient use of CPU time from the host's perspective. This is referred
to as co-stop, and it is one of several silent killers of hypervisor performance.
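
As a rough sketch of what that looks like on Proxmox/KVM (VMID 101 is just a
placeholder), you can trim a guest's vCPU count and then watch steal time inside
the guest to see whether scheduling contention actually drops:

  # on the host: shrink the guest from 4 vCPUs to 2 (takes effect on restart)
  qm set 101 --cores 2

  # inside the guest: the "st" column is CPU time stolen by the hypervisor;
  # if it falls after the change, scheduling contention was likely the issue
  vmstat 1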

Hyperthreading (or "fake cores," as I lovingly call them) should, in my opinion,
never be passed through to guests on a hypervisor, especially if the host is
over-provisioned. The host can make better use of hyperthreading than the guests
can, and a guest that queues up work for more cores than physically exist can
cause even worse co-stop than you would normally see from simply having too many
multi-core VMs.
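
If you want to see how many real cores you have versus hyperthreads, plain Linux
tooling on the host is enough (nothing Proxmox-specific here):

  # logical CPUs that share a CORE number are hyperthread siblings,
  # i.e. they are not extra physical cores
  lscpu --extended=CPU,CORE,SOCKET,ONLINE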

As for NUMA, you are unlikely to see much performance difference on your hardware.
NUMA mainly matters on multi-socket hosts where guests/applications share memory
across sockets. In that case, you want either to size a guest so its cores (and
memory) fit within one socket, or to split the guest's cores/sockets/memory evenly
across as few physical sockets as possible. The more physical sockets involved,
the more latency is added, because memory has to be copied over a slower
interconnect whenever a process needs to access active memory that lives on a
neighboring NUMA node. On a single-socket system like yours, enabling NUMA mostly
just unlocks features like dynamically adding RAM and cores to the guest; it will
not have much impact on performance.
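
For what it's worth, you can confirm the host's NUMA layout and, if you do want
the hotplug features, turn on NUMA for a guest. This is only a sketch (VMID 101
is a placeholder), not a tuning recommendation for your box:

  # show how many NUMA nodes the host really has (one, on a single socket)
  numactl --hardware

  # enable NUMA emulation for the guest; memory/CPU hotplug generally expects it
  qm set 101 --numa 1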

My advice would be to add more VMs with as few cores as possible and put a load
balancer like HAProxy in front of them where applicable.
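
To make that concrete, a minimal HAProxy config spreading traffic over two small
web VMs could look something like this (names and addresses are invented for
illustration):

  defaults
      mode tcp
      timeout connect 5s
      timeout client  30s
      timeout server  30s

  frontend web_in
      bind *:80
      default_backend web_vms

  backend web_vms
      balance roundrobin
      server web1 192.168.1.11:80 check
      server web2 192.168.1.12:80 check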

Again, that is the knowledge I gleaned from ESXi while doing performance tuning
on large systems. If anyone has contrary information about how KVM does things,
I would be very interested to learn more.

 - Jeremy

- Original Message -
From: "Michael Peele" 
To: pve-user@pve.proxmox.com
Sent: Tuesday, April 18, 2017 11:07:19 AM
Subject: [PVE-User] cores and threads Re: pve-user Digest, Vol 109, Issue 13

The common thinking is: do not assign more than N-1 cores to any one VM (so the
host itself always has a core free). If you have multiple VMs, what you can
assign really varies a lot. You might try giving the high-CPU one 4 cores, the
moderate one 3, and the low ones 2 each. Yes, that is 9. You can oversubscribe,
just don't give any single VM too many (N-1; in your case 7, though only 4 of
those are real cores, which makes it more complex).
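
If you want to try that split, it is just a per-VM setting (the VMIDs below are
placeholders for your own):

  # high-CPU guest gets 4 cores, the moderate one 3, the low ones 2 each
  qm set 101 --cores 4
  qm set 102 --cores 3
  qm set 103 --cores 2
  qm set 104 --cores 2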

It is very complex with real cores, hyperthread cores, CPU load, memory
load, etc.

If a VM is maxing out, give it another core, if you want.  If a VM is very
low on CPU, maybe remove a core, but if it never uses much, it won't
matter.  Assigning cores is more about CPU prioritization and VM
prioritization.
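
Core count is not the only prioritization knob in Proxmox, either. As a sketch
(VMIDs and values are placeholders), you can also weight or cap a guest's share
of host CPU time:

  # double this guest's scheduling weight relative to the default
  qm set 101 --cpuunits 2048

  # hard-cap another guest at the equivalent of 2 full cores
  qm set 102 --cpulimit 2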

Maybe you determine that DNS, while normally having low usage, needs to absorb
the occasional DoS, so you give DNS 7 cores even though it normally only needs 1.
Maybe the VM sitting at 100% CPU on 2 cores is a "low" priority test system, so
you won't give it more than 2 just because.

Thus, nobody else can tell you how to balance your particular combination of
supply (RAM, CPU, disk, network), demand, and priorities.

Regards.



On Sun, Apr 16, 2017 at 6:00 AM,  wrote:

> Date: Sat, 15 Apr 2017 22:01:32 +0200
> From: Miguel González 
> To: "pve-user@pve.proxmox.com" 
> Subject: Re: [PVE-User] cores and threads
>
> No one can help me out?
>
>
> On 04/13/17 4:55 PM, Miguel González wrote:
> > Hi,
> >
> >   I have a Proxmox 4 

[PVE-User] ZFS+GlusterFS LXC image creation trouble.

2016-06-10 Thread Jeremy McCoy
Hi all,

I am working on getting shared storage working on my hosts. My current issue is
that creating an LXC container on my GlusterFS mount is failing with the
following error:
> Warning, had trouble writing out superblocks.
> TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/gluster/images/102/vm-102-disk-1.raw' failed: exit code 144

Running that command manually on the host still generates the warning, but
otherwise successfully creates the image that I am able to mount. Is it
possible/safe to alter whatever script generates the container images so that it
does not give up here?
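
In case anyone wants to reproduce it, the manual run is roughly the following
(the image size is just an example):

  # pre-allocate the raw image and format it the same way the task does
  truncate -s 8G /mnt/gluster/images/102/vm-102-disk-1.raw
  mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/gluster/images/102/vm-102-disk-1.raw

  # sanity check: loop-mount it to confirm the filesystem is usable
  mkdir -p /mnt/test
  mount -o loop /mnt/gluster/images/102/vm-102-disk-1.raw /mnt/test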

My first attempt at getting this working using the GUI to mount the storage
worked, but the performance was abysmal once I had several machines running
(idle) on it. I dug around in the GlusterFS documentation and various blog posts
about running GlusterFS on ZFS, and decided that manually mounting the storage
would give me better performance. 

My host config is at pastebin.com/cSQX2RDK, and the GlusterFS config is
different on each host so that the local brick is always type storage/posix. If
I have done something terribly wrong (which is entirely possible), please let me
know.
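
For context, the manual setup boils down to the following (volume and storage
names here are placeholders; the real values are in the pastebin):

  # mount the Gluster volume on each host, e.g. from /etc/fstab
  mount -t glusterfs localhost:/gv0 /mnt/gluster

  # /etc/pve/storage.cfg: expose the mountpoint as shared directory storage
  dir: gluster-manual
      path /mnt/gluster
      content images,rootdir
      shared 1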

Thanks,
Jeremy


Re: [PVE-User] Ceph + ZFS?

2016-06-02 Thread Jeremy McCoy
Hi Lindsay,

Thanks for looking into that. I will see if I can get GlusterFS working. The
official Gluster documentation seems pretty straightforward, but the Proxmox wiki
says little more than "Proxmox supports GlusterFS" as far as I can tell. Would
you mind sharing the Proxmox-specific steps you took to get it working?
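
From the storage documentation, my guess is that it comes down to an entry like
the one below (server address and volume name are made up), but I would like to
confirm that against what you actually did:

  # /etc/pve/storage.cfg: native GlusterFS storage (KVM images via gfapi)
  glusterfs: gv0-store
      server 10.0.0.1
      volume gv0
      content images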

Much appreciated,
 - Jeremy

> On June 1, 2016 at 7:39 PM Lindsay Mathieson 
> wrote:
> 
> On 2 June 2016 at 07:46, Lindsay Mathieson 
> wrote:
> 
> > However, there is also the gluster fuse mount that Proxmox automatically
> > creates (/mnt/pve/); you should be able to set that up as shared
> > directory storage and use that with LXC. I'll have a test of that myself
> > later today.
>
> I can confirm that LXC images work fine off the gluster fuse mount
> and they can be freely migrated (once shut down). I haven't tested, but I
> imagine that fuse performance isn't as good as native gfapi.
>
> Have you considered using KVM images? Maybe slightly more overhead, but
> you get live migration, snapshots, and better disk performance with
> gluster.
> 
> -- 
> Lindsay


[PVE-User] Ceph + ZFS?

2016-06-01 Thread Jeremy McCoy
Hi all,

I am new here and am working on designing a Proxmox cluster. Just wondering if
anyone has tried doing Ceph on ZFS (instead of XFS on LVM or whatever pveceph
sets up) and, if so, how you went about implementing it. I have 4 hosts that
each have 1 spinner and 1 SSD to offer to the cluster. 

Are there any pitfalls to be aware of here? My goal is to mainly run LXC
containers (plus a few KVM VMs) on distributed storage, and I was hoping to take
advantage of ZFS's caching, compression, and data integrity features. I am also
open to doing GlusterFS or something else, but it looked like Proxmox does not
support LXC containers running on that yet.
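
For context, the ZFS side I have in mind per host is nothing exotic, roughly the
following (device names are placeholders):

  # one spinner as the data vdev, the SSD as an L2ARC read cache
  zpool create tank /dev/sdb cache /dev/sdc

  # lz4 compression is cheap and usually a net win
  zfs set compression=lz4 tank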

Thanks,
Jeremy