Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Rainer Krienke
Hello, thanks for your answer. If I understand you correctly, then iothreads can only help if the VM has more than one disk, hence your proposal to build a raid0 on two rbd devices. The disadvantage of this solution would of course be that disk usage would be doubled. A fileserver VM I manage
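As a rough sketch of the striping idea discussed above, assuming the VM has been given two additional Ceph-backed disks that show up in the guest as /dev/sdb and /dev/sdc (device names and mount point are assumptions, not taken from the original mail), the guest-side raid0 could look like:

  # create a striped (raid0) md device across the two RBD-backed disks
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
  # put a filesystem on the stripe and mount it
  mkfs.xfs /dev/md0
  mkdir -p /mnt/data && mount /dev/md0 /mnt/data

Each disk then gets its own iothread on the KVM side, while the guest spreads I/O across both.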

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Rainer Krienke
Hello Alwin, thank you for your reply. The test VM's config is this one. It only has the system disk as well as a disk I added for my test, writing on the device with dd: agent: 1 bootdisk: scsi0 cores: 2 cpu: kvm64 ide2: none,media=cdrom memory: 4096 name: pxaclient1 net0:
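For comparison with the (truncated) config above, a test disk entry with an iothread and an explicit cache mode would look roughly like the following; the storage name, VM ID and size are assumptions:

  scsihw: virtio-scsi-single
  scsi1: ceph-rbd:vm-101-disk-1,cache=none,iothread=1,size=100G

iothread only takes effect with virtio-blk disks or with the VirtIO SCSI single controller, which is why scsihw is shown as virtio-scsi-single here.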

[PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Rainer Krienke
Hello, I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic disks. The pool with rbd images for disk storage is erasure coded with a 4+2 profile. I ran some performance tests since I noticed that there seems to
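The exact test commands are cut off in this excerpt; as an illustration of the kind of measurement being discussed, a sequential write from inside a VM and a raw pool benchmark from a cluster node could look like this (device and pool names are assumptions):

  # inside the VM: sequential write to the extra test disk
  dd if=/dev/zero of=/dev/sdb bs=4M count=2500 oflag=direct

  # on a cluster node: 60-second write benchmark against the RBD data pool
  rados bench -p rbd-data 60 write --no-cleanup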

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Alwin Antreich
Hello Rainer, On Tue, Mar 17, 2020 at 02:04:22PM +0100, Rainer Krienke wrote: > Hello, > > I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb > Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic > disks. The pool with rbd images for disk storage is erasure

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Eneko Lacunza
Hi, you can try to enable IO threads and assign multiple Ceph disks to the VM, then build some kind of raid0 to increase performance. Generally speaking, an SSD-based Ceph cluster is considered to perform well when a VM gets about 2000 IOPS, and factors like single-thread CPU performance, network
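A minimal sketch of that suggestion from the Proxmox side, with the VM ID, storage name and disk sizes made up for illustration:

  # one virtio-scsi controller per disk, so each disk gets its own iothread
  qm set 101 --scsihw virtio-scsi-single
  # add two extra Ceph-backed disks with iothread enabled (size in GB)
  qm set 101 --scsi1 ceph-rbd:100,iothread=1
  qm set 101 --scsi2 ceph-rbd:100,iothread=1

Inside the guest the two disks can then be striped together (mdadm raid0 or LVM striping) as discussed elsewhere in the thread.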

Re: [PVE-User] [SPAM] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Ralf Storm
Hello Rainer, same issue here: a 7-node cluster with Proxmox and Ceph on the same nodes, separate 10Gb for Ceph and 10Gb for VMs, not erasure coded, about 50 SSDs. Performance for backups, recovery etc. is almost 1000 MByte/s; several VMs accessing data at the same time is raising the performance

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Alexandre DERUMIER
>>What rates do you find on your proxmox/ceph cluster for single VMs? With replica x3 and 4k-block random read/write with a big queue depth, I'm around 7iops read && 4iops write (per VM disk if iothread is used; the limitation is the CPU usage of 1 thread/core per disk). With queue depth=1,
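For context, a 4k random-write test of the shape described here is typically run with fio; the target device, runtime and queue depth below are assumptions for illustration:

  fio --name=randwrite --filename=/dev/sdb --rw=randwrite \
      --bs=4k --ioengine=libaio --direct=1 --iodepth=64 \
      --runtime=60 --time_based --group_reporting

Setting --iodepth=1 instead reproduces the queue-depth=1 case mentioned above.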

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Alwin Antreich
On Tue, Mar 17, 2020 at 05:07:47PM +0100, Rainer Krienke wrote: > Hello Alwin, > > thank you for your reply. > > The test VMs config is this one. It only has the system disk as well a > disk I added for my test writing on the device with dd: > > agent: 1 > bootdisk: scsi0 > cores: 2 > cpu:

Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-17 Thread Frank Thommen
Dear all, On 13.03.20 14:13, Frank Thommen wrote: On 3/12/20 7:58 PM, Frank Thommen wrote: On 3/12/20 5:57 PM, Dietmar Maurer wrote: I fear this might be a container-related issue but I don't understand it and I don't know if there is a solution or a workaround. Any help or hint is highly

Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-17 Thread Dietmar Maurer
> Does anyone have an assessment of the risk we would run? I still don't > understand the security implications of the mapping of higher UIDs. > However this is quickly becoming a major issue for us. The risk is that it is not supported by us. Thus, we do not test that and I do not know what
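The "mapping of higher UIDs" being discussed is configured with custom lxc.idmap entries in the container config plus matching /etc/subuid and /etc/subgid entries on the host. The ranges below are made-up values for illustration only, not Frank's actual setup:

  # /etc/pve/lxc/<CTID>.conf: keep the default 0-65535 mapping and add a higher range
  lxc.idmap: u 0 100000 65536
  lxc.idmap: g 0 100000 65536
  lxc.idmap: u 70000 270000 10000
  lxc.idmap: g 70000 270000 10000

  # /etc/subuid and /etc/subgid on the host must delegate both ranges to root
  root:100000:65536
  root:270000:10000

As Dietmar notes above, such custom mappings fall outside what Proxmox tests and supports.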

Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-17 Thread Frank Thommen
On 17.03.20 09:33, Dietmar Maurer wrote: Does anyone have an assessment of the risk we would run? I still don't understand the security implications of the mapping of higher UIDs. However this is quickly becoming a major issue for us. The risk is that it is not supported by us. Thus, we do