Hello,
thanks for your answer,
if I understand you correctly, then iothreads can only help if the VM
has more than one disk, hence your proposal to build a raid0 on two rbd
devices. The disadvantage of this solution would of course be that disk
usage would be doubled.
A fileserver VM I manage
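Not from the thread itself, but the raid0-over-two-rbd-devices idea discussed above could be sketched inside the guest roughly like this (the device names /dev/sdb and /dev/sdc and the mount point are assumptions, they depend on how the extra disks show up in the VM):

```shell
# Inside the VM: stripe the two extra RBD-backed virtual disks into one
# md device so writes are spread across both (assumed names /dev/sdb, /dev/sdc).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the striped device and mount it.
mkfs.ext4 /dev/md0
mkdir -p /mnt/striped
mount /dev/md0 /mnt/striped
```

Note that plain striping (raid0) does not by itself duplicate data; capacity is only doubled if both member disks are provisioned at full size.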
Hello Alwin,
thank you for your reply.
The test VM's config is this one. It only has the system disk as well as
a disk I added for my test, writing to the device with dd:
agent: 1
bootdisk: scsi0
cores: 2
cpu: kvm64
ide2: none,media=cdrom
memory: 4096
name: pxaclient1
net0:
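For reference (not part of the quoted config): an iothread is enabled per disk in the VM config and requires the VirtIO SCSI single controller; the storage ID and disk name below are assumptions:

```
scsihw: virtio-scsi-single
scsi1: rbd_pool:vm-101-disk-1,size=100G,iothread=1
```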
Hello,
I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb
Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic
disks. The pool with rbd images for disk storage is erasure coded with a
4+2 profile.
I ran some performance tests since I noticed that there seems to
Hello Rainer,
On Tue, Mar 17, 2020 at 02:04:22PM +0100, Rainer Krienke wrote:
> Hello,
>
> I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb
> Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic
> disks. The pool with rbd images for disk storage is erasure
Hi,
You can try to enable IO threads and assign multiple Ceph disks to the
VM, then build some kind of raid0 to increase performance.
Generally speaking, an SSD-based Ceph cluster is considered to perform
well when a VM gets about 2000 IOPS, and factors like single-thread CPU
performance, network
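To reproduce a figure like the ~2000 IOPS mentioned above, a 4k random-write fio run against the test disk could look like this (the target device /dev/sdb, runtime, and queue depth are assumptions, not parameters from this thread):

```shell
# 4k random writes, direct I/O, queue depth 32, for 60 seconds.
# WARNING: writing to a raw device destroys its contents.
fio --name=randwrite-test \
    --filename=/dev/sdb \
    --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k \
    --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based \
    --group_reporting
```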
Hello Rainer,
same issue here: a 7-node cluster with Proxmox and Ceph on the same
nodes, separate 10Gb for Ceph and 10Gb for VMs,
not erasure coded, about 50 SSDs. Performance for backups, recovery etc.
is almost 1000 MByte/s; several VMs accessing data at the same time
raises the performance
>>What rates do you find on your proxmox/ceph cluster for single VMs?
with replica x3 and 4k-block random read/write with a big queue depth, I'm
around 7 iops read and 4 iops write
(per VM disk if iothread is used; the limitation is the CPU usage of 1
thread/core per disk)
with queue depth=1,
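The queue-depth behaviour described above follows Little's law: sustained IOPS is roughly the number of in-flight requests divided by the average per-request latency. A small sketch (the latency figures are illustrative assumptions, not measurements from this thread):

```python
def iops(queue_depth: int, avg_latency_s: float) -> float:
    """Little's law: in-flight requests = throughput * latency,
    so throughput (IOPS) = queue_depth / avg_latency_s."""
    return queue_depth / avg_latency_s

# With an assumed ~0.5 ms average write latency, a single outstanding
# request caps out around 2000 IOPS regardless of cluster size:
print(round(iops(1, 0.0005)))   # -> 2000
# A deeper queue hides the latency and scales IOPS up:
print(round(iops(32, 0.0005)))  # -> 64000
```

This is why a dd with queue depth 1 sees far lower numbers than a parallel benchmark on the same pool.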
On Tue, Mar 17, 2020 at 05:07:47PM +0100, Rainer Krienke wrote:
> Hello Alwin,
>
> thank you for your reply.
>
> The test VM's config is this one. It only has the system disk as well
> as a disk I added for my test, writing to the device with dd:
>
> agent: 1
> bootdisk: scsi0
> cores: 2
> cpu:
Dear all,
On 13.03.20 14:13, Frank Thommen wrote:
On 3/12/20 7:58 PM, Frank Thommen wrote:
On 3/12/20 5:57 PM, Dietmar Maurer wrote:
I fear
this might be a container-related issue but I don't understand it and I
don't know if there is a solution or a workaround.
Any help or hint is highly
> Does anyone have an assessment of the risk we would run? I still don't
> understand the security implications of the mapping of higher UIDs.
> However this is quickly becoming a major issue for us.
The risk is that it is not supported by us. Thus, we do not
test that and I do not know what
On 17.03.20 09:33, Dietmar Maurer wrote:
Does anyone have an assessment of the risk we would run? I still don't
understand the security implications of the mapping of higher UIDs.
However this is quickly becoming a major issue for us.
The risk is that it is not supported by us. Thus, we do
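For background, the "mapping of higher UIDs" under discussion is configured via lxc.idmap lines in the container config together with matching /etc/subuid and /etc/subgid entries; a sketch only, with the container ID and ranges as assumptions:

```
# /etc/pve/lxc/100.conf -- keep the default 0..65535 mapping,
# then additionally map container UIDs/GIDs 200000..200999 straight through:
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
lxc.idmap: u 200000 200000 1000
lxc.idmap: g 200000 200000 1000

# /etc/subuid and /etc/subgid must allow root to delegate the extra range:
root:200000:1000
```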