>>AFAIK, it will be a challenge to get more than 2000 IOPS from one VM
>>using Ceph...

With iodepth=1 (single queue), you'll indeed be limited by latency, and you 
shouldn't be able to reach more than 4000-5000 IOPS.
(This depends mainly on the CPU frequency on the client + the CPU frequency on 
the cluster + the network.)

But with more parallel reads/writes, you should be able to reach 70-80,000 IOPS 
per disk without any problem.
(If you need more, you can use multiple disks with iothreads; I was able to 
scale up to 500-600,000 IOPS with 5-6 disks.)
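As a rough sketch, a fio job file along these lines can show the latency-bound vs. parallel difference against an RBD image (the pool, image, and client names here are placeholders, not from this thread):

```ini
# Hypothetical fio job: compare a latency-bound queue depth (iodepth=1)
# with a parallel one (iodepth=64) on the same RBD image.
# Pool/image/client names are illustrative placeholders.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=bench-image
rw=randread
bs=4k
runtime=60
time_based=1

[latency-bound]
iodepth=1

[parallel]
stonewall
iodepth=64
```

The `stonewall` option makes the second job wait for the first, so the two results are not measured concurrently.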

Depending on your workload, you can enable writeback; it improves performance of 
sequential writes of small coalesced blocks.
(It groups them into bigger blocks before sending them to Ceph.)
But currently (Nautilus), enabling writeback slows down reads.
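For reference, in Proxmox this is the per-disk cache option in the VM config; a minimal sketch (storage, volume, and size are placeholder values):

```ini
# Hypothetical disk line in /etc/pve/qemu-server/<vmid>.conf:
# cache=writeback enables the librbd writeback cache for this disk,
# iothread=1 gives the disk its own I/O thread (this generally assumes
# the virtio-scsi-single controller is selected for the VM).
scsi0: ceph-pool:vm-100-disk-0,cache=writeback,iothread=1,size=32G
```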

With Octopus (currently in testing), this is solved, and you can always enable 
writeback.

Octopus also has other optimisations, and its writeback is able to group 
random non-coalesced blocks as well.

See my latest benchmarks:

Here are some IOPS results with 1 VM - 1 disk - 4k blocks, iodepth=64, librbd, no …

                  nautilus-cache=none   nautilus-cache=writeback   octopus-cache=none   octopus-cache=writeback
randread 4k            62.1k                    25.2k                    61.1k                  60.8k
randwrite 4k           27.7k                    19.5k                    34.5k                  53.0k
seqwrite 4k            7850                     37.5k                    24.9k                  82.6k
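The cache=none vs. cache=writeback columns correspond to toggling the librbd client-side cache; a sketch of the client-side ceph.conf setting involved (values shown are illustrative, not the benchmark's actual config):

```ini
# Hypothetical client-side ceph.conf snippet: toggling "rbd cache"
# switches between the cache=none and cache=writeback cases above.
[client]
rbd cache = true
# Stay in writethrough mode until the guest sends its first flush,
# to avoid data loss with guests that never flush.
rbd cache writethrough until flush = true
```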

----- Original Message -----
From: "Eneko Lacunza" <elacu...@binovo.es>
To: "proxmoxve" <pve-user@pve.proxmox.com>
Sent: Wednesday, June 10, 2020 08:30:08
Subject: Re: [PVE-User] CEPH performance

Hi Marco, 

On 9/6/20 at 19:46, Marco Bellini wrote: 
> Dear All, 
> I'm trying to use Proxmox on a 4-node cluster with Ceph. 
> Every node has a 500G NVMe drive, with a dedicated 10G Ceph network with 
> 9000-byte MTU. 
> Despite the NVMe warp speed I can reach when it is used as an LVM volume, as soon as I 
> convert it into a 4-OSD Ceph setup, performance is very very poor. 
> Is there any trick to have Ceph in Proxmox working fast? 
What is "very very poor"? What specs do the Proxmox nodes have (CPU, RAM)? 

AFAIK, it will be a challenge to get more than 2000 IOPS from one VM 
using Ceph... 

How are you performing the benchmark? 


Zuzendari Teknikoa / Director Técnico 
Binovo IT Human Project, S.L. 
Telf. 943569206 
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa) 

pve-user mailing list 
