>>And when I benchmark it I see horribly low performance and a clear
>>bottleneck at the ceph-osd process: it consumes about 110% CPU and gives
>>me the following results: 127 IOPS in a fio benchmark (4k randwrite) on the rbd
>>device, while a rados benchmark gives me ~21 IOPS and 76 MB/s (write).

On a 2x Xeon 3.1 GHz, 10 cores each (20 cores total), I can reach around 400,000 IOPS
4k read (at 80% total CPU), or 70,000 IOPS 4k write (1x replication) at 100% total CPU.

This is with jemalloc, and with debug logging and cephx disabled.
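For reference, a minimal ceph.conf sketch for that kind of setup might look like the
following (the exact debug subsystems and the jemalloc library path are assumptions,
adjust for your build and distribution):

[global]
# disable cephx authentication
auth cluster required = none
auth service required = none
auth client required = none
# silence the costliest debug/log subsystems
debug osd = 0/0
debug ms = 0/0
debug filestore = 0/0

# jemalloc is typically enabled at build time or preloaded, e.g.:
# LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ceph-osd -i 5 -f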


----- Original Message -----
From: "George Shuklin" <george.shuk...@gmail.com>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Tuesday, 28 June 2016 17:23:02
Subject: [ceph-users] CPU use for OSD daemon

Hello. 

I'm testing different configurations for Ceph. I found that OSDs are
REALLY hungry for CPU.

I've created a tiny pool with size 1, backed by a single OSD on a fast Intel
SSD (2500-series), on an old Dell server (R210) with a Xeon E3-1230 V2 @ 3.30GHz.

And when I benchmark it I see horribly low performance and a clear
bottleneck at the ceph-osd process: it consumes about 110% CPU and gives
me the following results: 127 IOPS in a fio benchmark (4k randwrite) on the rbd
device, while a rados benchmark gives me ~21 IOPS and 76 MB/s (write).
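
(The numbers above would come from commands roughly like the ones below; the pool,
image and device names are placeholders, and rados bench writes 4 MB objects with
16 concurrent ops by default, which is roughly consistent with ~21 IOPS at ~76 MB/s.)

rbd map fast/test1   # map the image; pool/image names are examples
fio --name=randwrite --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

rados bench -p fast 60 write   # defaults: 4 MB objects, 16 concurrent ops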

Is this normal CPU utilization for the osd daemon for such tiny performance?

Relevant part of the crush map: 

rule rule_fast {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take fast
    step chooseleaf firstn 0 type osd
    step emit
}

root fast2500 {
    id -17
    alg straw
    hash 0  # rjenkins1
    item pp7 weight 1.0
}

host pp7 {
    id -11
    alg straw
    hash 0  # rjenkins1
    item osd.5 weight 1.0
}



device 5 osd.5 
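
(For completeness, a pool pinned to this rule would be created with something like the
commands below; the pool name and PG count are only examples.)

ceph osd pool create fastpool 64 64 replicated rule_fast
ceph osd pool set fastpool size 1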

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
