Re: [ceph-users] Fwd: [ceph bad performance], can't find a bottleneck

2018-03-13 Thread Sergey Kotov
Hi, Maged. Not a big difference in either case. Performance of the 4-node pool, with 5x PM863a per node, is: at 4k block size, 33-37k IOPS with krbd at queue depth 128 vs 42-51k IOPS at queue depth 1024 (fio numjobs 128/256/512). The same thing happens when we try to increase the rbd workload: 3 rbd images together get the same total IOPS. Dead end …
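[A minimal sketch of the kind of fio run the numbers above imply, not Sergey's exact command; the device path, iodepth, runtime, and randwrite pattern are assumptions for illustration.]

# 4k random test against a mapped krbd device (assumed /dev/rbd0)
fio --name=4k-rand --filename=/dev/rbd0 \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --numjobs=128 --iodepth=32 \
    --runtime=60 --time_based --group_reporting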

Re: [ceph-users] Fwd: [ceph bad performance], can't find a bottleneck

2018-03-12 Thread Maged Mokhtar
Hi, try increasing the queue depth from the default 128 to 1024: rbd map image-XX -o queue_depth=1024. Also, if you run multiple rbd images / fio tests, do you get higher combined performance? Maged. On 2018-03-12 17:16, Sergey Kotov wrote: > Dear moderator, I subscribed to the ceph list today,
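[A hedged sketch of Maged's suggestion; the pool/image names and fio parameters below are placeholders, only the queue_depth mapping option comes from the mail itself.]

# Map images with a larger krbd queue depth (default is 128)
rbd map rbd/image-01 -o queue_depth=1024
rbd map rbd/image-02 -o queue_depth=1024

# One fio process, one job per mapped device, to check combined throughput.
# Options before the first --name are global and apply to both jobs.
fio --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=64 --runtime=60 --time_based --group_reporting \
    --name=img1 --filename=/dev/rbd0 \
    --name=img2 --filename=/dev/rbd1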