Hi Somnath,

I have just tried mapping 2 rbd volumes with (rbd map -o noshare rbdvolume -p 
pool) (kernel 3.14), then ran a fio benchmark on both volumes at the same 
time, but it doesn't seem to help.

The kworker process is still at 100%, and I get 25000 iops on each rbd 
volume.
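
For reference, a rough sketch of this test (image names, pool name, and the 
resulting /dev/rbd0 and /dev/rbd1 device paths are illustrative assumptions):

```shell
# Map each image with its own client instance so the two volumes do not
# share a single connection to the cluster (kernel rbd 'noshare' option).
rbd map -o noshare rbdvolume1 -p pool
rbd map -o noshare rbdvolume2 -p pool

# Run a 4k random-read fio job against both mapped devices at the same time.
fio --name=vol1 --filename=/dev/rbd0 --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based &
fio --name=vol2 --filename=/dev/rbd1 --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based &
wait
```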

----- Original Message ----- 

From: "Somnath Roy" <[email protected]> 
To: "Alexandre DERUMIER" <[email protected]>, "Christoph Hellwig" 
<[email protected]> 
Cc: "Ceph Devel" <[email protected]> 
Sent: Sunday, October 26, 2014 20:08:42 
Subject: RE: krbd blk-mq support ? 

Alexandre, 
Have you tried mapping different images on the same m/c with the 'noshare' 
map option? 
If not, it will not scale with an increasing number of images (and thus mapped 
rbds) on a single m/c, as they will share the same connection to the cluster. 

Thanks & Regards 
Somnath 

-----Original Message----- 
From: [email protected] 
[mailto:[email protected]] On Behalf Of Alexandre DERUMIER 
Sent: Sunday, October 26, 2014 6:46 AM 
To: Christoph Hellwig 
Cc: Ceph Devel 
Subject: Re: krbd blk-mq support ? 

Hi, 

some news: 

I have applied the patches successfully on top of the 3.18-rc1 kernel. 

But it doesn't seem to help in my case. 
(I think blk-mq is working, because I don't see any io schedulers on the rbd 
devices; blk-mq doesn't currently support them.) 
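
One way to check this (a sketch; the /dev/rbd0 device name is an assumption) 
is to read the scheduler attribute in sysfs, which lists no classic io 
schedulers for a blk-mq device:

```shell
# On a blk-mq device the classic io schedulers (cfq, deadline, noop) are
# not listed; the file shows "none" on these kernels.
cat /sys/block/rbd0/queue/scheduler
```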

My main problem is that I can't reach more than around 50000 iops on 1 machine, 

and the problem seems to be the kworker process stuck at 100% of 1 core. 

I have tried multiple fio processes on different rbd devices at the same time, 
and I'm always limited to 50000 iops. 

I'm sure that the ceph cluster is not the bottleneck, because if I launch 
another fio on another node at the same time, 

I can reach 50000 iops on each node, and both are limited by the kworker 
process. 
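
The per-thread CPU usage of the kernel workers can be watched while the 
benchmark runs, for example with pidstat (a sketch, assuming the sysstat 
package is installed):

```shell
# Report per-thread CPU usage once per second, five times, and keep only
# the kernel worker threads; a single kworker pinned near 100% of one
# core points at the serialization bottleneck.
pidstat -t 1 5 | grep -i kworker
```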


That's why I thought blk-mq could help, but it doesn't seem to be the case. 


Is this kworker CPU limitation a known bug? 

Regards, 

Alexandre 

----- Original Message ----- 

From: "Alexandre DERUMIER" <[email protected]> 
To: "Christoph Hellwig" <[email protected]> 
Cc: "Ceph Devel" <[email protected]> 
Sent: Friday, October 24, 2014 14:27:47 
Subject: Re: krbd blk-mq support ? 

>>If you're willing to experiment, give the patches below a try; note that 
>>I don't have a ceph test cluster available, so the conversion is 
>>untested. 

Ok, thanks! I'll try them and see if I can improve qemu performance on a 
single drive with multiple queues. 

----- Original Message ----- 

From: "Christoph Hellwig" <[email protected]> 
To: "Alexandre DERUMIER" <[email protected]> 
Cc: "Ceph Devel" <[email protected]> 
Sent: Friday, October 24, 2014 12:55:01 
Subject: Re: krbd blk-mq support ? 

If you're willing to experiment, give the patches below a try; note that I 
don't have a ceph test cluster available, so the conversion is untested. 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the 
body of a message to [email protected] More majordomo info at 
http://vger.kernel.org/majordomo-info.html 

