May I ask why you are using krbd with QEMU instead of librbd?
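With librbd, QEMU opens the image directly and neither the /dev/rbd mapping
nor the kernel client is involved. As a rough sketch (the auth id "admin" is a
placeholder; the rbd_vms/vm0 name just follows your report), the drive can
point straight at the image:

  # sketch: librbd-backed drive instead of a mapped /dev/rbdN block device
  qemu-system-x86_64 ... \
      -drive format=raw,file=rbd:rbd_vms/vm0:id=admin,if=virtio

For virsh-defined domains, the equivalent is a <disk type='network'> element
with <source protocol='rbd' name='rbd_vms/vm0'/> in the domain XML.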

On Fri, Jun 16, 2017 at 12:18 PM, 码云 <[email protected]> wrote:
> Hi All,
> Recently I ran into a problem and couldn't find anything that explains it.
>
> The ops process was as follows (a rough script of the steps is sketched
> below):
> Ceph 10.2.5 (jewel), QEMU 2.5.0, CentOS 7.2 x86_64
> create pool rbd_vms with 3 replicas, plus a cache tier pool, also with 3
> replicas.
> create 100 images in rbd_vms
> rbd map the 100 images to local devices, i.e. /dev/rbd0 ... /dev/rbd100
> dd if=/root/win7.qcow2  of=/dev/rbd0 bs=1M count=3000
> virsh define 100 VMs (vm0 ... vm100), each VM configured with one /dev/rbd
> device.
> virsh start the 100 VMs.
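>
> Roughly, as a script (a sketch; image names, size, and XML paths are
> illustrative):
>
> # sketch of the setup (names/sizes illustrative, mirrors the steps above)
> for i in $(seq 0 99); do
>     rbd create rbd_vms/vm$i --size 102400    # 100 GB image, size illustrative
>     rbd map rbd_vms/vm$i                     # appears as /dev/rbd$i
>     dd if=/root/win7.qcow2 of=/dev/rbd$i bs=1M count=3000
>     virsh define /etc/libvirt/qemu/vm$i.xml  # domain disk points at /dev/rbd$i
> done
> for i in $(seq 0 99); do virsh start vm$i & done; wait   # concurrent start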
>
> When the 100 VMs start concurrently, some of them hang.
> When running fio tests inside those VMs, some of them also hang.
>
> I checked ceph status, OSD status, the logs, etc.; everything looks the same
> as before.
>
> But checking the devices with iostat -dx 1, some rbd* devices look strange:
> %util is pegged at 100%, yet the read and write counts are all zero.
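>
> For devices stuck like that, the kernel client's in-flight requests can be
> inspected through debugfs (a sketch; assumes debugfs is mounted at
> /sys/kernel/debug):
>
> # requests krbd has issued to OSDs but not yet seen complete are listed here
> cat /sys/kernel/debug/ceph/*/osdc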
>
> I checked the virsh logs, the VM logs, etc., but found nothing useful.
>
> Can anyone help figure out what is going on? Do librbd, krbd, or something
> else need some arguments adjusted?
>
> Thanks All.
>
> ------------------
> Wang Yong
> Shanghai Datatom Information Technology Co., Ltd. - Chengdu R&D Center
> Mobile: 15908149443
> Email: [email protected]
> Address: Room 1409, Tower C, Xidun International Plaza, 666 Tianfu Avenue,
> Chengdu, Sichuan
>
>



-- 
Jason
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
