Thanks for tracking this down. It appears that libvirt needs to check
whether or not the fast-diff map is invalid before attempting to use
it. However, assuming the map is valid, I don't immediately see a
difference between the libvirt and "rbd du" implementations. Can you
provide a pastebin of a "debug rbd = 20" log dump for your 13-second case?
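
For reference, the invalid-map state shows up in "rbd info <image> --format json"
output as entries in the "flags" array (typically "object map invalid" and/or
"fast diff invalid"; exact wording may vary by Ceph release). A minimal sketch of
detecting that condition before trusting the fast-diff path — the sample JSON
below is illustrative, with invented field values:

```python
import json

# Illustrative sample of `rbd info <image> --format json` output; the
# image name and size are made up, but the "flags" field is where librbd
# reports per-image invalid-state flags.
sample = '''
{
  "name": "vm-disk-1",
  "size": 27487790694400,
  "features": ["layering", "exclusive-lock", "object-map", "fast-diff"],
  "flags": ["object map invalid", "fast diff invalid"]
}
'''

def needs_rebuild(info_json):
    """Return True if the image's object map or fast-diff data is flagged
    invalid, i.e. the fast-diff map cannot be trusted for allocation
    accounting until it is rebuilt."""
    info = json.loads(info_json)
    flags = set(info.get("flags", []))
    return bool(flags & {"object map invalid", "fast diff invalid"})

print(needs_rebuild(sample))  # → True for this sample
```

When such flags are present, "rbd object-map rebuild <pool>/<image>" (as Tomasz
ran below) clears the invalid state so the fast-diff path can be used again.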

On Fri, Jan 4, 2019 at 2:19 AM Tomasz Płaza <[email protected]> wrote:
>
> Konstantin,
>
> Thanks for the reply. I've managed to unravel it partially. Somehow (I did not
> look into the srpm) starting from this version, libvirt started to calculate real
> allocation if the fast-diff feature is present on an image. Doing "rbd object-map
> rebuild" on every image helped (I do not know why it was needed; it is a new
> cluster running Ceph version 12.2.7).
>
> Now the only problem is a 25T image on which "virsh vol-info" takes 13s ("rbd du"
> takes 1s), down from a few minutes before, so the questions remain:
>
> - why it happened,
>
> - how to monitor/foresee this,
>
> - how to improve "virsh vol-info" if "rbd du" takes less time to execute?
>
>
> On 03.01.2019 at 13:51, Konstantin Shalygin wrote:
>
> After updating to CentOS 7.6, libvirt was updated from 3.9 to 4.5.
> Executing "virsh vol-list ceph --details" makes libvirtd use 300% CPU
> for 2 minutes to show volumes on rbd. A quick peek at tcpdump shows it
> accessing rbd_data.* objects, which the previous version of libvirtd did
> not need. The Ceph version is 12.2.7.
>
> Any help will be appreciated
>
> There is nothing special in libvirt 4.5; I upgraded my hypervisors to this
> version and it still works flawlessly.
>
>
>
> k
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[1] https://github.com/libvirt/libvirt/blob/600462834f4ec1955a9a48a1b6b4a390b9c31553/src/storage/storage_backend_rbd.c#L386

-- 
Jason
