It's normal for them to be slower than other operations, but 18 minutes is *way* too long unless you're running on some very weak hardware (or using volumes in the terabytes, I guess?). In our systems, "slow" means about 2-5 seconds for a delete on an 80GB volume, and that only counts as "slow" because create operations happen in a fraction of a second. You might want to strace the rbd client to see whether you're hitting something bad (in my experience, the open-file limit, max_files, is the most likely one to hit during delete operations).
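If it helps, here's a quick sanity check I'd run before reaching for strace -- a hypothetical Python snippet (not part of any Ceph or OpenStack tooling) that compares a process's open file descriptors against its soft limit via Linux /proc. Point it at the PID of the qemu/rbd process doing the delete; the 90% threshold is just an arbitrary warning level, not anything official:

#!/usr/bin/env python3
# Hypothetical diagnostic, not part of any Ceph/OpenStack tooling: compare a
# process's open file descriptor count against its soft "Max open files"
# limit using Linux /proc. Run it against the PID of the qemu/rbd process
# doing the delete (needs to run as root or as the process owner).
import os
import sys

def open_fds(pid: int) -> int:
    # Every entry under /proc/<pid>/fd is one open descriptor.
    return len(os.listdir(f"/proc/{pid}/fd"))

def fd_soft_limit(pid: int) -> int:
    # /proc/<pid>/limits has a line like:
    # "Max open files            1024                 4096                 files"
    with open(f"/proc/{pid}/limits") as limits:
        for line in limits:
            if line.startswith("Max open files"):
                return int(line.split()[3])  # column 4 is the soft limit
    raise RuntimeError("Max open files line not found")

if __name__ == "__main__":
    pid = int(sys.argv[1])
    used, limit = open_fds(pid), fd_soft_limit(pid)
    print(f"pid {pid}: {used} of {limit} file descriptors in use")
    if used > 0.9 * limit:  # 90% is an arbitrary warning threshold
        print("WARNING: close to the open-file limit; deletes may crawl")

If that looks healthy, the strace output itself (lots of EMFILE errors or tight retry loops) is the next place I'd look.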
On Wed, Sep 24, 2014 at 1:29 PM, Abel Lopez <[email protected]> wrote:
> This is expected behavior, unfortunately.
> I spoke to the ceph guys about this last year. When you delete an ‘image’
> from a pool, the monitors (IIRC) don’t instantly know where all the segments
> are across all the OSDs, so it takes a while to find/delete each one.
>
> On Sep 24, 2014, at 12:45 PM, Jonathan Proulx <[email protected]> wrote:
>
>> Hi All,
>>
>> Just started experimenting with RBD (ceph) back end for ephemeral
>> storage on some of my compute nodes.
>>
>> I have it launching instances just fine, but when I try and delete
>> them libvirt shows the instances are gone, but OpenStack lists them
>> in 'deleting' state and the rbd process on the hypervisor spins madly
>> at about 300% cpu ...
>>
>> ...and now approx 18min later they have finally fully terminated, why so
>> long?
>>
>> -Jon
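To put some rough numbers behind the "find/delete each segment" explanation above, here's a back-of-the-envelope sketch (Python, not Ceph code). It assumes the default 4MB RBD object size and a made-up removal rate of 1000 objects/sec; substitute your own figures:

# Back-of-the-envelope sketch, not Ceph code: an RBD image is striped over
# many RADOS objects (4MB each by default), and a delete has to remove every
# one of them, so deletion time grows with image size.
MB_PER_GB = 1024
OBJECT_MB = 4            # default RBD object size; adjust if your pool differs
REMOVALS_PER_SEC = 1000  # made-up client removal rate, purely illustrative

def objects_in_image(size_gb: int, object_mb: int = OBJECT_MB) -> int:
    return size_gb * MB_PER_GB // object_mb

for size_gb in (20, 80, 500):
    n = objects_in_image(size_gb)
    print(f"{size_gb:>4} GB image -> {n:>7} objects "
          f"-> ~{n / REMOVALS_PER_SEC:.0f}s at {REMOVALS_PER_SEC} removals/s")

Even at tens of thousands of objects, though, that math doesn't get anywhere near 18 minutes on healthy hardware, which is why I'd still suspect something is throttling the client.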
