Hi Paul,

On 14 June 2018 at 00:33:09 CEST, Paul Emmerich <paul.emmer...@croit.io> wrote:
>2018-06-13 23:53 GMT+02:00 <c...@elchaka.de>:
>
>> Hi Yao,
>>
>> IIRC there is a *sleep* option which is useful when a delete operation is
>> being done by Ceph... sleep_trim or something like that.
>>
>
>you are thinking of "osd_snap_trim_sleep", which is indeed a very helpful
>option - but not for deletions.
>It rate limits snapshot deletion only.
>
Yes, that is what I meant :)

So there isn't a way to throttle normal deletes like this?
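
In case it is useful to whoever finds this thread later: the snapshot-trim sleep Paul mentions can typically be applied roughly like this (only a sketch; the 0.5 second value is purely illustrative, and as noted it throttles snapshot trimming, not normal deletes):

  # at runtime, on all OSDs (illustrative value)
  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.5'

  # or persistently in ceph.conf on the OSD nodes
  [osd]
      osd snap trim sleep = 0.5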

- Mehmet  
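
PS: For throttling the deletes themselves, the knob Jason mentions further down in the quoted thread is 'rbd_concurrent_management_ops'. A minimal sketch of where lowering it would go for the librbd client that Cinder uses (5 is just an example value; per Jason it limits per-image delete parallelism, not how many images are deleted at once):

  [client]
      rbd concurrent management ops = 5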

>Paul
>
>
>>
>> - Mehmet
>>
>> On 7 June 2018 at 04:11:11 CEST, Yao Guotao <yaoguo_...@163.com> wrote:
>>>
>>> Hi Jason,
>>>
>>> Thank you very much for your reply.
>>> I think the RBD trash is a good approach, but QoS in Ceph would be a
>>> better solution.
>>> I am looking forward to backend QoS in Ceph.
>>>
>>> Thanks.
>>>
>>>
>>> At 2018-06-06 21:53:23, "Jason Dillaman" <jdill...@redhat.com> wrote:
>>> >The 'rbd_concurrent_management_ops' setting controls how many
>>> >concurrent, in-flight RADOS object delete operations are possible per
>>> >image removal. The default is only 10, so with 10 images being
>>> >deleted concurrently, I am actually surprised that this blocked all
>>> >IO from your VMs.
>>> >
>>> >Adding support for limiting the maximum number of concurrent image
>>> >deletions would definitely be an OpenStack enhancement. There is an
>>> >open blueprint for optionally utilizing the RBD trash instead of
>>> >having Cinder delete the images [1], which would allow you to defer
>>> >deletions to whenever is convenient. Additionally, once Ceph adds
>>> >support for backend QoS (fingers crossed in Nautilus), we can change
>>> >librbd to flag all IO for maintenance activities to background (best
>>> >effort) priority, which might be the best long-term solution.
>>> >
>>> >[1] https://blueprints.launchpad.net/cinder/+spec/rbd-deferred-volume-deletion
>>> >
>>> >On Wed, Jun 6, 2018 at 6:40 AM, Yao Guotao <yaoguo_...@163.com> wrote:
>>> >> Hi Cephers,
>>> >>
>>> >> We use Ceph with OpenStack via the librbd library.
>>> >>
>>> >> Last week, my colleague deleted 10 volumes from the OpenStack
>>> >> dashboard at the same time; each volume had about 1 TB of used space.
>>> >> During this time, the OSD disks were busy, and there was no IO for
>>> >> the normal VMs.
>>> >>
>>> >> So, I want to know if there are any parameters that can be set to
>>> >> throttle this?
>>> >>
>>> >> I found a parameter related to RBD ops: 'rbd_concurrent_management_ops'.
>>> >> I am trying to figure out how it works in the code, and I find that the
>>> >> parameter only controls the asynchronous deletion of the objects of a
>>> >> single image.
>>> >>
>>> >> Besides, should this be controlled at the OpenStack Nova or Cinder
>>> >> layer?
>>> >>
>>> >> Thanks,
>>> >> Yao Guotao
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >--
>>> >Jason
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
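
PPS: For anyone who wants to try the deferral idea Jason describes above before the Cinder blueprint lands: on a Luminous or newer cluster the RBD trash can already be driven by hand with the rbd CLI. The pool/image names below are placeholders, and trashing Cinder-managed volumes outside of Cinder is shown only to illustrate the mechanism:

  # defer the deletion: move the image into the trash instead of removing it
  rbd trash mv mypool/myimage

  # later, when the cluster is idle, list and purge whatever is in the trash
  rbd trash ls mypool
  rbd trash rm mypool/<image-id>
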
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
