2018-06-13 23:53 GMT+02:00, Mehmet:
> Hi yao,
>
> IIRC there is a *sleep* option which is useful when a delete operation is
> being done by Ceph - sleep_trim or something like that.

You are thinking of "osd_snap_trim_sleep", which is indeed a very helpful
option - but not for deletions.
It rate limits snapshot trimming, not image removal.
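For reference, a minimal sketch of adjusting it - the 0.1 second value is
purely illustrative, not a recommendation:

    # at runtime, on all OSDs
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'

    # or persistently, in ceph.conf
    [osd]
    osd snap trim sleep = 0.1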
Hi yao,

IIRC there is a *sleep* option which is useful when a delete operation is
being done by Ceph - sleep_trim or something like that.

- Mehmet
On 7 June 2018 04:11:11 CEST, Yao Guotao wrote:
Hi Jason,

Thank you very much for your reply.
I think the RBD trash is a good way, but QoS in Ceph would be the better
solution. I am looking forward to backend QoS in Ceph.

Thanks.
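A minimal sketch of the trash-based approach, with placeholder pool and
image names - "rbd trash mv" returns quickly, and the expensive data
deletion can then be done off-peak:

    rbd trash mv mypool/myimage      # quick: only defers the deletion
    rbd trash ls mypool              # shows the IDs of deferred images
    rbd trash rm mypool/<image-id>   # the real delete; run this off-peak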
At 2018-06-06 21:53:23, "Jason Dillaman" wrote:
The 'rbd_concurrent_management_ops' setting controls how many
concurrent, in-flight RADOS object delete operations are possible per
image removal. The default is only 10, so given 10 images being
deleted concurrently, I am actually surprised that this blocked all IO
from your VMs.
Adding support for QoS in the backend is a longer-term goal; in the
meantime, moving the images to the RBD trash would let you defer the
actual deletions.
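A minimal sketch of turning that knob down, assuming you set it on the
client side (the value 2 is purely illustrative):

    # ceph.conf on the client that issues the removals
    [client]
    rbd concurrent management ops = 2

Lowering it trades slower image removals for less pressure on the OSD
disks; raising it does the opposite.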
Hi Cephers,

We use Ceph with OpenStack via the librbd library.
Last week, a colleague deleted 10 volumes from the OpenStack dashboard at the
same time; each volume had about 1 TB in use.
During this time, the OSD disks were busy and there was no I/O left for the
normal VMs.
So, I want to know if there is any way to throttle these delete operations.
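A low-tech mitigation, sketched with placeholder pool and volume names: run
the removals sequentially instead of all at once, so only one image's
delete operations are in flight at a time:

    # delete volumes one after another instead of concurrently
    for vol in vol01 vol02 vol03; do
        rbd rm mypool/"$vol"
    done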