Hello there,
I'm trying to reduce the impact of recovery on client operations and am using
mclock for this purpose. I've tested different weights for the queues but
didn't see any effect on real performance.

ceph version 12.2.8  luminous (stable)

Last tested config:
    "osd_op_queue": "mclock_opclass",
    "osd_op_queue_cut_off": "high",
    "osd_op_queue_mclock_client_op_lim": "0.000000",
    "osd_op_queue_mclock_client_op_res": "1.000000",
    "osd_op_queue_mclock_client_op_wgt": "1000.000000",
    "osd_op_queue_mclock_osd_subop_lim": "0.000000",
    "osd_op_queue_mclock_osd_subop_res": "1.000000",
    "osd_op_queue_mclock_osd_subop_wgt": "1000.000000",
    "osd_op_queue_mclock_recov_lim": "0.000000",
    "osd_op_queue_mclock_recov_res": "1.000000",
    "osd_op_queue_mclock_recov_wgt": "1.000000",
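
For reference, a minimal ceph.conf fragment with these values might look like the following (assuming the [osd] section is the right place for them, and noting that changing osd_op_queue itself is expected to require an OSD restart to take effect):

```ini
[osd]
osd_op_queue = mclock_opclass
osd_op_queue_cut_off = high
# client ops: high weight; limit 0 should mean "no limit"
osd_op_queue_mclock_client_op_res = 1.0
osd_op_queue_mclock_client_op_wgt = 1000.0
osd_op_queue_mclock_client_op_lim = 0.0
# subops: same priority as client ops
osd_op_queue_mclock_osd_subop_res = 1.0
osd_op_queue_mclock_osd_subop_wgt = 1000.0
osd_op_queue_mclock_osd_subop_lim = 0.0
# recovery: deliberately low weight relative to client ops
osd_op_queue_mclock_recov_res = 1.0
osd_op_queue_mclock_recov_wgt = 1.0
osd_op_queue_mclock_recov_lim = 0.0
```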

Is this feature really working? Am I doing something wrong?
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
