Hi,

I am reading some documentation about mClock and have two questions.

First, about the IOPS. Are those raw disk IOPS or some other kind of IOPS? And 
what are the assumptions behind them (block size, sequential or random 
reads/writes)?

And the second question: how does mClock calculate its profiles?

My lab cluster is running Quincy, and these are its mClock parameters:

"osd_mclock_max_capacity_iops_hdd": "450.000000",
"osd_mclock_profile": "balanced",

According to the documentation: 
https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/#balanced 
I am expecting to have:
"osd_mclock_scheduler_background_best_effort_lim": "999999",
"osd_mclock_scheduler_background_best_effort_res": "90",
"osd_mclock_scheduler_background_best_effort_wgt": "2",
"osd_mclock_scheduler_background_recovery_lim": "675",
"osd_mclock_scheduler_background_recovery_res": "180",
"osd_mclock_scheduler_background_recovery_wgt": "1",
"osd_mclock_scheduler_client_lim": "450",
"osd_mclock_scheduler_client_res": "180", "osd_mclock_scheduler_client_wgt": 
"1",

But what I get is:

"osd_mclock_scheduler_background_best_effort_lim": "999999",
"osd_mclock_scheduler_background_best_effort_res": "18",
"osd_mclock_scheduler_background_best_effort_wgt": "2",
"osd_mclock_scheduler_background_recovery_lim": "135",
"osd_mclock_scheduler_background_recovery_res": "36",
"osd_mclock_scheduler_background_recovery_wgt": "1",
"osd_mclock_scheduler_client_lim": "90",
"osd_mclock_scheduler_client_res": "36",
"osd_mclock_scheduler_client_wgt": "1",

These values seem very low compared to what my disk can actually handle. In 
fact, every value I get is exactly one fifth of the value I expected.
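
A quick ratio check over the numbers from the two dumps above makes that 
concrete:

# Ratio of expected vs. actual values, using the numbers pasted above.
expected = {"client_res": 180, "client_lim": 450,
            "background_recovery_res": 180, "background_recovery_lim": 675,
            "background_best_effort_res": 90}
actual   = {"client_res": 36, "client_lim": 90,
            "background_recovery_res": 36, "background_recovery_lim": 135,
            "background_best_effort_res": 18}
for key in expected:
    print(key, expected[key] / actual[key])  # prints 5.0 for every key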

Is this calculation the expected one, or did I miss something about how those 
profiles are populated?

Luis Domingues
Proton AG