osd_max_scrubs = 1 means each OSD can be involved in only a single scrub
operation at a time. There isn't a setting for the maximum number of scrubs
per cluster, only per OSD.
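
For what it's worth, you can confirm the value is live and push it to all
OSDs at runtime rather than waiting for a restart. A rough sketch (exact
syntax can vary a little between releases, so treat it as a starting point):

    # check the running value on one OSD via its admin socket
    ceph daemon osd.35 config get osd_max_scrubs

    # inject it on all OSDs at runtime (also set it under [osd] in ceph.conf
    # so it survives restarts)
    ceph tell osd.* injectargs '--osd_max_scrubs=1'

    # count PGs currently scrubbing or deep-scrubbing
    ceph pg dump pgs_brief 2>/dev/null | grep -c scrub

Note that even with osd_max_scrubs = 1 you can still see several PGs
scrubbing at once cluster-wide, as long as they don't share an OSD.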

On Sun, Jan 14, 2018, 12:23 PM Karun Josy <karunjo...@gmail.com> wrote:

> Hello,
>
> It appears that the cluster is having many slow requests while it is
> scrubbing and deep scrubbing. Also, sometimes we can see OSDs flapping.
>
> So we have set the flags: noscrub, nodeep-scrub.
>
> When we unset them, 5 PGs start to scrub.
> Is there a way to limit it to one at a time?
>
> # ceph daemon osd.35 config show | grep scrub
>     "mds_max_scrub_ops_in_progress": "5",
>     "mon_scrub_inject_crc_mismatch": "0.000000",
>     "mon_scrub_inject_missing_keys": "0.000000",
>     "mon_scrub_interval": "86400",
>     "mon_scrub_max_keys": "100",
>     "mon_scrub_timeout": "300",
>     "mon_warn_not_deep_scrubbed": "0",
>     "mon_warn_not_scrubbed": "0",
>     "osd_debug_scrub_chance_rewrite_digest": "0",
>     "osd_deep_scrub_interval": "604800.000000",
>     "osd_deep_scrub_randomize_ratio": "0.150000",
>     "osd_deep_scrub_stride": "524288",
>     "osd_deep_scrub_update_digest_min_age": "7200",
>     "osd_max_scrubs": "1",
>     "osd_op_queue_mclock_scrub_lim": "0.001000",
>     "osd_op_queue_mclock_scrub_res": "0.000000",
>     "osd_op_queue_mclock_scrub_wgt": "1.000000",
>     "osd_requested_scrub_priority": "120",
>     "osd_scrub_auto_repair": "false",
>     "osd_scrub_auto_repair_num_errors": "5",
>     "osd_scrub_backoff_ratio": "0.660000",
>     "osd_scrub_begin_hour": "0",
>     "osd_scrub_chunk_max": "25",
>     "osd_scrub_chunk_min": "5",
>     "osd_scrub_cost": "52428800",
>     "osd_scrub_during_recovery": "false",
>     "osd_scrub_end_hour": "24",
>     "osd_scrub_interval_randomize_ratio": "0.500000",
>     "osd_scrub_invalid_stats": "true",
>     "osd_scrub_load_threshold": "0.500000",
>     "osd_scrub_max_interval": "604800.000000",
>     "osd_scrub_min_interval": "86400.000000",
>     "osd_scrub_priority": "5",
>     "osd_scrub_sleep": "0.000000",
>
>
> Karun
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
