Kirby Haze wrote:
> > I tried to do
> > $ ceph tell osd.* config set osd_op_queue wpq
> > but got
> > Error EPERM: error setting 'osd_op_queue' to 'wpq': (1) Operation not permitted
> > - is there another way to enable wpq instead of mclock?
> 
> 
> In general, I'd recommend that ~98% of config changes go through `ceph
> config set _`, because those get stored in the mon. `ceph tell osd.*`
> config changes don't persist after a daemon restarts; daemons read their
> config from ceph.conf and the centralized config (`ceph config dump`).
> 
> ceph config set osd osd_op_queue wpq
> 
> Just a heads-up that you need to restart all your OSDs for this setting to
> take effect.

Indeed. I did it that way because the cluster is not in production yet,
and it takes only a "systemctl restart ceph.target" to get rid of all my
experiments with config options.
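
For the record, the persistent route suggested above would then look roughly
like this (the verification steps and osd.0 are just examples; the restart is
the same systemd one as above):

$ ceph config set osd osd_op_queue wpq     # stored in the mon config database
$ ceph config get osd osd_op_queue         # what the mons have stored
$ ceph config show osd.0 osd_op_queue      # what a running OSD actually uses
# systemctl restart ceph.target            # on each OSD host, to apply it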

> On another note, maybe also check whether you have the drives' write-back
> caches disabled, since you are on HDDs. Write-through mode is generally
> recommended.
> https://docs.ceph.com/en/latest/start/hardware-recommendations/#write-caches

Yes, of course. I discussed it on this very list a year or two ago.
It helps massively when the HDDs are near 100% busy, but that is not
my case now.
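
(For reference, checking and disabling the on-disk write cache is typically
something along these lines; /dev/sdX is a placeholder:

# hdparm -W /dev/sdX           # show the current write-cache setting (SATA)
# hdparm -W0 /dev/sdX          # disable it
# sdparm --get=WCE /dev/sdX    # the SAS/SCSI equivalent
# sdparm --clear=WCE /dev/sdX

see the hardware-recommendations page linked above for the details.)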

Thanks for your suggestions!

-Yenya

> On Tue, Oct 14, 2025 at 7:16 AM Jan Kasprzak <[email protected]> wrote:
> 
> > [replying to multiple messages at once]
> >
> > Anthony D'Atri wrote:
> > > `ceph osd df`
> > > `ceph osd dump | grep pool`
> >
> > Attached below.
> >
> > > >> Note that you have just 36 OSDs, and each of your EC PGs needs to
> > reserve 6 of them, so you have a certain gridlock factor.
> >
> > Yes, but I guess I should see at least some %iowait or something if
> > the HDDs were the bottleneck. Currently I see this:
> >
> > $ for i in $hosts; do ssh root@$i 'top -n 1 -b | grep ^%Cpu'; done
> > %Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 99.1 id,  0.3 wa,  0.0 hi,  0.1 si,  0.0 st
> > %Cpu(s):  0.3 us,  0.5 sy,  0.0 ni, 98.4 id,  0.6 wa,  0.1 hi,  0.2 si,  0.0 st
> > %Cpu(s):  0.2 us,  0.4 sy,  0.0 ni, 98.3 id,  0.9 wa,  0.1 hi,  0.2 si,  0.0 st
> > %Cpu(s):  0.1 us,  0.3 sy,  0.0 ni, 99.6 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
> > %Cpu(s):  0.1 us,  0.3 sy,  0.0 ni, 99.1 id,  0.5 wa,  0.0 hi,  0.1 si,  0.0 st
> > %Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 99.5 id,  0.2 wa,  0.0 hi,  0.1 si,  0.0 st
> > %Cpu(s):  0.1 us,  0.3 sy,  0.0 ni, 99.6 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
> > %Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 98.5 id,  0.7 wa,  0.1 hi,  0.2 si,  0.0 st
> > %Cpu(s):  0.1 us,  0.3 sy,  0.0 ni, 99.0 id,  0.4 wa,  0.1 hi,  0.1 si,  0.0 st
> > (the hosts have 96 CPUs as seen in /proc/cpuinfo, so a single busy CPU
> > is slightly more than 1 % of total CPU time).
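
(A per-device view can be more telling than the aggregate CPU wait; a rough
check, reusing the same ssh loop, would be something like

$ for i in $hosts; do echo "== $i"; ssh root@$i 'iostat -dx 5 2'; done

where the second report covers the 5-second interval and the %util column
shows how busy each individual disk is.)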
> >
> > > >> Plus the usual wpq vs mclock.
> > > >
> > > > Could you elaborate?
> > >
> > > Google ceph wpq mclock.  In the last couple of releases the mclock
> > > scheduler is the default, but especially on HDDs, and especially with
> > > EC, it can cause backfill to be super slow.  There are some improvements
> > > in the pipeline, but a lot of people have just reverted to the wpq
> > > scheduler for now. You can find instructions for that.
> >
> > I tried to do
> >
> > $ ceph tell osd.* config set osd_op_queue wpq
> >
> > but got
> >
> > Error EPERM: error setting 'osd_op_queue' to 'wpq': (1) Operation not permitted
> >
> > - is there another way to enable wpq instead of mclock?
> >
> > Kirby Haze wrote:
> > > I will assume you are running the default mclock scheduler and not wpq.
> > > I'm not too familiar with tuning mclock settings, but these are the docs
> > > to look at:
> > >
> > > https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/#recovery-backfill-options
> > >
> > > osd_max_backfills is set to 1 by default, and this is the first thing I
> > > would tune if you want faster backfilling.
> >
> > Yes; I tried to increase it to 30, but still got only one PG in
> > the backfilling state and the rest in backfill_wait.
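
In that situation it is also worth double-checking what value the running
OSDs actually ended up with, e.g. for a single one of them:

$ ceph config show osd.0 osd_max_backfills      # value the daemon runs with
$ ceph tell osd.0 config get osd_max_backfills  # ask the daemon directly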
> >
> > Stephan Hohn wrote:
> > > Using the mclock scheduler, you can do the following:
> > >
> > > set osd_mclock_override_recovery_settings to true, e.g.
> >
> > And this is what I missed. By itself, setting osd_max_backfills does
> > nothing unless the above option is set to true. With this, I got all of
> > the backfill_wait'ing PGs into the backfilling state, which got me to
> > ~150-200 MB/s of recovery I/O:
> >
> > ceph tell osd.* config set osd_mclock_override_recovery_settings true
> > ceph tell osd.* config set osd_max_backfills 30
> >
> > Setting the high_recovery_ops profile increased the recovery speed
> > further to 300-400 MB/s:
> >
> > ceph tell osd.* config set osd_mclock_profile high_recovery_ops
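
(The persistent equivalents of these three runtime changes, following the
"ceph config set" advice above, would be:

$ ceph config set osd osd_mclock_override_recovery_settings true
$ ceph config set osd osd_max_backfills 30
$ ceph config set osd osd_mclock_profile high_recovery_ops

so that they survive OSD restarts.)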
> >
> > But my cluster is still mostly idle (see the %Cpu stats above), and the
> > recovery speed could probably reach 800-900 MB/s, as I have a 10GBASE-T
> > network. Maybe the balancer can schedule more than 11 PGs for backfilling?
> >
> > Wannes Smet wrote:
> > > Perhaps not the root cause, but might be worth looking at if you
> > > haven't already:
> > >
> > >   * How is the host power profile configured? E.g., I'm running HPE:
> > >     System Configuration > BIOS/Platform Configuration (RBSU) >
> > >     Power Management > Power Profile > Maximum Performance
> > >
> > >   * In which C-state are your cores running? Use linux-cpupower or a
> > >     similar tool to verify. From my experience, if I don't configure
> > >     anything, 99% of the time it's in C6, and I want it to be in C0/C1.
> >
> > According to "cpupower idle-info", my CPUs support the POLL, C1, and C2
> > idle states, and spend most of the time in C2. I tried
> >
> > # cpupower idle-set --disable 2
> >
> > on two of my hosts (thus disabling the C2 state), but a minute of ping
> > between them did not change in any significant way.
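
(To roll that out to all hosts at once, the same ssh loop as for the top
output could be reused, e.g.:

$ for i in $hosts; do ssh root@$i 'cpupower idle-set --disable 2'; done

with "cpupower idle-info" run afterwards on each host as a sanity check.)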
> >
> > > What about CPU wait states? Do you see any? To visualize and correlate
> > > with HDDs, I personally like nmon
> > > (http://kb.ictbanking.net/article.php?id=550&oid=1); press lower-case
> > > 'L' to get a long-term graph of CPU usage. From my experience, if you
> > > see blue blocks ('W' if color isn't enabled), those are wait states, and
> > > ideally you want to see none at all. A very occasional blue (W) block
> > > might be ~acceptable, but if it's more than that, there's very likely
> > > hardware (HDDs would be my main suspect) noticeably dragging down
> > > performance.
> > >
> > > Pressing 'c' in nmon will toggle an overview per core. That'll give a
> > > bit more "visual" insight into how much time cores are spending in
> > > user/system/wait. To correlate with disk activity, press 'd' to toggle
> > > the disk stats with a graph of R/W activity on each disk ('h' shows the
> > > help).
> >
> > Interesting. I'll try it when I get time.
> >
> > So, thanks for all the replies!
> >
> > -Yenya
> >
> > $ ceph osd df
> > ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
> >  4    hdd  20.10739   1.00000   20 TiB  7.0 TiB  6.9 TiB  1.1 GiB   10 GiB   13 TiB  34.94  0.84  182      up
> >  9    hdd  20.10739   1.00000   20 TiB   11 TiB   10 TiB  800 MiB   15 GiB  9.5 TiB  52.66  1.26  209      up
> > 20    hdd  20.10739   1.00000   20 TiB   11 TiB   11 TiB  519 MiB   16 GiB  8.8 TiB  56.09  1.34  201      up
> > 28    hdd  20.10739   1.00000   20 TiB  7.0 TiB  6.9 TiB  934 MiB   10 GiB   13 TiB  34.76  0.83  180      up
> >  2    hdd  20.10739   1.00000   20 TiB  8.5 TiB  8.4 TiB  1.1 GiB   13 GiB   12 TiB  42.05  1.01  189      up
> > 10    hdd  20.10739   1.00000   20 TiB  6.5 TiB  6.4 TiB  704 MiB  9.5 GiB   14 TiB  32.43  0.78  169      up
> > 18    hdd  20.10739   1.00000   20 TiB  7.8 TiB  7.7 TiB  1.0 GiB   11 GiB   12 TiB  38.68  0.92  198      up
> > 29    hdd  20.10739   1.00000   20 TiB   11 TiB   11 TiB  1.2 GiB   16 GiB  8.8 TiB  56.17  1.34  220      up
> >  1    hdd  20.10739   1.00000   20 TiB  8.8 TiB  8.7 TiB  1.2 GiB   13 GiB   11 TiB  43.73  1.05  209      up
> > 11    hdd  20.10739   1.00000   20 TiB  8.0 TiB  7.9 TiB  861 MiB   11 GiB   12 TiB  39.89  0.95  193      up
> > 19    hdd  20.10739   1.00000   20 TiB  7.0 TiB  6.9 TiB  607 MiB   10 GiB   13 TiB  34.92  0.83  190      up
> > 27    hdd  20.10739   1.00000   20 TiB  8.5 TiB  8.4 TiB  1.2 GiB   13 GiB   12 TiB  42.21  1.01  187      up
> >  3    hdd  20.10739   1.00000   20 TiB  8.3 TiB  8.2 TiB  1.1 GiB   12 GiB   12 TiB  41.41  0.99  190      up
> > 13    hdd  20.10739   1.00000   20 TiB   11 TiB   11 TiB  774 MiB   15 GiB  9.4 TiB  53.03  1.27  208      up
> > 22    hdd  20.10739   1.00000   20 TiB  5.1 TiB  5.0 TiB  1.3 GiB  7.6 GiB   15 TiB  25.47  0.61  184      up
> > 31    hdd  20.10739   1.00000   20 TiB  6.3 TiB  6.2 TiB  663 MiB  9.1 GiB   14 TiB  31.17  0.74  175      up
> >  5    hdd  20.10739   1.00000   20 TiB  5.8 TiB  5.7 TiB  840 MiB  8.4 GiB   14 TiB  29.00  0.69  184      up
> > 12    hdd  20.10739   1.00000   20 TiB  7.5 TiB  7.4 TiB  907 MiB   11 GiB   13 TiB  37.53  0.90  177      up
> > 21    hdd  20.10739   1.00000   20 TiB  9.0 TiB  8.9 TiB  976 MiB   13 GiB   11 TiB  44.62  1.07  200      up
> > 30    hdd  20.10739   1.00000   20 TiB  8.4 TiB  8.3 TiB  1.0 GiB   12 GiB   12 TiB  41.76  1.00  193      up
> >  6    hdd  20.10739   1.00000   20 TiB  8.0 TiB  7.9 TiB  1.1 GiB   12 GiB   12 TiB  39.87  0.95  195      up
> > 14    hdd  20.10739   1.00000   20 TiB  9.0 TiB  8.9 TiB  1.1 GiB   13 GiB   11 TiB  44.75  1.07  209      up
> > 23    hdd  20.10739   1.00000   20 TiB  9.0 TiB  8.9 TiB  236 MiB   13 GiB   11 TiB  44.81  1.07  188      up
> > 33    hdd  20.10739   1.00000   20 TiB  7.2 TiB  7.1 TiB  355 MiB   11 GiB   13 TiB  35.94  0.86  161      up
> >  8    hdd  20.10739   1.00000   20 TiB  9.7 TiB  9.6 TiB  1.0 GiB   14 GiB   10 TiB  48.29  1.15  191      up
> > 16    hdd  20.10739   1.00000   20 TiB  6.9 TiB  6.8 TiB  533 MiB   10 GiB   13 TiB  34.18  0.82  192      up
> > 25    hdd  20.10739   1.00000   20 TiB  8.3 TiB  8.2 TiB  1.0 GiB   12 GiB   12 TiB  41.45  0.99  187      up
> > 32    hdd  20.10739   1.00000   20 TiB  8.6 TiB  8.5 TiB  1.6 GiB   12 GiB   12 TiB  42.55  1.02  200      up
> >  7    hdd  20.10739   1.00000   20 TiB   10 TiB   10 TiB  832 MiB   15 GiB  9.9 TiB  50.52  1.21  192      up
> > 17    hdd  20.10739   1.00000   20 TiB  9.8 TiB  9.7 TiB  832 MiB   14 GiB   10 TiB  48.85  1.17  205      up
> > 24    hdd  20.10739   1.00000   20 TiB  8.5 TiB  8.4 TiB  918 MiB   13 GiB   12 TiB  42.44  1.01  189      up
> > 34    hdd  20.10739   1.00000   20 TiB  7.9 TiB  7.8 TiB  720 MiB   12 GiB   12 TiB  39.52  0.94  187      up
> >  0    hdd  20.10739   1.00000   20 TiB  9.6 TiB  9.5 TiB  793 MiB   14 GiB   10 TiB  47.91  1.15  207      up
> > 15    hdd  20.10739   1.00000   20 TiB  9.3 TiB  9.2 TiB  325 MiB   13 GiB   11 TiB  46.21  1.10  198      up
> > 26    hdd  20.10739   1.00000   20 TiB  8.6 TiB  8.5 TiB  702 MiB   12 GiB   12 TiB  42.59  1.02  197      up
> > 35    hdd  20.10739   1.00000   20 TiB  8.8 TiB  8.7 TiB  1.6 GiB   13 GiB   11 TiB  43.83  1.05  179      up
> >                         TOTAL  724 TiB  303 TiB  299 TiB   32 GiB  442 GiB  421 TiB  41.84
> > MIN/MAX VAR: 0.61/1.34  STDDEV: 7.12
> >
> >
> > $ ceph osd dump | grep pool
> > pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 46 flags
> > hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
> > read_balance_score 37.50
> > pool 2 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 68 lfor 0/0/66
> > flags hashpspool stripe_width 0 application mystorage read_balance_score
> > 4.52
> > pool 4 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 1
> > object_hash rjenkins pg_num 128 pgp_num 79 pgp_num_target 128
> > autoscale_mode on last_change 1370 lfor 0/0/423 flags
> > hashpspool,ec_overwrites stripe_width 16384 application mystorage
> > pool 5 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 157 lfor
> > 0/0/155 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.39
> > pool 6 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 2
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 157
> > lfor 0/0/155 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 9 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/249 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.36
> > pool 10 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 4
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/251 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 11 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/251 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.52
> > pool 12 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 5
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/253 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 13 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/253 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.36
> > pool 14 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 6
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/255 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 15 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/255 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 2.27
> > pool 16 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 7
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/257 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 17 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/257 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.39
> > pool 18 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 8
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/259 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 19 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/259 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.52
> > pool 20 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 9
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/261 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 21 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/261 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.50
> > pool 22 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 10
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/263 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 23 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309 lfor
> > 0/0/263 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.51
> > pool 24 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 11
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 309
> > lfor 0/0/265 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 25 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/322 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.39
> > pool 26 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 12
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/324 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 27 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/324 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.49
> > pool 28 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 13
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/326 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 29 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/326 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.37
> > pool 30 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 14
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/328 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 31 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/328 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.38
> > pool 32 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 15
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/330 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 33 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/330 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.52
> > pool 34 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 16
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/332 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 35 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/332 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.38
> > pool 36 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 17
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/334 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 37 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/334 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.38
> > pool 38 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 18
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/336 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 39 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/336 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 2.27
> > pool 40 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 19
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/338 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 41 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/338 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 4.52
> > pool 42 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 20
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/340 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 43 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/340 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.36
> > pool 44 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 21
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/342 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 45 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/342 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.37
> > pool 46 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 22
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/344 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> > pool 47 'xxx.meta' replicated size 3 min_size 2 crush_rule 0 object_hash
> > rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348 lfor
> > 0/0/344 flags hashpspool stripe_width 0 application mystorage
> > read_balance_score 3.37
> > pool 48 'xxx.data' erasure profile k4m2 size 6 min_size 5 crush_rule 23
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 348
> > lfor 0/0/346 flags hashpspool,ec_overwrites stripe_width 16384 application
> > mystorage
> >

-- 
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| https://www.fi.muni.cz/~kas/                        GPG: 4096R/A45477D5 |
    We all agree on the necessity of compromise. We just can't agree on
    when it's necessary to compromise.                     --Larry Wall
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
