Hello,
I'm running a (very) small cluster and I'd like to turn on pg_autoscale.
According to the documentation here (http://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/), running

ceph config set global osd_pool_default_autoscale_mode <mode>

should enable this by default; however, running the command below:

$ ceph config help osd_pool_default_autoscale_mode
Error ENOENT:

suggests that this setting name isn't correct.
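For what it's worth, this is roughly how I went looking for the real option name (I believe "ceph config ls" is available on Nautilus; grepping the sources would work just as well):

$ ceph config ls | grep autoscale
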
By grepping around I found osd_pool_default_pg_autoscale_mode (note the _pg after "default"), which looks like the correct setting; however, it seems it can't be changed at runtime, contrary to what the docs suggest:

$ ceph config help osd_pool_default_pg_autoscale_mode
osd_pool_default_pg_autoscale_mode - Default PG autoscaling behavior for new pools
  (str, advanced)
  Default: warn
  Possible values:  off warn on
  Can update at runtime: false

So, after enabling it via ceph-ansible by overriding the conf, I also tried to set it on the existing pools.
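The override boils down to something like this in the rendered ceph.conf (I'm reproducing it from memory, so treat it as a sketch rather than the exact file ceph-ansible wrote):

[global]
# default autoscale mode applied to newly created pools
osd_pool_default_pg_autoscale_mode = on
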
This is the autoscale status before my attempt on the existing pools:

$ ceph osd pool autoscale-status -f json-pretty | jq '[ .[] | {'pool_name': .pool_name, 'pg_autoscale_mode': .pg_autoscale_mode }]'

[
  {
    "pool_name": "cephfs_metadata",
    "pg_autoscale_mode": "warn"
  },
  {
    "pool_name": "default.rgw.meta",
    "pg_autoscale_mode": "warn"
  },
  {
    "pool_name": "cephfs_data",
    "pg_autoscale_mode": "warn"
  },
  {
    "pool_name": "default.rgw.control",
    "pg_autoscale_mode": "warn"
  },
  {
    "pool_name": ".rgw.root",
    "pg_autoscale_mode": "warn"
  },
  {
    "pool_name": "default.rgw.log",
    "pg_autoscale_mode": "warn"
  }
]

Then I tried executing the following:

$ docker exec -it ceph-mon-sagittarius ceph osd pool set cephfs_metadata pg_autoscale_mode on
set pool 2 pg_autoscale_mode to on

However, even though the operation seems to have succeeded, the following command returns:

$ docker exec -it ceph-mon-sagittarius ceph osd pool get cephfs_metadata pg_autoscale_mode
pg_autoscale_mode: warn
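
To rule out the mon just showing me a stale value, I guess the same flag could also be read straight from the pool listing; I assume the pool line from "ceph osd pool ls detail" includes the autoscale mode:

$ docker exec -it ceph-mon-sagittarius ceph osd pool ls detail | grep cephfs_metadata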


And since the pg_autoscaler module is turned on, this is the output of ceph status:

$ ceph status
  cluster:
    id:     d5c50302-0d8e-47cb-ab86-c15842372900
    health: HEALTH_WARN
            5 pools have too many placement groups

  services:
    mon: 1 daemons, quorum sagittarius (age 5m)
    mgr: sagittarius(active, since 2m)
    mds: cephfs:1 {0=sagittarius=up:active}
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active (sagittarius.rgw0)

  data:
    pools:   6 pools, 184 pgs
    objects: 242.08k objects, 862 GiB
    usage:   1.7 TiB used, 27 TiB / 29 TiB avail
    pgs:     184 active+clean
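
In case it helps, I assume "ceph health detail" would name the five pools behind that warning:

$ docker exec -it ceph-mon-sagittarius ceph health detail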

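For completeness, once a single pool sticks, the plan is simply to loop over every pool with something along these lines (the docker exec wrapper is only there because my daemons are containerized):

$ for p in $(docker exec ceph-mon-sagittarius ceph osd pool ls); do docker exec ceph-mon-sagittarius ceph osd pool set "$p" pg_autoscale_mode on; done
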
What am I doing wrong?
Thank you.

Daniele