Hi, I have a Nautilus (14.2.1) installation with a very unbalanced CephFS pool: there are 430 OSDs in the cluster, but this pool has only 8 PGs (pg_num and pgp_num) and 118 TiB used:
# ceph -s
  cluster:
    id:     a2269da7-e399-484a-b6ae-4ee1a31a4154
    health: HEALTH_WARN
            1 nearfull osd(s)
            2 pool(s) nearfull

  services:
    mon: 3 daemons, quorum mon21,mon22,mon23 (age 7M)
    mgr: mon23(active, since 8M), standbys: mon22, mon21
    mds: cephfs:2 {0=mon21=up:active,1=mon22=up:active} 1 up:standby
    osd: 430 osds: 430 up, 430 in

  data:
    pools:   2 pools, 16 pgs
    objects: 10.07M objects, 38 TiB
    usage:   118 TiB used, 4.5 PiB / 4.6 PiB avail
    pgs:     15 active+clean
             1  active+clean+scrubbing+deep
# ceph osd pool get cephfs_data pg_num
pg_num: 8
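For reference, pgp_num and the autoscaler mode can be queried the same way (commands shown without their output; as far as I know the pg_autoscale_mode option exists on 14.2.x):
# ceph osd pool get cephfs_data pgp_num
# ceph osd pool get cephfs_data pg_autoscale_mode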
Because of this bad configuration I get this health warning:
# ceph status
  cluster:
    id:     a2269da7-e399-484a-b6ae-4ee1a31a4154
    health: HEALTH_WARN
            1 nearfull osd(s)
            2 pool(s) nearfull
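(The detailed version of this warning, listing exactly which OSDs and pools are flagged, can be printed with the command below; output omitted here.)
# ceph health detail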
I've discovered that some OSDs are nearly full:
# ceph osd status
|  id | host  |  used | avail | wr ops | wr data | rd ops | rd data |       state        |
| 113 | osd23 | 9824G | 1351G |    0   |    0    |    0   |    0    | exists,nearfull,up |
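For reference, the per-OSD utilization, including the REWEIGHT and %USE columns, can be listed with the command below (output omitted, one line per OSD plus the CRUSH tree):
# ceph osd df tree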
I've tried to reweight this OSD:
ceph osd reweight osd.113 0.9
But the reweight doesn't seem to trigger any data movement. I've also tried to increase the pg_num and pgp_num of the pool, but that doesn't work either:
# ceph osd pool set cephfs_data pg_num 128
set pool 1 pg_num to 128
# ceph osd pool get cephfs_data pg_num
pg_num: 8
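If I understand the Nautilus behaviour correctly, a pending pg_num increase should at least show up as a target value in the pool details, so this is probably worth checking (I'm assuming pg_num_target / pgp_num_target is what to look for in the output):
# ceph osd pool ls detail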
What could be the reason for this problem?