[ceph-users] bluestore osd nearfull but no pgs on it

2023-11-27 Thread Debian

Hi,

after a massive rebalance (tunables) my small SSD OSDs are getting full.
I changed my CRUSH rules so that there are actually no PGs/pools mapped
to them anymore, but the disks stay full:


ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)


ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49   0 up          osd.158
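

For reference, this is roughly how I double-check that the OSD really
holds no PGs (a sketch; the osd id 158 is taken from the output above):

# list any PGs that still map to this OSD (primary or replica);
# an empty result means the CRUSH change really took effect
ceph pg ls-by-osd 158

# usage per OSD laid out along the CRUSH tree, so stray data stands out
ceph osd df tree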


inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available
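

(For context: the output above is what ceph-bluestore-tool prints; a
sketch of how I'd reproduce and cross-check it, assuming the default
data path /var/lib/ceph/osd/ceph-158 for this OSD:)

# with the OSD stopped: how much of the block device bluefs owns
# vs. what bluestore still reports as available
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-158

# with the OSD running, similar counters should be visible via the
# admin socket (perf dump, bluefs/bluestore sections)
ceph daemon osd.158 perf dump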


When I recreate the OSD, it fills up again.
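
Since the OSD fills up again after being recreated, this is a sketch of
how one might verify that no pool still targets the ssd device class
after the rule change (nothing here is specific to my setup):

# dump all CRUSH rules and check which ones select the ssd class
ceph osd crush rule dump

# show the CRUSH tree including the per-device-class shadow hierarchy
ceph osd crush tree --show-shadow

# list pools with their crush_rule, so any pool still using an ssd rule stands out
ceph osd pool ls detail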

Any suggestions?

Thanks & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

