I've just upgraded our Ceph cluster from Pacific 16.2.15 to Reef 18.2.7.

After the upgrade I see the following warnings:

[WRN] BLUEFS_SPILLOVER: 5 OSD(s) experiencing BlueFS spillover
     osd.110 spilled over 4.5 GiB metadata from 'db' device (8.0 GiB used of 83 GiB) to slow device
     osd.455 spilled over 1.1 GiB metadata from 'db' device (11 GiB used of 83 GiB) to slow device
     osd.533 spilled over 426 MiB metadata from 'db' device (10 GiB used of 83 GiB) to slow device
     osd.560 spilled over 389 MiB metadata from 'db' device (9.8 GiB used of 83 GiB) to slow device
     osd.597 spilled over 8.6 GiB metadata from 'db' device (7.7 GiB used of 83 GiB) to slow device
[WRN] BLUESTORE_SLOW_OP_ALERT: 4 OSD(s) experiencing slow operations in BlueStore
     osd.410 observed slow operation indications in BlueStore
     osd.443 observed slow operation indications in BlueStore
     osd.508 observed slow operation indications in BlueStore
     osd.593 observed slow operation indications in BlueStore

I've tried running 'ceph tell osd.XXX compact' on the affected OSDs, but it had no effect.
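
For reference, this is roughly what I ran (a sketch; I simply looped over the five OSD IDs from the spillover warning):

    # trigger an online RocksDB compaction on each affected OSD (sketch of what I ran)
    for id in 110 455 533 560 597; do
        ceph tell osd.$id compact
    done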

BlueFS stats:

ceph tell osd.110 bluefs stats
1 : device size 0x14b33fe000 : using 0x202c00000(8.0 GiB)
2 : device size 0x8e8ffc00000 : using 0x5d31d150000(5.8 TiB)
RocksDBBlueFSVolumeSelector
>>Settings<< extra=0 B, l0_size=1 GiB, l_base=1 GiB, l_multi=8 B
DEV/LEV     WAL         DB          SLOW        *           *           REAL        FILES
LOG         0 B         16 MiB      0 B         0 B         0 B         15 MiB      1
WAL         0 B         18 MiB      0 B         0 B         0 B         6.3 MiB     1
DB          0 B         8.0 GiB     0 B         0 B         0 B         8.0 GiB     140
SLOW        0 B         0 B         4.5 GiB     0 B         0 B         4.5 GiB     78
TOTAL       0 B         8.0 GiB     4.5 GiB     0 B         0 B         0 B         220
MAXIMUMS:
LOG         0 B         25 MiB      0 B         0 B         0 B         21 MiB
WAL         0 B         118 MiB     0 B         0 B         0 B         93 MiB
DB          0 B         8.2 GiB     0 B         0 B         0 B         8.2 GiB
SLOW        0 B         0 B         14 GiB      0 B         0 B         14 GiB
TOTAL       0 B         8.2 GiB     14 GiB      0 B         0 B         0 B
>> SIZE <<  0 B         79 GiB      8.5 TiB
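
If it helps, I can also pull the raw BlueFS usage counters from the admin socket on the OSD's host, e.g. for osd.110 (a sketch; the counter names come from the 'bluefs' section of perf dump):

    # BlueFS bytes used on the DB device and on the slow (spillover) device
    ceph daemon osd.110 perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'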

Any help on what to do next would be much appreciated.

