On 02.03.20 at 18:16, Reed Dier wrote:
> Easiest way I know would be to use
> $ ceph tell osd.X compact
> 
> This is what cures that whenever I have metadata spillover.

No, that does not help. Also keep in mind that in my case the metadata
hasn't spilled over; rather, I added an SSD (DB) device after the OSDs
were created.
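
For reference, the compaction Reed suggests amounts to running that
command against each affected OSD (osd.0 through osd.7 from the health
output below). A rough sketch, nothing more:

$ for id in 0 1 2 3 4 5 6 7; do ceph tell osd.${id} compact; done

That triggers an online RocksDB compaction on each OSD, but as said it
does not clear the spilled-over metadata in my case.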

Greets,
Stefan

> 
> Reed
> 
>> On Mar 2, 2020, at 3:32 AM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>> Hello,
>>
>> I added a DB device to my OSDs running Nautilus. The DB data migrated
>> from the HDD to the SSD (DB device) over the course of a few days.
>>
>> But now all of them seem to be stuck at:
>> # ceph health detail
>> HEALTH_WARN BlueFS spillover detected on 8 OSD(s)
>> BLUEFS_SPILLOVER BlueFS spillover detected on 8 OSD(s)
>>     osd.0 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.1 spilled over 3.4 MiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.2 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.3 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.4 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.5 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.6 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>     osd.7 spilled over 128 KiB metadata from 'db' device (12 GiB used
>> of 185 GiB) to slow device
>>
>> Any idea how to force Ceph to move the last 128 KiB to the DB device?
>>
>> Greets,
>> Stefan
> 
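For completeness, one avenue that might apply here (an assumption on my
part, not something confirmed in this thread) is ceph-bluestore-tool's
bluefs-bdev-migrate, which moves BlueFS data off the slow device onto
the DB device while the OSD is stopped. A rough sketch for osd.0, using
the usual default paths rather than anything verified here:

# systemctl stop ceph-osd@0
# ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-0 \
      --devs-source /var/lib/ceph/osd/ceph-0/block \
      --dev-target /var/lib/ceph/osd/ceph-0/block.db
# systemctl start ceph-osd@0

Once the OSD is back up, ceph health detail should stop listing it under
BLUEFS_SPILLOVER if the migration moved everything off the slow device.
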
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
