Hi Alwin,

On 24/3/20 at 12:24, Alwin Antreich wrote:
On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote:
We're seeing a spillover issue with Ceph, using 14.2.8:
[...]
3. ceph health detail
    HEALTH_WARN BlueFS spillover detected on 3 OSD
    BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD
    osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of
    6.0 GiB) to slow device
    osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of
    6.0 GiB) to slow device
    osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of
    6.0 GiB) to slow device

I may be overlooking something, any ideas? I also just found the following Ceph
issue:

https://tracker.ceph.com/issues/38745

5 MiB of metadata on the slow device isn't a big problem, but the cluster is
permanently in HEALTH_WARN state... :)
The DB/WAL device is too small and all the new metadata has to be written
to the slow device. This will destroy performance.

I think the size changes, as the DB gets compacted.
Yes. But it isn't too small... it's 6 GiB and there's only ~560 MiB of data.
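
This is where I'm reading the usage from, in case I'm looking at the wrong
counters (run on the OSD node, with osd.3 as one of the affected OSDs):

    # query the OSD admin socket; the bluefs section holds the DB usage counters
    ceph daemon osd.3 perf dump bluefs
    # if I read them right, db_used_bytes / db_total_bytes match the
    # "556 MiB used of 6.0 GiB" above and slow_used_bytes the 5 MiB spilled over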

The easiest way is to destroy and re-create the OSD with a bigger
DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB.

It's well below the 3 GiB limit in the guideline ;)
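
For now I'll probably just trigger a manual compaction on the affected OSDs and
see whether the warning clears, something like:

    # compaction adds some I/O load, so one OSD at a time
    ceph tell osd.3 compact

and if it keeps coming back even with plenty of free space on the DB device, I
suppose the warning could be silenced with bluestore_warn_on_bluefs_spillover=false
(if I have the option name right), but I'd rather understand why it spills in
the first place.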

Thanks a lot
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
