Are you running the Ceph insights mgr plugin? I was, and my cluster did
this on rebalance. I turned it off and it has been fine since.
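
If you want to try the same thing, this is roughly what I ran (a
sketch; as far as I know "insights prune-health" takes an age in
hours, and 0 should drop all of the stored health history):

  # check whether the insights module is enabled
  ceph mgr module ls | grep -i insights

  # prune the health history insights has been writing into the mon store
  ceph insights prune-health 0

  # stop it from writing any more
  ceph mgr module disable insights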

On Fri, Jul 10, 2020 at 5:17 PM Michael Fladischer <[email protected]> wrote:

> Hi,
>
> our cluster is on Octopus 15.2.4. We noticed that our MONs all ran out
> of space yesterday because the store.db folder kept growing until it
> filled up the filesystem. We added more space to the MON nodes, but
> store.db keeps growing.
>
> Right now it's ~220GiB on the two MON nodes that are active. We shut
> down one MON node when it hit ~98GiB; it seems it trimmed its local
> store.db down to 102MiB, but it is now growing again as well.
>
> Checking the keys in store.db while the MON is offline shows a lot of
> "logm" and "osdmap" keys:
>
> ceph-monstore-tool <path> dump-keys | awk '{print $1}' | uniq -c
>       86 auth
>        2 config
>       11 health
>   275929 logm
>       55 mds_health
>        1 mds_metadata
>      602 mdsmap
>      599 mgr
>        1 mgr_command_descs
>        3 mgr_metadata
>      209 mgrstat
>      461 mon_config_key
>        1 mon_sync
>        7 monitor
>        1 monitor_store
>        7 monmap
>      454 osd_metadata
>        1 osd_pg_creating
>     4804 osd_snap
>   138366 osdmap
>      538 paxos
>        5 pgmap
>
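The mons only trim old logm/osdmap entries up to the oldest epoch still
referenced by a non-clean PG, so while PGs sit in backfill_full that
floor never moves and the counts above keep climbing. You can see the
untrimmed range like this (a sketch, assuming jq is installed; the
fields come from the JSON that "ceph report" prints):

  # first vs. last osdmap epoch still held by the mons
  ceph report | jq '.osdmap_first_committed, .osdmap_last_committed'

A large gap between the two means the maps simply haven't been trimmed
yet, not that anything is leaking.
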
> I already tried compacting it with "ceph tell ..." and
> "ceph-monstore-tool <path> compact" but it stayed the same size. Also
> copying it with "ceph-monstore-tool <path> store-copy <new-path>" just
> created a copy of the same size.
>
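For reference, the explicit forms of those commands usually look like
this (a sketch, with a hypothetical mon id "mon.a"):

  # one-off compaction of a running mon
  ceph tell mon.a compact

  # or compact automatically on every daemon start
  ceph config set mon mon_compact_on_start true
  systemctl restart ceph-mon@a   # hypothetical unit name

Note that compaction can only reclaim space from keys that have already
been trimmed; it can't shrink a store whose logm/osdmap entries are
still live, which is consistent with what you're seeing.
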
> Our cluster is currently in WARN status because we are low on space
> and several OSDs are in a backfill_full state. Could this be related?
>
> Regards,
> Michael
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
