Mons won't compact and clean up old maps while any PG is in a non-clean
state.  What is your `ceph status`?  I would guess this isn't your problem,
but thought I'd throw it out there just in case.
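If you want a quick check, something like this (assuming the standard
admin keyring is on the node) will show whether any PGs are stuck in a
non-clean state:

    ceph status
    ceph pg dump_stuck unclean

If everything reports active+clean, then PG state isn't what's blocking
the mons from trimming.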

Also, in Hammer OSDs started telling each other when they clean up old
maps.  That change introduced a map-pointer leak: the pointer tracking
which maps to keep could get set to NULL, and an affected OSD would stop
deleting maps until it was restarted, which forced it to ask the mons
what that pointer should be.  This bug was fixed in 0.94.8.  You can
check whether you're running into it by performing a `du` on the meta
folder inside of an OSD, as shown below.  I know your complaint is about
the mons getting really large, but it sounded similar to this issue of
OSDs getting really large with old maps.
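A rough check, assuming the default FileStore layout that Hammer uses
(adjust the path if your OSD data lives elsewhere):

    # size of the locally stored osdmap copies for each OSD on this node
    du -sh /var/lib/ceph/osd/ceph-*/current/meta

On an OSD hitting the bug, that directory keeps growing as old osdmap
copies pile up; restarting the daemon makes it ask the mons for the trim
pointer again and resume deleting old maps.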

On Sun, Feb 25, 2018 at 8:31 AM yu2xiangyang <[email protected]> wrote:

>
> Hi cephers,
>
>
> Recently I have run into a problem with leveldb, which is the default
> monitor store backend.
>
>
> My ceph version is 0.94.5.
>
>
> I have a disk formatted as xfs and mounted at
> /var/lib/ceph/mon/mon.<id>; its size is 100GB.
>
>
> The monitor store grows by about 1GB per hour and never seems to
> compact; at one point it reached 60GB.
>
>
> I stopped the monitor and backed up the monitor data for analysis; its
> size was 23GB.
>
>
> I found that after manually compacting the range paxos 10000 to paxos
> 20000 (in fact, the keys paxos 10000 and paxos 20000 had already been
> deleted, and ceph had already compacted the range paxos 10000 to
> 20000), the monitor store size dropped to only 489MB.
>
>
> Actually, we can compact the monitor store with the `ceph tell mon.xxx
> compact` command, but the store keeps exploding in size, so there must
> be some problem with leveldb itself or with how ceph uses leveldb.
>
>
> Has anyone ever analyzed this monitor store problem with leveldb?
>
>
> Best regards,
>  Brandy
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
