On Tue, Jan 8, 2019 at 12:48 PM Thomas Byrne - UKRI STFC
<[email protected]> wrote:
>
> For what it's worth, I think the behaviour Pardhiv and Bryan are describing 
> is not quite normal, and sounds similar to something we see on our large 
> luminous cluster with elderly (created as jewel?) monitors. After large 
> operations which result in the mon stores growing to 20GB+, leaving the 
> cluster with all PGs active+clean for days/weeks will usually not result in 
> compaction, and the store sizes will slowly grow.
>
> I've played around with restarting monitors with and without 
> mon_compact_on_start set, and using 'ceph tell mon.[id] compact'. For this 
> cluster, I found the most reliable way to trigger a compaction was to restart 
> all monitor daemons, one at a time, *without* compact_on_start set. The 
> stores rapidly compact down to ~1GB in a minute or less after the last mon 
> restarts.

+1, exactly the same issue and workaround here. See this thread, which
had no resolution:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-August/029423.html
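
For anyone hitting the same thing, the rolling-restart workaround Tom
describes can be sketched roughly like this (the hostnames, mon IDs, and
systemd unit names below are assumptions; adapt them to your deployment):

```shell
# Sketch: restart all mon daemons, one at a time, *without*
# mon_compact_on_start set -- the sequence that reliably triggered
# compaction for us. "mon1..mon3" are hypothetical mon hosts/IDs.
for host in mon1 mon2 mon3; do
    ssh "$host" systemctl restart "ceph-mon@$host"
    # Give the mon time to rejoin quorum before touching the next one.
    sleep 60
done
# Store sizes should drop to ~1GB shortly after the last restart.
```

The point is that the restarts happen sequentially, so quorum is never
lost; in practice you'd verify quorum between restarts rather than just
sleeping.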

-- dan

>
>
> It's worth noting that occasionally (1 out of every 10 times, or fewer) the 
> stores will compact without prompting after all PGs become active+clean.
>
> I haven't put much time into this as I am planning on reinstalling the 
> monitors to get RocksDB mon stores. If the problem persists with the new 
> monitors I'll have another look at it.
>
> Cheers
> Tom
>
> > -----Original Message-----
> > From: ceph-users <[email protected]> On Behalf Of Wido
> > den Hollander
> > Sent: 08 January 2019 08:28
> > To: Pardhiv Karri <[email protected]>; Bryan Stillwell
> > <[email protected]>
> > Cc: ceph-users <[email protected]>
> > Subject: Re: [ceph-users] Is it possible to increase Ceph Mon store?
> >
> >
> >
> > On 1/7/19 11:15 PM, Pardhiv Karri wrote:
> > > Thank you, Bryan, for the information. We have 816 OSDs of size 2TB each.
> > > The "mon store is too big" warning popped up when no rebalancing had
> > > happened that month. The store was slightly above the 15360 threshold
> > > (around 15900 or 16100) and stayed there for more than a week. We ran
> > > "ceph tell mon.[ID] compact" earlier this week to get it back down.
> > > Currently the mon store is around 12G on each monitor. If it doesn't
> > > grow then I won't change the value, but if it grows and triggers the
> > > warning again then I will raise the "mon_data_size_warn" threshold.
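> > >
> > > If I do end up raising it, something like this in ceph.conf should do
> > > it (a sketch; the value is an assumption for a 32 GiB threshold, since
> > > the option is expressed in bytes):
> > >
> > > ```ini
> > > # ceph.conf sketch: raise the mon store size warning threshold
> > > # to 32 GiB. The value is in bytes: 34359738368 = 32 * 1024^3.
> > > [mon]
> > > mon_data_size_warn = 34359738368
> > > ```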
> > >
> >
> > This is normal. The MONs will keep a history of OSDMaps as long as one or
> > more PGs are not active+clean.
> >
> > They will trim the history after all the PGs are clean again; nothing to
> > worry about.
> >
> > You can increase the setting for the warning, but that will not shrink the
> > database.
> >
> > Just make sure your monitors have enough free space.
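> >
> > A quick way to eyeball the store size and the free space next to it
> > (the paths assume the default /var/lib/ceph layout, and "ceph-mon1" is
> > a hypothetical cluster-name/mon-ID pair):
> >
> > ```shell
> > # Size of the mon store itself, and free space on its filesystem.
> > du -sh /var/lib/ceph/mon/ceph-mon1/store.db
> > df -h /var/lib/ceph/mon/ceph-mon1
> > ```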
> >
> > Wido
> >
> > > Thanks,
> > > Pardhiv Karri
> > >
> > >
> > >
> > > On Mon, Jan 7, 2019 at 1:55 PM Bryan Stillwell <[email protected]
> > > <mailto:[email protected]>> wrote:
> > >
> > >     I believe the option you're looking for is mon_data_size_warn.  The
> > >     default is set to 16106127360.
> > >
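> > >
> > >     That default is just 15360 MiB expressed in bytes, which is easy
> > >     to verify with shell arithmetic:
> > >
> > >     ```shell
> > >     # 15360 MiB * 1024 * 1024 bytes/MiB = the 16106127360 default.
> > >     echo $((15360 * 1024 * 1024))
> > >     # → 16106127360
> > >     ```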
> > >
> > >     I've found that sometimes the mons need a little help getting
> > >     started with trimming if you just completed a large expansion.
> > >     Earlier today I had a cluster where the mon's data directory was
> > >     over 40GB on all the mons.  When I restarted them one at a time with
> > >     'mon_compact_on_start = true' set in the '[mon]' section of
> > >     ceph.conf, they stayed around 40GB in size.  However, when I was
> > >     about to hit send on an email to the list about this very topic, the
> > >     warning cleared up and the data directory is now between 1-3GB
> > >     on each of the mons.  This was on a cluster with >1900 OSDs.
> > >
> > >     Bryan
> > >
> > >
> > >     *From: *ceph-users <[email protected]
> > >     <mailto:[email protected]>> on behalf of Pardhiv
> > >     Karri <[email protected] <mailto:[email protected]>>
> > >     *Date: *Monday, January 7, 2019 at 11:08 AM
> > >     *To: *ceph-users <[email protected]
> > >     <mailto:[email protected]>>
> > >     *Subject: *[ceph-users] Is it possible to increase Ceph Mon store?
> > >
> > >     Hi,
> > >
> > >     We have a large Ceph cluster (Hammer version). We recently saw its
> > >     mon store growing too big, >15GB on all 3 monitors, without any
> > >     rebalancing happening for quite some time. We have compacted the DB
> > >     using "ceph tell mon.[ID] compact" for now. But is there a way to
> > >     increase the size threshold for the mon store to 32GB or something,
> > >     to avoid the Ceph health going to a warning state due to the mon
> > >     store growing too big?
> > >
> > >     --
> > >     Thanks,
> > >     *Pardhiv Karri*
> > >
> > >
> > > --
> > > *Pardhiv Karri*
> > > "Rise and Rise again until LAMBS become LIONS"
> > >
> > >
> > >
> > > _______________________________________________
> > > ceph-users mailing list
> > > [email protected]
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
