On older releases, at least, inflated DBs correlated with miserable recovery
performance and lots of slow requests. The DB and OSDs were also on HDD FWIW.
A single drive failure would result in substantial RBD impact.
> On Feb 18, 2019, at 3:28 AM, Dan van der Ster wrote:
>
> Not really.
>
> You should just restart your mons though -- if done one at a time it
> has zero impact on your clients.
OK, sure, we will restart the ceph-mons (starting with the non-leader
mons first, and the leader node last).
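To identify the current leader (so it can be restarted last), something
like this should work:

  # quorum_names lists all mons; quorum_leader_name is the one to restart last
  ceph quorum_status -f json-pretty | grep -E 'quorum_leader_name|quorum_names'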
On Mon, Feb 18, 2019 at 4:59 PM Dan van der Ster wrote:
>
> Not really.
>
> You should just restart your mons though -- if done one at a time it
> has zero impact on your clients.
>
> -- dan
Not really.
You should just restart your mons though -- if done one at a time it
has zero impact on your clients.
-- dan
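A minimal sketch of such a rolling restart, assuming systemd-managed
mons named a, b and c, with c the current leader:

  # restart the peons one at a time, the leader last,
  # confirming quorum is re-established after each restart
  systemctl restart ceph-mon@a
  ceph quorum_status -f json-pretty | grep quorum_names
  systemctl restart ceph-mon@b
  ceph quorum_status -f json-pretty | grep quorum_names
  systemctl restart ceph-mon@c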
On Mon, Feb 18, 2019 at 12:11 PM M Ranga Swami Reddy
wrote:
>
> Hi Sage - If the mon data increases, does this impact the ceph cluster
> performance (i.e., on ceph osd bench, etc.)?
Hi Sage - If the mon data increases, does this impact the ceph cluster
performance (i.e., on ceph osd bench, etc.)?
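For reference, the OSD bench mentioned here can be run against a single
OSD (osd.0 is just an example; by default it writes 1GB in 4MB blocks):

  ceph tell osd.0 bench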
On Fri, Feb 15, 2019 at 3:13 PM M Ranga Swami Reddy
wrote:
>
> Today I hit the warning again, even with 30G...
Today I hit the warning again, even with 30G...
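To see how large each mon store actually is, a quick check (default
data path; adjust the cluster name and mon id as needed):

  du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
  # the warning itself is visible in:
  ceph health detail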
Sure, will do. For now I have increased the size to 30G (from 15G).
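For the record, the threshold can be raised at runtime with injectargs
(the value is in bytes; 32212254720 bytes = 30G):

  ceph tell mon.* injectargs '--mon_data_size_warn=32212254720'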
Alternatively, we will increase the mon_data_size to 30G (from 15G).
Thanks
Swami
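A sketch of the persistent version of that change, via ceph.conf on the
mon hosts (value in bytes):

  [mon]
  mon data size warn = 32212254720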
>Compaction isn't necessary -- you should only need to restart all
>peons, then the leader. A few minutes later the DBs should start
>trimming.
As we are on a production cluster, it may not be safe to restart the
ceph-mons; we would prefer to compact the non-leader mons instead.
Is this ok?
Thanks
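For what it's worth, an online compaction of a single mon can be
triggered like this (mon.a is just an example id):

  ceph tell mon.a compact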
Hi Sage,
Sure, we will increase the mon_data_size to 30G to avoid this type of
warning. We are currently using a 500G disk here, which I guess should
be good enough.
Thanks
Swami
On Wed, Feb 6, 2019 at 5:56 PM Sage Weil wrote:
>
> Hi Swami
>
> The limit is somewhat arbitrary, based on cluster sizes we had seen when
> we picked it. In your case it should be perfectly safe to increase it.
Hi Dan,
>During backfilling scenarios, the mons keep old maps and grow quite
>quickly. So if you have balancing, pg splitting, etc. ongoing for
>awhile, the mon stores will eventually trigger that 15GB alarm.
>But the intended behavior is that once the PGs are all active+clean,
>the old maps should be trimmed and the disk space freed.
Hi,
With HEALTH_OK a mon data dir should be under 2GB for even such a large cluster.
During backfilling scenarios, the mons keep old maps and grow quite
quickly. So if you have balancing, pg splitting, etc. ongoing for
awhile, the mon stores will eventually trigger that 15GB alarm.
But the intended behavior is that once the PGs are all active+clean,
the old maps should be trimmed and the disk space freed.
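One way to verify that old maps are actually being trimmed is to watch
the committed osdmap range in the cluster report (assuming jq is
available and your release includes these fields):

  ceph report 2>/dev/null | jq '.osdmap_first_committed, .osdmap_last_committed'
  # a large, non-shrinking gap means old maps are being retained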
Hi Swami
The limit is somewhat arbitrary, based on cluster sizes we had seen when
we picked it. In your case it should be perfectly safe to increase it.
sage
On Wed, 6 Feb 2019, M Ranga Swami Reddy wrote:
> Hello - Are there any limits for mon_data_size for a cluster with 2PB
> (with 2000+ OSDs)?
Hello - Are there any limits for mon_data_size for a cluster with 2PB
(with 2000+ OSDs)?
Currently it is set to 15G. What is the logic behind this? Can we
increase it when we get the mon_data_size_warn messages?
I am getting the mon_data_size_warn message even though there is ample
free space on the disk.
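To check the value a mon is currently running with (run on the mon
host; the 15G default corresponds to 16106127360 bytes):

  ceph daemon mon.$(hostname -s) config get mon_data_size_warn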