Hi Wido,
Are your mons using rocksdb or still leveldb?
Are your mon stores trimming back to a small size after HEALTH_OK was restored?
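In case it's useful, here's how I'm checking ours (default paths; the
kv_backend file is an assumption on my part -- it should be there for mons
deployed on recent releases, older leveldb mons may not have it):

    # total on-disk size of each mon's store
    du -sch /var/lib/ceph/mon/*/store.db

    # which key/value backend each mon was deployed with
    cat /var/lib/ceph/mon/*/kv_backend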
One v12.2.2 cluster here just started showing the "is using a lot of
disk space" warning on one of our mons. In fact all three mons are now
using >16GB. I
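If I recall correctly, that warning comes from mon_data_size_warn, which
defaults to 15 GiB, so >16GB would trip it. To raise the threshold while
digging in (the value below is just an example, in bytes):

    # bump the warning threshold to 20 GiB on all mons, at runtime
    ceph tell mon.* injectargs '--mon_data_size_warn=21474836480'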
Thanks, Wido -- words to live by.
I had all kinds of problems with mon DBs not compacting under Firefly; it
really drove home the benefit of having ample space on the mons -- and the
necessity of having those DBs live on something faster than an LFF HDD.
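Back then the workaround was compacting by hand; something like this should
still apply (the mon ID is a placeholder):

    # compact one mon's store on demand
    ceph tell mon.a compact

    # or compact at every mon start, via ceph.conf on the mon hosts:
    [mon]
    mon_compact_on_start = true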
I've had this happen when using
On 05/02/18 15:54, Wes Dillingham wrote:
> Good data point on not trimming when non active+clean PGs are present.
> So am I reading this correct? It grew to 32GB? Did it end up growing
> beyond that, what was the max?
The largest Mon store size I've seen (in a 3000-OSD cluster) was about 66GB.
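If you want to see how many osdmap epochs the mons are still holding (they
keep the full history while any PG is not active+clean), something like this
works:

    # the gap between these two numbers is the untrimmed osdmap range
    ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'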
Good data point on not trimming when non active+clean PGs are present. So
am I reading this correct? It grew to 32GB? Did it end up growing beyond
that, what was the max? Also, is only ~18 PGs per OSD a reasonable number of
PGs per OSD? (quick arithmetic below the quoted message) I would think about
quadruple that would be ideal. Is this an
On Sat, 3 Feb 2018, Wido den Hollander wrote:
> Hi,
>
> I just wanted to inform people about the fact that Monitor databases can grow
> quite big when you have a large cluster which is performing a very long
> rebalance.
>
> I'm posting this on ceph-users and ceph-large as it applies to both,
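(On the PGs-per-OSD arithmetic above -- the numbers here are illustrative,
not from Wido's cluster:

    # PG replicas per OSD = sum over pools of (pg_num * pool size) / OSD count
    # e.g. one pool with pg_num=2048, size=3, spread over 340 OSDs:
    echo $(( 2048 * 3 / 340 ))   # -> 18, i.e. ~18 PG replicas per OSD

so roughly quadrupling pg_num would move you toward the commonly cited
~100 PGs-per-OSD target.)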
Hi,
I just wanted to inform people about the fact that Monitor databases can
grow quite big when you have a large cluster which is performing a very
long rebalance.
I'm posting this on ceph-users and ceph-large as it applies to both, but
you'll see this sooner on a cluster with a lot of