> In particular, when using leveldb, stalls occur while reading from or
> writing to the store - typically, leveldb is compacting when this
> happens. This causes all sorts of timeouts to be triggered, but the
> really annoying one is the lease timeout, which tends to result in a
> flapping quorum.
>
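A commonly suggested mitigation for compaction stalls like the ones described above is to compact the monitor store proactively. The ceph.conf fragment below is a sketch, not a recommendation from this thread; `mon_compact_on_start` only compacts when the daemon starts, and whether it helps depends on the workload:

```ini
[mon]
# Compact the monitor's backing store each time the mon daemon starts.
# This shrinks the store on disk but can make monitor startup slower.
mon_compact_on_start = true
```

A running monitor can also be compacted on demand with `ceph tell mon.<id> compact`.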
From: [...h-users-boun...@lists.ceph.com] on behalf of Wido den Hollander [w...@42on.com]
Sent: Tuesday, January 31, 2017 2:35 AM
To: Martin Palma; CEPH list
Subject: Re: [ceph-users] mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail

> On 31 January 2017 at 10:22, Martin Palma wrote:
>
> Hi all,
>
> our cluster is currently performing a big expansion and is in recovery
> mode (we doubled in size and osd# from 600 TB to 1.2 PB).

Yes, that is to be expected. When not all PGs are active+clean the MONs will
not trim the store.
Hi Wido,
thank you for the clarification. We will wait until recovery is over;
we have plenty of space on the mons :-)
Best,
Martin
On Tue, Jan 31, 2017 at 10:35 AM, Wido den Hollander wrote:
>
>> On 31 January 2017 at 10:22, Martin Palma wrote:
>>
>> Hi all,
>>
>> our cluster is currently performing a big expansion and is in recovery
>> mode (we doubled in size and osd# from 600 TB to 1.2 PB).
>
> Yes, that is to be expected. When not all PGs are active+clean the MONs
> will not trim the store.
> On 31 January 2017 at 10:22, Martin Palma wrote:
>
> Hi all,
>
> our cluster is currently performing a big expansion and is in recovery
> mode (we doubled in size and osd# from 600 TB to 1.2 PB).

Yes, that is to be expected. When not all PGs are active+clean the MONs will
not trim the store.
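For context on the numbers in the subject line: the 15360 MB threshold corresponds to the monitor's `mon_data_size_warn` option, which defaults to 15 GiB and is specified in bytes. If the growth during recovery is expected and the mon disks have room, the threshold can be raised; the ceph.conf fragment below is a sketch, with 30 GiB chosen purely as an example value:

```ini
[mon]
# Warn when a monitor's data store exceeds 30 GiB instead of the
# default 15 GiB (15360 MB). The value is given in bytes.
mon_data_size_warn = 32212254720
```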
Hi all,

our cluster is currently performing a big expansion and is in recovery
mode (we doubled in size and osd# from 600 TB to 1.2 PB).

Now we get the following message from our monitor nodes:

mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail

Reading [0] it says that it is
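The health warning quoted above fires on a simple size comparison against the threshold. The helper below is a hypothetical illustration of that check (it is not Ceph source code), using the default threshold of 15360 MB and the store size reported in this thread:

```python
def mon_store_warning(store_mb, warn_mb=15360):
    """Return a warning string once the mon store reaches the threshold.

    Hypothetical illustration only; warn_mb mirrors the default
    mon_data_size_warn of 15360 MB (15 GiB).
    """
    if store_mb >= warn_mb:
        return "store is getting too big! %d MB >= %d MB" % (store_mb, warn_mb)
    return None

# The size reported by mon.mon01 in this thread:
print(mon_store_warning(18119))
```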