Hi,
it took quite some time to recover the pgs, and indeed the problem with the
mds instances was due to the activating pgs. Once those were cleared, the
fs went back to its original state.
I had to restart some OSDs a few times, though, in order to get all the
pgs activated, and I didn't hit the limits.
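For anyone hitting the same state, the restart-and-watch cycle described above can be sketched with a few standard Ceph commands. They need a live cluster, so they are shown as comments here (the OSD id 12 is a placeholder, not from this thread); the "pgs not active" figure from `ceph -s` can also be pulled out in a script, as the runnable part below shows against pasted sample output:

```shell
# Sketch of the recovery loop described above; the OSD id is a placeholder.
# Prevent data reshuffling while OSDs bounce:
#   ceph osd set noout
# Restart an OSD suspected of holding pgs stuck activating:
#   systemctl restart ceph-osd@12
# Watch the pg states until nothing is left inactive:
#   ceph pg dump_stuck inactive
#   ceph -s
# When done:
#   ceph osd unset noout

# Extract the "pgs not active" percentage from `ceph -s` output
# (a pasted sample line stands in for a live cluster here):
ceph_status='pgs: 21.034% pgs not active'
pct=$(printf '%s\n' "$ceph_status" | grep -o '[0-9.]*% pgs not active' | cut -d% -f1)
echo "inactive pgs: ${pct}%"
```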
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Alessandro De
Salvo <alessandro.desa...@roma1.infn.it>
Sent: Monday, January 8, 2018 7:40:59 PM
To: Lincoln Bryant; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephfs degraded on ceph luminous 12.2.2
Thanks Lincoln,
indeed, as I said the cluster is recovering, so there are pending ops:
pgs: 21.034% pgs not active
1692310/24980804 objects degraded (6.774%)
5612149/24980804 objects misplaced (22.466%)
458 active+clean
329
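As a sanity check, the degraded and misplaced percentages in the status output above follow directly from the raw object counts; a one-liner reproduces them:

```shell
# Reproduce the percentages from the object counts in the `ceph -s` output:
# 1692310 degraded and 5612149 misplaced out of 24980804 total objects.
awk 'BEGIN {
  total = 24980804
  printf "degraded:  %.3f%%\n", 100 * 1692310 / total
  printf "misplaced: %.3f%%\n", 100 * 5612149 / total
}'
# → degraded:  6.774%
# → misplaced: 22.466%
```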
Hi Alessandro,
What is the state of your PGs? Inactive PGs have blocked CephFS
recovery on our cluster before. I'd try to clear any blocked ops and
see if the MDSes recover.
--Lincoln
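The checks Lincoln suggests map onto a few standard commands, shown as comments since they need a live cluster (the MDS name is a placeholder). Scanning for pgs stuck activating can be scripted; a simplified two-column sample stands in for real output in the runnable part, so the real column layout of `ceph pg dump` will differ:

```shell
# Commands behind the advice above (live cluster required; mds name is a placeholder):
#   ceph pg dump_stuck inactive                  # list pgs stuck in a non-active state
#   ceph health detail                           # slow/blocked requests and affected OSDs
#   ceph daemon mds.<name> dump_ops_in_flight    # pending ops on an MDS, via admin socket

# Filtering pg listings for "activating" states can be scripted; a simplified
# sample stands in for real output here:
pg_dump='PG_STAT STATE                UP
1.2f    activating+degraded  [3,7]
2.10    active+clean         [1,4]'
stuck=$(printf '%s\n' "$pg_dump" | awk '$2 ~ /activating/ { print $1 }')
echo "$stuck"
```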
On Mon, 2018-01-08 at 17:21 +0100, Alessandro De Salvo wrote:
> Hi,
>
> I'm running on ceph luminous 12.2.2