CephFS will be offline and show up as "damaged" in `ceph -s`. The fix is to downgrade to 13.2.1 and issue a "ceph fs repaired <rank>" command.
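For reference, the recovery sequence described above might look roughly like the following on a Debian/Ubuntu-style node. This is a sketch, not an authoritative procedure: the package name and version string, the use of apt pinning, and rank `0` are all assumptions; the repair command itself is the one quoted in this thread.

```shell
# Downgrade the MDS package to 13.2.1 (package/version string is an
# assumption; use your distribution's package-pinning mechanism).
apt install ceph-mds=13.2.1-1bionic

# Restart the MDS daemon so it runs the downgraded version.
systemctl restart ceph-mds.target

# Mark the damaged rank as repaired (rank 0 assumed here; the command
# is the "ceph fs repaired <rank>" quoted above).
ceph fs repaired 0

# Verify the filesystem comes back online.
ceph -s
ceph fs status
```

With multiple active MDS daemons, repeat the downgrade on each MDS host before marking ranks repaired.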
Paul

On Wed., Oct. 17, 2018 at 21:53, Michael Sudnick <[email protected]> wrote:
>
> What exactly are the symptoms of the problem? I use CephFS with 13.2.2 with
> two active MDS daemons, and at least on the surface everything looks fine. Is
> there anything I should avoid doing until 13.2.3?
>
> On Wed, Oct 17, 2018, 14:10 Patrick Donnelly <[email protected]> wrote:
>>
>> On Wed, Oct 17, 2018 at 11:05 AM Alexandre DERUMIER <[email protected]>
>> wrote:
>> >
>> > Hi,
>> >
>> > Is it possible to have more information or an announcement about this problem?
>> >
>> > I'm currently waiting to migrate from Luminous to Mimic (I need the new
>> > quota feature for CephFS).
>> >
>> > Is it safe to upgrade to 13.2.2, or better to wait for 13.2.3? Or install 13.2.1 for now?
>>
>> Upgrading to 13.2.1 would be safe.
>>
>> --
>> Patrick Donnelly
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
