Didn't work for me. I downgraded and the MDS wouldn't start.

I also needed to:

   rados -p cephfs_metadata rm mds0_openfiles.0

or else the MDS daemon crashed.

The crash info didn't show anything useful (to me). I couldn't have figured this out without Zheng Yan's help.
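
For anyone hitting the same thing, here is a rough sketch of the sequence that worked for me, pieced together from the steps above and Paul's note below (rank 0 and the cephfs_metadata pool name are from my cluster; adjust both for yours):

   # downgrade the ceph-mds packages to 13.2.1 and stop the MDS daemons
   # remove the per-rank open-files object that was making the MDS crash
   rados -p cephfs_metadata rm mds0_openfiles.0
   # mark the rank repaired (Paul's command below) and start the MDS again
   ceph fs repaired <rank>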


On 17/10/18 17:36, Paul Emmerich wrote:
CephFS will be offline and show up as "damaged" in ceph -s.
The fix is to downgrade to 13.2.1 and issue a "ceph fs repaired <rank>" command.


Paul

On Wed., 17 Oct. 2018 at 21:53, Michael Sudnick
<michael.sudn...@gmail.com> wrote:
What exactly are the symptoms of the problem? I use CephFS on 13.2.2 with two 
active MDS daemons, and at least on the surface everything looks fine. Is there 
anything I should avoid doing until 13.2.3?

On Wed, Oct 17, 2018, 14:10 Patrick Donnelly <pdonn...@redhat.com> wrote:
On Wed, Oct 17, 2018 at 11:05 AM Alexandre DERUMIER <aderum...@odiso.com> wrote:
Hi,

Is it possible to get more info or an announcement about this problem?

I'm currently waiting to migrate from Luminous to Mimic (I need the new quota 
feature for CephFS).

Is it safe to upgrade to 13.2.2?

Or is it better to wait for 13.2.3? Or install 13.2.1 for now?

Upgrading to 13.2.1 would be safe.

--
Patrick Donnelly


--
Alfredo Daniel Rezinovsky
Director of Information and Communication Technologies
Facultad de Ingeniería - Universidad Nacional de Cuyo

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
