try running "rados -p <cephfs_metadata> touch 1002fc5d22d.00000000"
before mds restart
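
The commit error -2 in the log below is ENOENT: the MDS tried to write dir
0x1002fc5d22d back to the metadata pool and found its backing object missing,
so it forced the filesystem read-only. Creating that object as an empty one
should let the commit succeed once the MDS is restarted. A minimal sketch,
assuming the metadata pool is literally named "cephfs_metadata" (substitute
your actual pool name; the object name is the dir inode number from the log
plus the ".00000000" fragment suffix):

    # check whether the dirfrag's backing object exists in the metadata pool
    rados -p cephfs_metadata stat 1002fc5d22d.00000000

    # if stat reports (2) No such file or directory, create an empty object
    rados -p cephfs_metadata touch 1002fc5d22d.00000000

    # then restart the read-only MDS daemon, e.g.
    # systemctl restart ceph-mds@<name>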

On Thu, May 3, 2018 at 2:31 AM, Pavan, Krish <krish.pa...@nuance.com> wrote:
>
>
> We have a Ceph 12.2.4 CephFS with two active MDS servers, and directories are
> pinned to the MDS servers. Yesterday an MDS server crashed. Once all FUSE clients
> had unmounted, we brought the MDS back online. Both MDS are active now.
>
>
>
> Once it came back, we started to see that one MDS is read-only.
>
> …
>
> 2018-05-01 23:41:22.765920 7f71481b8700  1 mds.0.cache.dir(0x1002fc5d22d)
> commit error -2 v 3
>
> 2018-05-01 23:41:22.765964 7f71481b8700 -1 log_channel(cluster) log [ERR] :
> failed to commit dir 0x1002fc5d22d object, errno -2
>
> 2018-05-01 23:41:22.765974 7f71481b8700 -1 mds.0.222755 unhandled write
> error (2) No such file or directory, force readonly...
>
> 2018-05-01 23:41:22.766013 7f71481b8700  1 mds.0.cache force file system
> read-only
>
> 2018-05-01 23:41:22.766019 7f71481b8700  0 log_channel(cluster) log [WRN] :
> force file system read-only
>
> ….
>
>
>
> In the health warning I see:
>
>     health: HEALTH_WARN
>
>             1 MDSs are read only
>
>             1 MDSs behind on trimming
>
>
>
> There are no errors on the OSDs for the metadata pool.
>
> Will "ceph daemon mds.x scrub_path / force recursive repair" fix this? Or does
> an offline data-scan need to be done?
>
>
>
>
>
>
>
> Regards
>
> Krish
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
