Alright, sounds good.

Only one comment then:
From an IT/ops perspective, all I see is ERR, and that raises red flags. So
the exposure of the message might need some tweaking. In production I like
to be notified of an issue, but also have reassurance that it was fixed
within the system.
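Until the log level changes upstream, one option is to special-case these
on the monitoring side. A minimal sketch (the pattern list and helper names
are hypothetical, not part of Ceph) that tags the known self-healing rstat
ERRs so they don't page anyone:

```python
import re

# Hypothetical monitoring-side filter: downgrade cluster-log ERR lines
# that the MDS is known to repair on its own (per Zheng's reply below).
TRANSIENT_PATTERNS = [
    re.compile(r"\[ERR\] unmatched rstat on"),  # auto-fixed by the MDS
]

def classify(line):
    """Return 'transient' for known self-healing ERRs, else the log level."""
    for pat in TRANSIENT_PATTERNS:
        if pat.search(line):
            return "transient"
    m = re.search(r"\[(\w+)\]", line)
    return m.group(1) if m else "unknown"

line = ("2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched rstat on 605, "
        "inode has n(v70 rc2015-03-16 09:11:34.390905)")
print(classify(line))  # -> transient
```

Crude, but it keeps the alerting quiet for this one message without
suppressing ERR as a whole.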

Best Regards

On Wed, Apr 8, 2015 at 8:10 PM Yan, Zheng <[email protected]> wrote:

> On Thu, Apr 9, 2015 at 7:09 AM, Scottix <[email protected]> wrote:
> > I was testing the upgrade on our dev environment and after I restarted
> the
> > mds I got the following errors.
> >
> > 2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched rstat on 605, inode has
> > n(v70 rc2015-03-16 09:11:34.390905), dirfrags have n(v0 rc2015-03-16
> > 09:11:34.390905 1=0+1)
> > 2015-04-08 15:58:34.056530 mds.0 [ERR] unmatched rstat on 604, inode has
> > n(v69 rc2015-03-31 08:07:09.265241), dirfrags have n(v0 rc2015-03-31
> > 08:07:09.265241 1=0+1)
> > 2015-04-08 15:58:34.056581 mds.0 [ERR] unmatched rstat on 606, inode has
> > n(v67 rc2015-03-16 08:54:36.314790), dirfrags have n(v0 rc2015-03-16
> > 08:54:36.314790 1=0+1)
> > 2015-04-08 15:58:34.056633 mds.0 [ERR] unmatched rstat on 607, inode has
> > n(v57 rc2015-03-16 08:54:46.797240), dirfrags have n(v0 rc2015-03-16
> > 08:54:46.797240 1=0+1)
> > 2015-04-08 15:58:34.056687 mds.0 [ERR] unmatched rstat on 608, inode has
> > n(v23 rc2015-03-16 08:54:59.634299), dirfrags have n(v0 rc2015-03-16
> > 08:54:59.634299 1=0+1)
> > 2015-04-08 15:58:34.056737 mds.0 [ERR] unmatched rstat on 609, inode has
> > n(v62 rc2015-03-16 08:55:06.598286), dirfrags have n(v0 rc2015-03-16
> > 08:55:06.598286 1=0+1)
> > 2015-04-08 15:58:34.056789 mds.0 [ERR] unmatched rstat on 600, inode has
> > n(v101 rc2015-03-16 08:55:16.153175), dirfrags have n(v0 rc2015-03-16
> > 08:55:16.153175 1=0+1)
>
> These errors are likely caused by a bug where rstats are not set to
> correct values when creating a new fs. Nothing to worry about: the MDS
> automatically fixes rstat errors.
>
> >
> > I am not sure if this is an issue or got fixed or something I should
> worry
> > about. But would just like some context around this issue since it came
> up
> > in the ceph -w and other users might see it as well.
> >
> > I have done a lot of "unsafe" stuff on this mds so not to freak anyone
> out
> > if that is the issue.
> >
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
