On Mon, Oct 14, 2019 at 3:14 PM Florian Haas <[email protected]> wrote:
>
> On 14/10/2019 13:29, Dan van der Ster wrote:
> >> Hi Dan,
> >>
> >> what's in the log is (as far as I can see) consistent with the pg query
> >> output:
> >>
> >> 2019-10-14 08:33:57.345 7f1808fb3700  0 log_channel(cluster) log [DBG] :
> >> 10.10d scrub starts
> >> 2019-10-14 08:33:57.345 7f1808fb3700 -1 log_channel(cluster) log [ERR] :
> >> 10.10d scrub : stat mismatch, got 0/1 objects, 0/0 clones, 0/1 dirty,
> >> 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 0/11 bytes,
> >> 0/0 manifest objects, 0/0 hit_set_archive bytes.
> >> 2019-10-14 08:33:57.345 7f1808fb3700 -1 log_channel(cluster) log [ERR] :
> >> 10.10d scrub 1 errors
> >>
> >> Have you seen this before?
> >
> > Yes, occasionally we see stat mismatches -- but repair has always
> > fixed them definitively for us.
>
> Not here, sadly. That error keeps coming back, always in the same PG,
> and only in that PG.
>
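For what it's worth, when a mismatch keeps coming back in the same PG,
the first thing I'd check is whether deep-scrub actually records an
inconsistent object or whether it's purely a stats disagreement.
Roughly something like this (the PG id 10.10d is taken from your log;
rados needs to run from a node with client admin access, and the exact
output format varies by release):

  # list any inconsistent objects recorded for that PG
  rados list-inconsistent-obj 10.10d --format=json-pretty

  # kick off a repair and watch for it to complete
  ceph pg repair 10.10d
  ceph -w | grep 10.10d
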
> > Are you using PG autoscaling? There's a known issue there which
> > generates stat mismatches.
>
> I'd appreciate a link to more information if you have one, but a PG
> autoscaling problem wouldn't really square with this issue having
> already appeared in pre-Nautilus releases. :)

https://github.com/ceph/ceph/pull/30479
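
If you want to rule the autoscaler in or out, something like this
(a rough sketch -- autoscale-status needs the pg_autoscaler mgr module
enabled, and <pool> is a placeholder for whichever pool PG 10.10d
belongs to):

  # see which pools currently have the autoscaler enabled
  ceph osd pool autoscale-status

  # check, and if needed switch off, autoscaling for that pool
  ceph osd pool get <pool> pg_autoscale_mode
  ceph osd pool set <pool> pg_autoscale_mode off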

-- dan
>
> Cheers,
> Florian
>
>
