On Wed, Jun 17, 2015 at 8:56 AM, Dan van der Ster <d...@vanderster.com> wrote:
> Hi,
>
> After upgrading to 0.94.2 yesterday on our test cluster, we've had 3
> PGs go inconsistent.
>
> First, immediately after we updated the OSDs, PG 34.10d went inconsistent:
>
> 2015-06-16 13:42:19.086170 osd.52 137.138.39.211:6806/926964 2 :
> cluster [ERR] 34.10d scrub stat mismatch, got 4/5 objects, 0/0 clones,
> 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 136/136
> bytes,0/0 hit_set_archive bytes.
>
> Second, an hour later, PG 55.10d went inconsistent:
>
> 2015-06-16 14:27:58.336550 osd.303 128.142.23.56:6812/879385 10 :
> cluster [ERR] 55.10d deep-scrub stat mismatch, got 0/1 objects, 0/0
> clones, 0/1 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 0/0
> bytes,0/0 hit_set_archive bytes.
>
> Then last night 36.10d suffered the same fate:
>
> 2015-06-16 23:05:17.857433 osd.30 188.184.18.39:6800/2260103 16 :
> cluster [ERR] 36.10d deep-scrub stat mismatch, got 5833/5834 objects,
> 0/0 clones, 5758/5759 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0
> whiteouts, 24126649216/24130843520 bytes,0/0 hit_set_archive bytes.
>
>
> In all cases, one object is missing. In all cases, the PG id is 10d.
> Is this an epic coincidence, or could something else be going on here?

I'm betting on something else. What OSDs is each PG mapped to?
It looks like each of them is missing one object on some of the OSDs;
what are the objects?
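For reference, something along these lines should answer both questions.
This is only a sketch: it assumes a default filestore layout under
/var/lib/ceph/osd, and the OSD ids (52, 101) are placeholders -- substitute
the ids from each PG's acting set.

  # Which OSDs is each inconsistent PG mapped to?
  ceph health detail | grep inconsistent
  ceph pg map 34.10d
  ceph pg map 55.10d
  ceph pg map 36.10d

  # To find the missing object, list the PG's contents on each replica
  # (run on the host of each OSD in the acting set) and diff the lists;
  # filestore encodes the object name in the on-disk filename:
  find /var/lib/ceph/osd/ceph-52/current/34.10d_head -type f | sort > /tmp/objs.osd52
  find /var/lib/ceph/osd/ceph-101/current/34.10d_head -type f | sort > /tmp/objs.osd101
  diff /tmp/objs.osd52 /tmp/objs.osd101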
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
