On Thu, Jul 19, 2018 at 11:51 AM Robert Sander
<r.san...@heinlein-support.de> wrote:
>
> On 19.07.2018 11:15, Ronny Aasen wrote:
>
> > Did you upgrade from 12.2.5 or 12.2.6?
>
> Yes.
>
> > sounds like you hit the reason for the 12.2.7 release
> >
> > read : https://ceph.com/releases/12-2-7-luminous-released/
> >
> > 12.2.8 should bring features that can deal with the "objects are
> > in sync but checksums are wrong" scenario.
>
> I had already read that before the upgrade but did not consider myself
> to be affected by the bug.
>
> The pools with the inconsistent PGs only have RBDs stored and not CephFS
> nor RGW data.
>
> I have restarted the OSDs with "osd skip data digest = true", since
> "ceph tell" is not able to inject this argument into the running
> processes.
>
> Let's see if this works out.
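
Regarding the "osd skip data digest = true" workaround above: a minimal
sketch of how it is typically made persistent (assuming a standard
/etc/ceph/ceph.conf and systemd-managed OSDs; the OSD id and unit name
will differ per host):

```
# Add to /etc/ceph/ceph.conf under [osd] so the setting survives restarts:
#
#   [osd]
#   osd skip data digest = true
#
# Then restart each OSD daemon so it takes effect
# (example for osd.0 on a systemd-managed host):
systemctl restart ceph-osd@0
```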

If you upgraded from 12.2.6 and have BlueStore OSDs, then you would be
affected by the *first* of the two issues described in the release
notes, regardless of RGW/RBD/CephFS use cases.
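
If you want to confirm which of your OSDs are BlueStore, a quick check
(assuming a Luminous cluster, where the backend is reported in the OSD
metadata) is something like:

```
# List the object store backend reported by every OSD;
# "bluestore" vs "filestore" shows up in the osd_objectstore field:
ceph osd metadata | grep '"osd_objectstore"'

# Or check a single OSD, e.g. osd.0:
ceph osd metadata 0 | grep osd_objectstore
```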

This paragraph applies:

```
If your cluster includes BlueStore OSDs and was affected, deep scrubs
will generate errors about mismatched CRCs for affected objects.
Currently the repair operation does not know how to correct them
(since all replicas do not match the expected checksum it does not
know how to proceed). These warnings are harmless in the sense that
IO is not affected and the replicas are all still in sync. The number
of affected objects is likely to drop (possibly to zero) on their own
over time as those objects are modified. We expect to include a scrub
improvement in v12.2.8 to clean up any remaining objects.
```
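
To see which PGs and objects are currently flagged by the scrub errors
described above, something along these lines should work (the exact
error fields in the output vary between point releases):

```
# Show the PGs currently reported as inconsistent:
ceph health detail | grep inconsistent

# Inspect the inconsistent objects in one of those PGs
# (substitute a real PG id from the output above):
rados list-inconsistent-obj <pgid> --format=json-pretty
```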
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
