Hi Stefan,
Hi Everyone,
I am in a situation similar to the one you were in a year ago. During some
backfilling we removed an old snapshot, and with the next deep-scrub we
ended up with the same log as you did:
> deep-scrub 2.61b
2:d8736536:::rbd_data.e22260238e1f29.0046d527:177f6 : is an
unexpected clone
Am 26.02.2018 um 09:54 schrieb Saverio Proto:
> Hello Stefan,
>
> ceph-object-tool does not exist on my setup; do you mean the command
> /usr/bin/ceph-objectstore-tool that is installed with the ceph-osd package?
Yes, sorry, I meant ceph-objectstore-tool. With that you can
remove objects.
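For reference, a removal with ceph-objectstore-tool goes roughly like this. This is only a sketch: the OSD id, data path, PG id, and object spec below are placeholders taken from the examples in this thread, and you must adapt them to your cluster. The OSD has to be stopped while the tool runs.

```shell
# Stop the OSD that holds the bad clone (the tool needs exclusive access)
systemctl stop ceph-osd@20

# List the objects in the affected PG; this prints one JSON spec per
# object, including the snapid of each clone
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
    --pgid 3.61a --op list

# Remove the unexpected clone, passing the exact JSON spec printed by
# "list" (the values below are placeholders, not real ones)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
    '["3.61a",{"oid":"rbd_data.PLACEHOLDER","key":"","snapid":123,"hash":12345,"max":0,"pool":3,"namespace":""}]' \
    remove

# Bring the OSD back
systemctl start ceph-osd@20
```

On filestore OSDs you may additionally need --journal-path pointing at the OSD's journal.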
Hello Stefan,
ceph-object-tool does not exist on my setup; do you mean the command
/usr/bin/ceph-objectstore-tool that is installed with the ceph-osd package?
I have the following situation here in Ceph Luminous:
2018-02-26 07:15:30.066393 7f0684acb700 -1 log_channel(cluster) log
[ERR] : 5.111f
I encountered this same issue on two different clusters running Hammer 0.94.9
last week. In both cases I was able to resolve it by deleting (moving) all
replicas of the unexpected clone manually and issuing a pg repair. Which
version did you see this on? A call stack for the resulting crash would help.
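The "remove on every replica, then repair" procedure described above can be sketched as follows; the PG id 3.61a is taken from the log in this thread, and these commands assume the clone has already been removed from each OSD in the acting set with ceph-objectstore-tool.

```shell
# Find which OSDs hold replicas of the PG (the acting set) -- the manual
# removal has to be repeated on each of them
ceph pg map 3.61a

# Once the clone is gone from all replicas, repair the PG ...
ceph pg repair 3.61a

# ... and deep-scrub it again to confirm the inconsistency is cleared
ceph pg deep-scrub 3.61a
```

Keeping a copy of the removed object files (moving rather than deleting, as mentioned above) is a sensible safety net in case the repair goes wrong.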
Hello Greg,
Am 08.08.2017 um 11:56 schrieb Gregory Farnum:
> On Mon, Aug 7, 2017 at 11:55 PM Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hello,
>
> how can I fix this one:
>
> 2017-08-08 08:42:52.265321 osd.20 [ERR] repair 3.61a
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 is an
> unexpected clone
On Mon, Aug 7, 2017 at 11:55 PM Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:
> Hello,
>
> how can I fix this one:
>
> 2017-08-08 08:42:52.265321 osd.20 [ERR] repair 3.61a
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 is an
> unexpected clone
> 2017-08-08 08:43:04.914640 mon.0 [INF] HEALTH_ERR; 1 pgs inconsistent; 1
> pgs repair; 1 scrub errors
Hello,
how can I fix this one:
2017-08-08 08:42:52.265321 osd.20 [ERR] repair 3.61a
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 is an
unexpected clone
2017-08-08 08:43:04.914640 mon.0 [INF] HEALTH_ERR; 1 pgs inconsistent; 1
pgs repair; 1 scrub errors
2017-08-08 08:43:33.470246 os
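Before attempting any manual removal, it helps to see exactly what the scrub flagged. A sketch, assuming a release with the inconsistency-listing support (Jewel and later; the Luminous cluster mentioned above qualifies):

```shell
# Show which PGs are inconsistent and why
ceph health detail

# List the objects the scrub found inconsistent in a given PG
# (replace 3.61a with the PG id from your own HEALTH_ERR output)
rados list-inconsistent-obj 3.61a --format=json-pretty
```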