No, this does not help (((
I tried to find the data, but it looks like the object either exists with the
same timestamp on all OSDs or is missing from all OSDs...

So I need advice on what to do next...
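
For reference, this is roughly how I have been looking for the copies (just a
sketch, not a definitive recipe; it assumes filestore OSDs with the default
/var/lib/ceph/osd/ceph-<id> data path, that the pool behind pg 2.x is called
rbd, and it uses one of the rbd_data objects from the scrub errors as an
example):

# map the object to its acting set
ceph osd map rbd rbd_data.1631755377d7e.00000000000004da

# then, on each OSD host in the acting set, compare the on-disk copies
# (size, mtime, checksum) between the replicas
find /var/lib/ceph/osd/ceph-56/current/2.490_head/ \
  -name '*rbd_data.1631755377d7e.00000000000004da*' -exec ls -l {} \;
find /var/lib/ceph/osd/ceph-56/current/2.490_head/ \
  -name '*rbd_data.1631755377d7e.00000000000004da*' -exec md5sum {} \;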

On Tuesday, 18 August 2015, Abhishek L wrote:

>
> Voloshanenko Igor writes:
>
> > Hi Irek, please read carefully )))
> > Your proposal was the first thing I tried... That's why I asked for
> > help... (
> >
> > 2015-08-18 8:34 GMT+03:00 Irek Fasikhov <malm...@gmail.com>:
> >
> >> Hi, Igor.
> >>
> >> You need to repair the PG.
> >>
> >> for i in `ceph pg dump | grep inconsistent | grep -v 'inconsistent+repair' | awk '{print $1}'`; do ceph pg repair $i; done
> >>
> >> Best regards, Irek Fasikhov
> >> Mob.: +79229045757
> >>
> >> 2015-08-18 8:27 GMT+03:00 Voloshanenko Igor <igor.voloshane...@gmail.com>:
> >>
> >>> Hi all, on our production cluster, due to heavy rebalancing ((( we have
> >>> 2 pgs in an inconsistent state...
> >>>
> >>> root@temp:~# ceph health detail | grep inc
> >>> HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
> >>> pg 2.490 is active+clean+inconsistent, acting [56,15,29]
> >>> pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
> >>>
> >>> From the OSD logs, after a repair attempt:
> >>>
> >>> root@test:~# ceph pg dump | grep -i incons | cut -f 1 | while read i; do ceph pg repair ${i}; done
> >>> dumped all in format plain
> >>> instructing pg 2.490 on osd.56 to repair
> >>> instructing pg 2.c4 on osd.56 to repair
> >>>
> >>> /var/log/ceph/ceph-osd.56.log:51:2015-08-18 07:26:37.035910 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 f5759490/rbd_data.1631755377d7e.00000000000004da/head//2 expected clone 90c59490/rbd_data.eb486436f2beb.0000000000007a65/141//2
> >>> /var/log/ceph/ceph-osd.56.log:52:2015-08-18 07:26:37.035960 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 fee49490/rbd_data.12483d3ba0794b.000000000000522f/head//2 expected clone f5759490/rbd_data.1631755377d7e.00000000000004da/141//2
> >>> /var/log/ceph/ceph-osd.56.log:53:2015-08-18 07:26:37.036133 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 a9b39490/rbd_data.12483d3ba0794b.00000000000037b3/head//2 expected clone fee49490/rbd_data.12483d3ba0794b.000000000000522f/141//2
> >>> /var/log/ceph/ceph-osd.56.log:54:2015-08-18 07:26:37.036243 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 bac19490/rbd_data.1238e82ae8944a.000000000000032e/head//2 expected clone a9b39490/rbd_data.12483d3ba0794b.00000000000037b3/141//2
> >>> /var/log/ceph/ceph-osd.56.log:55:2015-08-18 07:26:37.036289 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 98519490/rbd_data.123e9c2ae8944a.0000000000000807/head//2 expected clone bac19490/rbd_data.1238e82ae8944a.000000000000032e/141//2
> >>> /var/log/ceph/ceph-osd.56.log:56:2015-08-18 07:26:37.036314 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 c3c09490/rbd_data.1238e82ae8944a.0000000000000c2b/head//2 expected clone 98519490/rbd_data.123e9c2ae8944a.0000000000000807/141//2
> >>> /var/log/ceph/ceph-osd.56.log:57:2015-08-18 07:26:37.036363 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 28809490/rbd_data.edea7460fe42b.00000000000001d9/head//2 expected clone c3c09490/rbd_data.1238e82ae8944a.0000000000000c2b/141//2
> >>> /var/log/ceph/ceph-osd.56.log:58:2015-08-18 07:26:37.036432 7f94663b3700 -1 log_channel(cluster) log [ERR] : deep-scrub 2.490 e1509490/rbd_data.1423897545e146.00000000000009a6/head//2 expected clone 28809490/rbd_data.edea7460fe42b.00000000000001d9/141//2
> >>> /var/log/ceph/ceph-osd.56.log:59:2015-08-18 07:26:38.548765 7f94663b3700 -1 log_channel(cluster) log [ERR] : 2.490 deep-scrub 17 errors
> >>>
> >>> So, how can I resolve the "expected clone" situation by hand?
> >>> Thanks in advance!
>
> I've had an inconsistent pg once, but it was a different sort of error
> (some sort of digest mismatch, where the secondary object copies had
> later timestamps). It was fixed by moving the object away and restarting
> the osd; the inconsistency went away when the osd peered, similar to what
> is described in Sebastien Han's blog[1].
>
> I'm guessing the same method will solve this error as well, but I'm not
> completely sure; maybe someone else who has seen this particular error
> can guide you better.
>
> [1]:
> http://www.sebastien-han.fr/blog/2015/04/27/ceph-manually-repair-object/
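>
> For completeness, the rough sequence from [1] is something like the
> following (only a sketch, not a guaranteed fix; it assumes a filestore
> OSD managed by upstart, and you would substitute your own osd id, pg id
> and object name, e.g. osd.56, pg 2.490 and one of the rbd_data objects
> from your scrub errors):
>
> # stop the osd holding the bad copy (command depends on your init system)
> stop ceph-osd id=56
> # flush its journal
> ceph-osd -i 56 --flush-journal
> # locate the offending object file and move it out of the pg directory
> find /var/lib/ceph/osd/ceph-56/current/2.490_head/ \
>   -name '*rbd_data.1631755377d7e.00000000000004da*'
> mv <file found above> /root/backup/
> # start the osd again and, once it has peered, ask it to repair the pg
> start ceph-osd id=56
> ceph pg repair 2.490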
>
> --
> Abhishek
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
