Re: [ceph-users] how to repair active+clean+inconsistent+snaptrim?

2017-08-05 Thread Stefan Priebe - Profihost AG
I tried to remove the whole image, as I didn't need it.

But it seems its objects don't get cleared: 106dd406b8b4567 was the id of
the old, deleted rbd image.

ceph-57]#  find . -name "*106dd406b8b4567*" -exec ls -la "{}" \;
-rw-r--r-- 1 ceph ceph 4194304 Aug  5 09:40
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.2315__9d5e4_9E65861A__3
-rw-r--r-- 1 ceph ceph 4194304 Aug  5 09:40
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.2315__9d84a_9E65861A__3
-rw-r--r-- 1 ceph ceph 0 Aug  5 11:47
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.2315__snapdir_9E65861A__3
-rw-r--r-- 1 ceph ceph 4194304 Aug  5 11:50
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3
-rw-r--r-- 1 ceph ceph 1400832 Aug  5 09:40
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d5e4_BCB2A61A__3
-rw-r--r-- 1 ceph ceph 1400832 Aug  5 09:40
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d84a_BCB2A61A__3
-rw-r--r-- 1 ceph ceph 0 Aug  5 11:47
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__snapdir_BCB2A61A__3
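
If those really are leftovers of the deleted image, it might be cleaner to
enumerate and remove them per OSD through ceph-objectstore-tool rather than
deleting the filestore files directly. A rough, untested sketch (the OSD id
57, the default data/journal paths and the placeholder <json-from-op-list>
are assumptions; the tool only works while the OSD is stopped):

  ceph osd set noout
  systemctl stop ceph-osd@57

  # list all objects of the deleted image that are still in pg 3.61a
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-57 \
      --journal-path /var/lib/ceph/osd/ceph-57/journal \
      --pgid 3.61a --op list | grep 106dd406b8b4567

  # remove each of them, passing the JSON line printed by --op list
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-57 \
      --journal-path /var/lib/ceph/osd/ceph-57/journal \
      '<json-from-op-list>' remove

  systemctl start ceph-osd@57
  ceph osd unset noout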

Stefan

On 05.08.2017 at 11:02, Stefan Priebe - Profihost AG wrote:
> Is there a way to remove that object from all OSDs? Since it is an
> unexpected clone, removing it should not cause any harm.
> 
> Greets,
> Stefan
> 
> On 05.08.2017 at 09:03, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> I'm trying to fix a cluster where one PG is in the
>> active+clean+inconsistent+snaptrim state.
>>
>> The log says:
>> 2017-08-05 08:57:43.240030 osd.20 [ERR] 3.61a repair 0 missing, 1
>> inconsistent objects
>> 2017-08-05 08:57:43.240044 osd.20 [ERR] 3.61a repair 4 errors, 2 fixed
>> 2017-08-05 08:57:43.242828 osd.20 [ERR] trim_object Snap 9d455 not in clones
>>
>> find says:
>>
>> osd.20]# find . -name "*9d455*" -exec ls -la "{}" \;
>> -rw-r--r-- 1 ceph ceph 0 Aug  5 08:57
>> ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3
>>
>> osd.70]# find . -name "*9d455*" -exec ls -la "{}" \;
>> -rw-r--r-- 1 ceph ceph 4194304 Aug  5 08:57
>> ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3
>>
>> osd.57]# find . -name "*9d455*" -exec ls -la "{}" \;
>> # no file found
>>
>> I tried the following steps to resolve this issue:
>>
>> stop osd.y
>> copy file from osd.x to osd.y
>> start osd.y
>>
>> But none of that helps; the PG stays in this state.
>>
>> The output of a ceph pg repair is:
>> 2017-08-05 08:57:25.715812 osd.20 [ERR] 3.61a shard 20: soid
>> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 data_digest
>> 0x != data_digest 0xbd9585ca from auth oi
>> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
>> osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
>> 26585239 dd bd9585ca od  alloc_hint [0 0]), size 0 != size
>> 4194304 from auth oi
>> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
>> osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
>> 26585239 dd bd9585ca od  alloc_hint [0 0])
>> 2017-08-05 08:57:25.715817 osd.20 [ERR] 3.61a shard 70: soid
>> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 data_digest
>> 0x43d61c5d != data_digest 0x from shard 20, data_digest
>> 0x43d61c5d != data_digest 0xbd9585ca from auth oi
>> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
>> osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
>> 26585239 dd bd9585ca od  alloc_hint [0 0]), size 4194304 != size
>> 0 from shard 20
>> 2017-08-05 08:57:25.715903 osd.20 [ERR] repair 3.61a
>> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 is an
>> unexpected clone
>>
>> How can I get this fixed?
>>
>> Greets,
>> Stefan


Re: [ceph-users] how to repair active+clean+inconsistent+snaptrim?

2017-08-05 Thread Stefan Priebe - Profihost AG
Is there a way to remove that object from all OSDs? Since it is an
unexpected clone, removing it should not cause any harm.
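
One approach might be ceph-objectstore-tool, run per affected OSD while that
OSD is stopped (an untested sketch; the OSD id 20, the default data/journal
paths and the placeholder <json-from-op-list> are assumptions, and the exact
object spec would have to be taken from the tool's --op list output):

  systemctl stop ceph-osd@20
  # remove the clone object, passing the JSON line printed by --op list
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
      --journal-path /var/lib/ceph/osd/ceph-20/journal \
      '<json-from-op-list>' remove
  systemctl start ceph-osd@20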

Greets,
Stefan

On 05.08.2017 at 09:03, Stefan Priebe - Profihost AG wrote:
> Hello,
> 
> I'm trying to fix a cluster where one PG is in the
> active+clean+inconsistent+snaptrim state.
> 
> The log says:
> 2017-08-05 08:57:43.240030 osd.20 [ERR] 3.61a repair 0 missing, 1
> inconsistent objects
> 2017-08-05 08:57:43.240044 osd.20 [ERR] 3.61a repair 4 errors, 2 fixed
> 2017-08-05 08:57:43.242828 osd.20 [ERR] trim_object Snap 9d455 not in clones
> 
> find says:
> 
> osd.20]# find . -name "*9d455*" -exec ls -la "{}" \;
> -rw-r--r-- 1 ceph ceph 0 Aug  5 08:57
> ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3
> 
> osd.70]# find . -name "*9d455*" -exec ls -la "{}" \;
> -rw-r--r-- 1 ceph ceph 4194304 Aug  5 08:57
> ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3
> 
> osd.57]# find . -name "*9d455*" -exec ls -la "{}" \;
> # no file found
> 
> I tried the following steps to resolve this issue:
> 
> stop osd.y
> copy file from osd.x to osd.y
> start osd.y
> 
> But none of that helps; the PG stays in this state.
> 
> The output of a ceph pg repair is:
> 2017-08-05 08:57:25.715812 osd.20 [ERR] 3.61a shard 20: soid
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 data_digest
> 0x != data_digest 0xbd9585ca from auth oi
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
> osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
> 26585239 dd bd9585ca od  alloc_hint [0 0]), size 0 != size
> 4194304 from auth oi
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
> osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
> 26585239 dd bd9585ca od  alloc_hint [0 0])
> 2017-08-05 08:57:25.715817 osd.20 [ERR] 3.61a shard 70: soid
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 data_digest
> 0x43d61c5d != data_digest 0x from shard 20, data_digest
> 0x43d61c5d != data_digest 0xbd9585ca from auth oi
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
> osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
> 26585239 dd bd9585ca od  alloc_hint [0 0]), size 4194304 != size
> 0 from shard 20
> 2017-08-05 08:57:25.715903 osd.20 [ERR] repair 3.61a
> 3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 is an
> unexpected clone
> 
> How can I get this fixed?
> 
> Greets,
> Stefan


[ceph-users] how to repair active+clean+inconsistent+snaptrim?

2017-08-05 Thread Stefan Priebe - Profihost AG
Hello,

I'm trying to fix a cluster where one PG is in the
active+clean+inconsistent+snaptrim state.

The log says:
2017-08-05 08:57:43.240030 osd.20 [ERR] 3.61a repair 0 missing, 1
inconsistent objects
2017-08-05 08:57:43.240044 osd.20 [ERR] 3.61a repair 4 errors, 2 fixed
2017-08-05 08:57:43.242828 osd.20 [ERR] trim_object Snap 9d455 not in clones
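
A structured view of what scrub actually recorded might help here. Assuming
a Jewel or newer cluster and a recent (deep-)scrub of the PG, something like:

  ceph health detail
  rados list-inconsistent-obj 3.61a --format=json-pretty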

find says:

osd.20]# find . -name "*9d455*" -exec ls -la "{}" \;
-rw-r--r-- 1 ceph ceph 0 Aug  5 08:57
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3

osd.70]# find . -name "*9d455*" -exec ls -la "{}" \;
-rw-r--r-- 1 ceph ceph 4194304 Aug  5 08:57
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.018c__9d455_BCB2A61A__3

osd.57]# find . -name "*9d455*" -exec ls -la "{}" \;
# no file found
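
It might also be worth comparing this with what the cluster itself thinks the
object's snapshots are. A sketch (the pool name and the full object name are
placeholders, since only a truncated name is shown above):

  rados -p <pool> listsnaps <rbd_data.106dd406b8b4567.full-object-name>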

I tried the following steps to resolve this issue:

stop osd.y
copy file from osd.x to osd.y
start osd.y

But none of that helps; the PG stays in this state.
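
If the copy is retried, going through ceph-objectstore-tool instead of cp
might be safer, since it writes via the objectstore rather than the raw
filestore directory. An untested sketch (OSD ids taken from above; default
paths and <json-from-op-list> are assumptions, and this only transfers the
object data, not its attributes):

  # on osd.70, which holds the good 4 MiB copy, with that OSD stopped:
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-70 \
      --journal-path /var/lib/ceph/osd/ceph-70/journal \
      '<json-from-op-list>' get-bytes /tmp/9d455.bin

  # copy /tmp/9d455.bin over if the OSDs live on different hosts, then
  # on osd.20, which holds the zero-sized copy, with that OSD stopped:
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
      --journal-path /var/lib/ceph/osd/ceph-20/journal \
      '<json-from-op-list>' set-bytes /tmp/9d455.bin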

The output of a ceph pg repair is:
2017-08-05 08:57:25.715812 osd.20 [ERR] 3.61a shard 20: soid
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 data_digest
0x != data_digest 0xbd9585ca from auth oi
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
26585239 dd bd9585ca od  alloc_hint [0 0]), size 0 != size
4194304 from auth oi
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
26585239 dd bd9585ca od  alloc_hint [0 0])
2017-08-05 08:57:25.715817 osd.20 [ERR] 3.61a shard 70: soid
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 data_digest
0x43d61c5d != data_digest 0x from shard 20, data_digest
0x43d61c5d != data_digest 0xbd9585ca from auth oi
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455(818765'26612094
osd.20.0:562819 [9d455] dirty|data_digest|omap_digest s 4194304 uv
26585239 dd bd9585ca od  alloc_hint [0 0]), size 4194304 != size
0 from shard 20
2017-08-05 08:57:25.715903 osd.20 [ERR] repair 3.61a
3:58654d3d:::rbd_data.106dd406b8b4567.018c:9d455 is an
unexpected clone
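
The combination of "trim_object Snap 9d455 not in clones" and "unexpected
clone" suggests that the clone object for snap 9d455 is still on disk while
the head object's SnapSet no longer references it, so snaptrim keeps tripping
over it. If that reading is right, possible next steps might be (untested;
list-inconsistent-snapset needs a Jewel or newer cluster and a recent scrub):

  rados list-inconsistent-snapset 3.61a --format=json-pretty

  # after removing the stray clone object from each OSD that still has it
  # (e.g. with ceph-objectstore-tool while that OSD is stopped):
  ceph pg deep-scrub 3.61a
  ceph pg repair 3.61a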

How can I get this fixed?

Greets,
Stefan