Hello Greg,

I deleted the image 12 hours ago, and it was only 120GB... Do you think I
should wait longer?

Yes, that OSD is part of the PG:
ceph pg map 3.61a
osdmap e819444 pg 3.61a (3.61a) -> up [20,57,70] acting [20,57,70]

but:
# ceph pg ls inconsistent
pg_stat:          3.61a
objects:          1598
mip:              0
degr:             0
misp:             0
unf:              0
bytes:            5695628800
log:              3030
disklog:          3030
state:            active+clean+inconsistent+snaptrim
state_stamp:      2017-08-05 20:59:31.980515
v:                819444'26619516
reported:         819444:33354835
up:               [20,57,70]
up_primary:       20
acting:           [20,57,70]
acting_primary:   20
last_scrub:       819444'26619344
scrub_stamp:      2017-08-05 20:59:31.980489
last_deep_scrub:  819444'26619344
deep_scrub_stamp: 2017-08-05 20:59:31.980489
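
For reference, the individual errors on that PG can also be dumped
directly (standard rados commands, assuming a Jewel or later cluster):
# rados list-inconsistent-obj 3.61a --format=json-pretty
# rados list-inconsistent-snapset 3.61a --format=json-pretty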

and a repair does not work. It complains about those unknown clones which I
listed below.
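
For reference, the repair was triggered the usual way and watched in the
primary OSD's log (standard commands; the exact invocation and the default
log path on the node hosting osd.20 are assumed here):
# ceph pg repair 3.61a
# tail -f /var/log/ceph/ceph-osd.20.log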

Greets,
Stefan


On 05.08.2017 at 21:43, Gregory Farnum wrote:
> Is OSD 20 actually a member of the PG right now? It could be stray data
> that is slowly getting cleaned up.
> 
> Also, you've got "snapdir" listings there. Those indicate the object is
> snapshotted but the "head" got deleted. So it may just be delayed
> cleanup of snapshots.
> 
> On Sat, Aug 5, 2017 at 12:34 PM Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
>     Hello,
> 
>     today I deleted an rbd image which had the following
>     prefix:
> 
>             block_name_prefix: rbd_data.106dd406b8b4567
> 
>     The rm command went fine.
> 
>     Also, the rados ls command does not show any objects with that string:
>     # rados -p rbd ls | grep 106dd406b8b4567
> 
>     But find on an OSD still shows them?
> 
>     osd.20]# find . -name "*106dd406b8b4567*" -exec ls -la "{}" \;
>     -rw-r--r-- 1 ceph ceph 4194304 Aug  5 09:32 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.0000000000002315__9d5e4_9E65861A__3
>     -rw-r--r-- 1 ceph ceph 4194304 Aug  5 09:36 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.0000000000002315__9d84a_9E65861A__3
>     -rw-r--r-- 1 ceph ceph 0 Aug  5 11:47 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.0000000000002315__snapdir_9E65861A__3
>     -rw-r--r-- 1 ceph ceph 4194304 Aug  5 09:49 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.000000000000018c__9d455_BCB2A61A__3
>     -rw-r--r-- 1 ceph ceph 1400832 Aug  5 09:32 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.000000000000018c__9d5e4_BCB2A61A__3
>     -rw-r--r-- 1 ceph ceph 1400832 Aug  5 09:32 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.000000000000018c__9d84a_BCB2A61A__3
>     -rw-r--r-- 1 ceph ceph 0 Aug  5 11:47 ./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_A/rbd\udata.106dd406b8b4567.000000000000018c__snapdir_BCB2A61A__3
> 
>     Greets,
>     Stefan