Hi,

Is there any config in Ceph that blocks or prevents space reclaim?
I tested on one pool that has only one image, with 1.8 TiB in use.


rbd $p du im/root
warning: fast-diff map is not enabled for root. operation may be slow.
NAME     PROVISIONED     USED
root     2.2 TiB         1.8 TiB



I have already removed all snapshots, and the pool now holds only this one image.
I ran fstrim over the filesystem (XFS) and also tried rbd sparsify im/root
(I don't know exactly what it does, but it mentions reclaiming something).
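For reference, this is roughly what I ran (the mount point /mnt/root is just a placeholder for the actual XFS mount):

```shell
# Ask the filesystem to discard unused blocks so the underlying
# RBD image can release them (mount point is a placeholder)
fstrim -v /mnt/root

# Deallocate fully-zeroed extents of the image to make it sparse
rbd sparsify im/root
```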
The pool still shows 6.9 TiB used, which makes no sense, right? It should be
at most 3.6 TiB (1.8 * 2) according to its replica count.



POOLS:
    POOL     ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR
    im       19     32      3.5 TiB     918.34k     6.9 TiB     4.80      69 TiB        N/A               10 TiB          918.34k     0 B            0 B



I think some of the other pools now have this issue too; we cleaned up a lot,
but the space does not seem to have been reclaimed.
I estimate that more than 50 TiB should be reclaimable; the actual usage of
this cluster is much less than the currently reported number.
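For reference, ways to double-check whether anything is still holding space in the pool (pool/image names as above):

```shell
# Any snapshots left on the image?
rbd snap ls im/root

# Any deleted-but-not-purged images sitting in the pool's trash?
rbd trash ls im

# Per-pool usage breakdown
ceph df detail
```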

Thank you for your help.

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
