Hey guys, this is probably a really silly question, but I’m trying to reconcile 
where all of the space has gone in one of the clusters I’m responsible for.

The cluster is made up of 36 × 2 TB SSDs across 3 nodes (12 OSDs per node), all 
using FileStore on XFS. We are running Ceph Luminous 12.2.8 on this particular 
cluster. The only pool holding a significant amount of data is the “rbd” pool, 
which shows 7.09 TiB used. With a replication size of 3, I would expect raw used 
to be close to 21 TiB, but it’s actually closer to 35 TiB. Some additional 
details are below. Any thoughts?
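
For reference, the arithmetic I’m working from (assuming the rbd pool is the only 
meaningful consumer of raw space, since the other pools add well under 0.1 TiB):

    7.09 TiB x 3 replicas        ~= 21.3 TiB expected raw used
    35.1 TiB reported - 21.3 TiB ~= 13.8 TiB unaccounted for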

[cluster]root@dashboard:~# ceph df
GLOBAL:
     SIZE        AVAIL       RAW USED     %RAW USED
     62.8TiB     27.8TiB      35.1TiB         55.81
POOLS:
     NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
     rbd                            0      7.09TiB     53.76       6.10TiB     3056783
     data                           3      29.4GiB      0.47       6.10TiB        7918
     metadata                       4      57.2MiB         0       6.10TiB          95
     .rgw.root                      5      1.09KiB         0       6.10TiB           4
     default.rgw.control            6           0B         0       6.10TiB           8
     default.rgw.meta               7           0B         0       6.10TiB           0
     default.rgw.log                8           0B         0       6.10TiB         207
     default.rgw.buckets.index      9           0B         0       6.10TiB           0
     default.rgw.buckets.data       10          0B         0       6.10TiB           0
     default.rgw.buckets.non-ec     11          0B         0       6.10TiB           0

[cluster]root@dashboard:~# ceph --version
ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)

[cluster]root@dashboard:~# ceph osd dump | grep 'replicated size'
pool 0 'rbd' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins 
pg_num 682 pgp_num 682 last_change 414873 flags hashpspool 
min_write_recency_for_promote 1 stripe_width 0 application rbd
pool 3 'data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins 
pg_num 682 pgp_num 682 last_change 409614 flags hashpspool 
crash_replay_interval 45 min_write_recency_for_promote 1 stripe_width 0 
application cephfs
pool 4 'metadata' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 682 pgp_num 682 last_change 409617 flags hashpspool 
min_write_recency_for_promote 1 stripe_width 0 application cephfs
pool 5 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 409 pgp_num 409 last_change 409710 lfor 0/336229 flags 
hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 
object_hash rjenkins pg_num 409 pgp_num 409 last_change 409711 lfor 0/336232 
flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 409 pgp_num 409 last_change 409713 lfor 0/336235 flags 
hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 409 pgp_num 409 last_change 409712 lfor 0/336238 flags 
hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.buckets.index' replicated size 3 min_size 1 crush_rule 0 
object_hash rjenkins pg_num 409 pgp_num 409 last_change 409714 lfor 0/336241 
flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.buckets.data' replicated size 3 min_size 1 crush_rule 0 
object_hash rjenkins pg_num 409 pgp_num 409 last_change 409715 lfor 0/336244 
flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.non-ec' replicated size 3 min_size 1 crush_rule 0 
object_hash rjenkins pg_num 409 pgp_num 409 last_change 409716 lfor 0/336247 
flags hashpspool stripe_width 0 application rgw

[cluster]root@dashboard:~# ceph osd lspools
0 rbd,3 data,4 metadata,5 .rgw.root,6 default.rgw.control,7 default.rgw.meta,8 
default.rgw.log,9 default.rgw.buckets.index,10 default.rgw.buckets.data,11 
default.rgw.buckets.non-ec,

[cluster]root@dashboard:~# rados df
POOL_NAME                     USED    OBJECTS  CLONES   COPIES  MISSING_ON_PRIMARY UNFOUND DEGRADED      RD_OPS      RD      WR_OPS      WR
.rgw.root                  1.09KiB          4       0       12                   0       0        0          12    8KiB           0      0B
data                       29.4GiB       7918       0    23754                   0       0        0     1414777 3.74TiB     3524833 4.54TiB
default.rgw.buckets.data        0B          0       0        0                   0       0        0           0      0B           0      0B
default.rgw.buckets.index       0B          0       0        0                   0       0        0           0      0B           0      0B
default.rgw.buckets.non-ec      0B          0       0        0                   0       0        0           0      0B           0      0B
default.rgw.control             0B          8       0       24                   0       0        0           0      0B           0      0B
default.rgw.log                 0B        207       0      621                   0       0        0    21644149 20.6GiB    14422618      0B
default.rgw.meta                0B          0       0        0                   0       0        0           0      0B           0      0B
metadata                   57.2MiB         95       0      285                   0       0        0         780  189MiB       86885  476MiB
rbd                        7.09TiB    3053998 1539909  9161994                   0       0        0 23432304830 1.07PiB 11174458128  232TiB

total_objects    3062230
total_used       35.0TiB
total_avail      27.8TiB
total_space      62.8TiB

[cluster]root@dashboard:~# for pool in `rados lspools`; do echo $pool; ceph osd pool get $pool size; echo; done
rbd
size: 3
data
size: 3
metadata
size: 3
.rgw.root
size: 3
default.rgw.control
size: 3
default.rgw.meta
size: 3
default.rgw.log
size: 3
default.rgw.buckets.index
size: 3
default.rgw.buckets.data
size: 3
default.rgw.buckets.non-ec
size: 3


Your rbd pool has clones (see the CLONES column in your rados df output). Look at your RBD snapshots.
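
Something like the following should show where the snapshot space is going (a 
rough sketch; <image> is a placeholder for your image names, and rbd du can be 
slow on images without the fast-diff feature):

    rbd du -p rbd             # per-image usage; snapshots are listed per image
    rbd snap ls rbd/<image>   # list the snapshots on a given image
    rbd info rbd/<image>      # image details, including the parent if it is a clone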



k
