Re: [ceph-users] Ceph capacity versus pool replicated size discrepancy?

2019-08-14 Thread Konstantin Shalygin

On 8/14/19 6:19 PM, Kenneth Van Alstyne wrote:
Got it!  I can calculate individual clone usage using “rbd du”, but does
anything exist to show total clone usage across the pool?  Otherwise it looks
like phantom space is just missing.


rbd du for each snapshot, I think...
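Something along these lines could sum it pool-wide; a minimal sketch, assuming
the pool is named "rbd", that jq is available, and that the field names below
match what "rbd du --format json" emits on Luminous (worth verifying locally;
note also that without the fast-diff image feature rbd du falls back to a slow
full object scan):

  # Sum used_size over snapshot rows only, i.e. total clone usage in the pool.
  # Entries without a "snapshot" key are the live image HEADs and are skipped.
  rbd du -p rbd --format json |
    jq '[.images[] | select(.snapshot != null) | .used_size] | add'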




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph capacity versus pool replicated size discrepancy?

2019-08-14 Thread Kenneth Van Alstyne
Got it!  I can calculate individual clone usage using “rbd du”, but does 
anything exist to show total clone usage across the pool?  Otherwise it looks 
like phantom space is just missing.
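For a single image, that per-snapshot breakdown is just "rbd du" with an image
spec; a quick sketch (the image name here is hypothetical, and the numbers are
only cheap to produce when the fast-diff feature is enabled):

  # One row per snapshot of the image, plus a row for the live HEAD.
  rbd du rbd/vm-disk-1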

Thanks,

--
Kenneth Van Alstyne
Systems Architect
M: 228.547.8045
15052 Conference Center Dr, Chantilly, VA 20151
perspecta

On Aug 13, 2019, at 11:05 PM, Konstantin Shalygin <k0...@k0ste.ru> wrote:



Hey guys, this is probably a really silly question, but I’m trying to reconcile
where all of my space has gone in one cluster that I am responsible for.

The cluster is made up of 36 2TB SSDs across 3 nodes (12 OSDs per node), all
using FileStore on XFS.  We are running Ceph Luminous 12.2.8 on this particular
cluster.  The only pool where data is heavily stored is the “rbd” pool, of
which 7.09TiB is consumed.  With a replication of “3”, I would expect the raw
used to be close to 21TiB, but it’s actually closer to 35TiB.  Some additional
details are below.  Any thoughts?
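For concreteness: 7.09TiB x 3 is roughly 21.3TiB of expected raw usage, against
the 35.1TiB reported below, leaving about 13.8TiB unaccounted for. A per-pool
cross-check along those lines, as a sketch only (it assumes jq is installed and
that the field names match Luminous's "ceph df --format json" output):

  # Multiply each pool's USED by its replication factor; the sum of the
  # results is what RAW USED "should" be before any overhead or clones.
  ceph df --format json | jq -r '.pools[] | "\(.name) \(.stats.bytes_used)"' |
  while read -r pool used; do
    size=$(ceph osd pool get "$pool" size | awk '{print $2}')
    echo "$pool: $((used * size)) bytes expected raw"
  done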

[cluster] root@dashboard:~# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED %RAW USED
    62.8TiB 27.8TiB  35.1TiB     55.81
POOLS:
    NAME                       ID USED    %USED MAX AVAIL OBJECTS
    rbd                        0  7.09TiB 53.76   6.10TiB 3056783
    data                       3  29.4GiB  0.47   6.10TiB    7918
    metadata                   4  57.2MiB     0   6.10TiB      95
    .rgw.root                  5  1.09KiB     0   6.10TiB       4
    default.rgw.control        6       0B     0   6.10TiB       8
    default.rgw.meta           7       0B     0   6.10TiB       0
    default.rgw.log            8       0B     0   6.10TiB     207
    default.rgw.buckets.index  9       0B     0   6.10TiB       0
    default.rgw.buckets.data   10      0B     0   6.10TiB       0
    default.rgw.buckets.non-ec 11      0B     0   6.10TiB       0

[cluster] root@dashboard:~# ceph --version
ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)

[cluster] root@dashboard:~# ceph osd dump | grep 'replicated size'
pool 0 'rbd' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 682 pgp_num 682 last_change 414873 flags hashpspool min_write_recency_for_promote 1 stripe_width 0 application rbd
pool 3 'data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 682 pgp_num 682 last_change 409614 flags hashpspool crash_replay_interval 45 min_write_recency_for_promote 1 stripe_width 0 application cephfs
pool 4 'metadata' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 682 pgp_num 682 last_change 409617 flags hashpspool min_write_recency_for_promote 1 stripe_width 0 application cephfs
pool 5 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409710 lfor 0/336229 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409711 lfor 0/336232 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409713 lfor 0/336235 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409712 lfor 0/336238 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.buckets.index' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409714 lfor 0/336241 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.buckets.data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409715 lfor 0/336244 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.non-ec' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409716 lfor 0/336247 flags hashpspool stripe_width 0 application rgw

[cluster] root@dashboard:~# ceph osd lspools
0 rbd,3 data,4 metadata,5 .rgw.root,6 default.rgw.control,7 default.rgw.meta,8 
default.rgw.log,9 default.rgw.buckets.index,10 default.rgw.buckets.data,11 
default.rgw.buckets.non-ec,

[cluster] root@dashboard:~# rados df
POOL_NAME                  USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS  RD      WR_OPS  WR

Re: [ceph-users] Ceph capacity versus pool replicated size discrepancy?

2019-08-13 Thread Konstantin Shalygin

Hey guys, this is probably a really silly question, but I’m trying to reconcile
where all of my space has gone in one cluster that I am responsible for.

The cluster is made up of 36 2TB SSDs across 3 nodes (12 OSDs per node), all
using FileStore on XFS.  We are running Ceph Luminous 12.2.8 on this particular
cluster.  The only pool where data is heavily stored is the “rbd” pool, of
which 7.09TiB is consumed.  With a replication of “3”, I would expect the raw
used to be close to 21TiB, but it’s actually closer to 35TiB.  Some additional
details are below.  Any thoughts?

[cluster] root@dashboard:~# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED %RAW USED
    62.8TiB 27.8TiB  35.1TiB     55.81
POOLS:
    NAME                       ID USED    %USED MAX AVAIL OBJECTS
    rbd                        0  7.09TiB 53.76   6.10TiB 3056783
    data                       3  29.4GiB  0.47   6.10TiB    7918
    metadata                   4  57.2MiB     0   6.10TiB      95
    .rgw.root                  5  1.09KiB     0   6.10TiB       4
    default.rgw.control        6       0B     0   6.10TiB       8
    default.rgw.meta           7       0B     0   6.10TiB       0
    default.rgw.log            8       0B     0   6.10TiB     207
    default.rgw.buckets.index  9       0B     0   6.10TiB       0
    default.rgw.buckets.data   10      0B     0   6.10TiB       0
    default.rgw.buckets.non-ec 11      0B     0   6.10TiB       0

[cluster] root@dashboard:~# ceph --version
ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)

[cluster] root@dashboard:~# ceph osd dump | grep 'replicated size'
pool 0 'rbd' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 682 pgp_num 682 last_change 414873 flags hashpspool min_write_recency_for_promote 1 stripe_width 0 application rbd
pool 3 'data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 682 pgp_num 682 last_change 409614 flags hashpspool crash_replay_interval 45 min_write_recency_for_promote 1 stripe_width 0 application cephfs
pool 4 'metadata' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 682 pgp_num 682 last_change 409617 flags hashpspool min_write_recency_for_promote 1 stripe_width 0 application cephfs
pool 5 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409710 lfor 0/336229 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409711 lfor 0/336232 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409713 lfor 0/336235 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409712 lfor 0/336238 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.buckets.index' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409714 lfor 0/336241 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.buckets.data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409715 lfor 0/336244 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.non-ec' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 409 pgp_num 409 last_change 409716 lfor 0/336247 flags hashpspool stripe_width 0 application rgw

[cluster] root@dashboard:~# ceph osd lspools
0 rbd,3 data,4 metadata,5 .rgw.root,6 default.rgw.control,7 default.rgw.meta,8 
default.rgw.log,9 default.rgw.buckets.index,10 default.rgw.buckets.data,11 
default.rgw.buckets.non-ec,

[cluster] root@dashboard:~# rados df
POOL_NAME                  USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED  RD_OPS      RD  WR_OPS      WR
.rgw.root                  1.09KiB       4      0     12                  0       0        0          128KiB       0      0B
data                       29.4GiB    7918      0  23754                  0       0        0 1414777 3.74TiB 3524833 4.54TiB
default.rgw.buckets.data        0B       0      0      0                  0       0
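One more place to look for the clones themselves: the CLONES column in
"rados df" above counts snapshot-clone objects per pool, which is the same
space rbd du accounts for image by image. A sketch for pulling just that
column (it assumes jq, and that Luminous's "rados df --format json" exposes a
num_object_clones field per pool; verify the field name locally):

  # Print each pool's name and its count of snapshot-clone objects.
  rados df --format json | jq -r '.pools[] | "\(.name) \(.num_object_clones)"'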