Hi all-
I am testing Ceph 0.78 running on Ubuntu 13.04 with the 3.13 kernel. I had two
replicated pools and five erasure-coded pools. The cluster was getting full, so I
deleted all the EC pools. However, Ceph is not freeing the capacity. Note
below that there is only 1636G in the two remaining pools, yet the global stats
still report 13652G used (90.5%, when it should be down to around 10.8%).
Any ideas why Ceph is not freeing the capacity and how to fix it? Thanks!
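For reference, here is the rough math behind those percentages (assuming the
pool USED figures map one-to-one to raw space; pool 21 is size 2, so with
replication the expected raw figure would be a bit higher, around 16%, but
nowhere near 90%):

$ echo "scale=1; 13652*100/15083" | bc        # current %RAW USED
90.5
$ echo "scale=1; 1636*100/15083" | bc         # 839G + 797G in the two remaining pools
10.8
$ echo "scale=1; (839+797*2)*100/15083" | bc  # counting 2x replication on pool 21
16.1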
ceph@joceph-admin01:~$ ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    15083G     1430G     13652G       90.51
POOLS:
    NAME               ID     USED     %USED     OBJECTS
    data               0      0        0         0
    metadata           1      0        0         0
    rbd                2      0        0         0
    mycontainers_1     20     839G     5.56      224105
    mycontainers_2     21     797G     5.29      213515
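In case it helps, these are the cross-checks I was planning to run (assuming the
OSD data directories are mounted under the default /var/lib/ceph/osd/ceph-<id>
paths; the df would be run on each OSD host, not the admin node):

$ rados df                          # per-pool and total usage as reported by the OSDs
$ df -h /var/lib/ceph/osd/ceph-*    # raw filesystem usage on each OSD host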
ceph@joceph-admin01:~$ ceph status
cluster b12ebb71-e4a6-41fa-8246-71cbfa09fb6e
health HEALTH_WARN 18 near full osd(s)
monmap e1: 2 mons at {mohonpeak01=10.0.0.101:6789/0,mohonpeak02=10.0.0.102:6789/0}, election epoch 10, quorum 0,1 mohonpeak01,mohonpeak02
osdmap e214: 18 osds: 18 up, 18 in
pgmap v198720: 2784 pgs, 10 pools, 1637 GB data, 427 kobjects
13652 GB used, 1430 GB / 15083 GB avail
2784 active+clean
ceph@joceph-admin01:~$ ceph osd dump
epoch 214
fsid b12ebb71-e4a6-41fa-8246-71cbfa09fb6e
created 2014-03-24 12:06:28.290970
modified 2014-04-03 10:18:07.714158
flags
pool 0 'data' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 84 owner 0 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 86 owner 0 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 owner 0 flags hashpspool stripe_width 0
pool 20 'mycontainers_1' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1600 pgp_num 1600 last_change 167 owner 0 flags hashpspool stripe_width 0
pool 21 'mycontainers_2' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 171 owner 0 flags hashpspool stripe_width 0
max_osd 18
osd.0 up in weight 1 up_from 195 up_thru 201 down_at 194 last_clean_interval
[116,185) 10.0.0.101:6815/5203 10.0.1.101:6810/5203 10.0.1.101:6811/5203
10.0.0.101:6816/5203 exists,up 56431fdc-88a2-4c55-a4a5-64596f080962
osd.1 up in weight 1 up_from 196 up_thru 203 down_at 195 last_clean_interval
[117,185) 10.0.0.101:6800/5125 10.0.1.101:6800/5125 10.0.1.101:6801/5125
10.0.0.101:6801/5125 exists,up 266f2705-6286-4a1d-82ba-cb5e1fb56e46
osd.2 up in weight 1 up_from 193 up_thru 201 down_at 192 last_clean_interval
[117,185) 10.0.0.101:6821/5245 10.0.1.101:6814/5245 10.0.1.101:6815/5245
10.0.0.101:6822/5245 exists,up e363143f-805e-4e4c-8732-fd9f07d7cf31
osd.3 up in weight 1 up_from 195 up_thru 201 down_at 194 last_clean_interval
[116,185) 10.0.0.101:6803/5138 10.0.1.101:6802/5138 10.0.1.101:6803/5138
10.0.0.101:6804/5138 exists,up 444df45b-de9f-42a4-92d6-82b479be0a01
osd.4 up in weight 1 up_from 200 up_thru 201 down_at 199 last_clean_interval
[116,185) 10.0.0.101:6806/5143 10.0.1.101:6804/5143 10.0.1.101:6805/5143
10.0.0.101:6807/5143 exists,up 47e19cd1-0ca2-4205-ba4d-7e726683097a
osd.5 up in weight 1 up_from 197 up_thru 203 down_at 196 last_clean_interval
[117,185) 10.0.0.101:6824/5370 10.0.1.101:6816/5370 10.0.1.101:6817/5370
10.0.0.101:6825/5370 exists,up a3aed3a0-da30-4483-ae40-eebf8d4b0fc9
osd.6 up in weight 1 up_from 194 up_thru 201 down_at 193 last_clean_interval
[116,185) 10.0.0.101:6818/5226 10.0.1.101:6812/5226 10.0.1.101:6813/5226
10.0.0.101:6819/5226 exists,up 5ba235c2-c304-4a82-80aa-9a9934367347
osd.7 up in weight 1 up_from 196 up_thru 202 down_at 195 last_clean_interval
[117,185) 10.0.0.101:6809/5160 10.0.1.101:6806/5160 10.0.1.101:6807/5160
10.0.0.101:6810/5160 exists,up db75e990-7a6e-4fff-9d13-ab50a7139821
osd.8 up in weight 1 up_from 200 up_thru 201 down_at 199 last_clean_interval
[116,185) 10.0.0.101:6812/5184 10.0.1.101:6808/5184 10.0.1.101:6809/5184
10.0.0.101:6813/5184 exists,up b134b786-6ee1-4b5b-bead-2885b3bc75c4
osd.9 up in weight 1 up_from 201 up_thru 201 down_at 200 last_clean_interval
[116,185) 10.0.0.102:6800/2750 10.0.1.102:6800/2750 10.0.1.102:6801/2750
10.0.0.102:6801/2750 exists,up d1a0d7f9-3c74-484f-a0f3-9adf156bf627
osd.10 up in weight 1 up_from 194 up_thru 201 down_at 193
last_clean_interval [117,185) 10.0.0.102:6821/3000 10.0.1.102:6814/3000
10.0.1.102:6815/3000 10.0.0.102:6822/3000 exists,up
8f478fa7-c1ae-416d-838f-af11722b8223
osd.11 up in weight 1 up_from 190 up_thru 201 down_at 189
last_clean_interval [116,185) 10.0.0.102:6812/2839 10.0.1.102:6808/2839
10.0.1.102:6809/2839 10.0.0.102:6813/2839 exists,up
e298c892-28d9-4abc-9b23-21dbf513b893
osd.12 up in weight 1 up_from 190 up_thru 201 down_at 189
last_clean_interval [116,185) 10.0.0.102:6806/2777 10.0.1.102:6804/2777
10.0.1.102:6805/2777 10.0.0.102:6807/2777 exists,up
6a1fd468-a70a-4885-b824-6215348d813e
osd.13 up in weight 1 up_from 189 up_thru 201 down_at 188
last_clean_interval [116,185) 10.0.0.102:6815/2889 10.0.1.102:6810/2889
10.0.1.102:6811/2889 10.0.0.102:6816/2889 exists,up
fcb52765-9333-459f-8143-90f0513f67d0
osd.14 up in weight 1 up_from 198 up_thru 203 down_at 197
last_clean_interval [116,185) 10.0.0.102:6824/3306 10.0.1.102:6816/3306
10.0.1.102:6817/3306 10.0.0.102:6825/3306 exists,up
e098fb1c-c127-4017-9e63-46cd3f11fcff
osd.15 up in weight 1 up_from 190 up_thru 201 down_at 189
last_clean_interval [116,185) 10.0.0.102:6809/2819 10.0.1.102:6806/2819
10.0.1.102:6807/2819 10.0.0.102:6810/2819 exists,up
59f916d3-00ac-4493-aa72-c7d4bce39d75
osd.16 up in weight 1 up_from 187 up_thru 201 down_at 186
last_clean_interval [116,185) 10.0.0.102:6803/2753 10.0.1.102:6802/2753
10.0.1.102:6803/2753 10.0.0.102:6804/2753 exists,up
0e6d9dec-baeb-46e6-9c1d-8fed1c50bc58
osd.17 up in weight 1 up_from 191 up_thru 202 down_at 190
last_clean_interval [116,185) 10.0.0.102:6818/2950 10.0.1.102:6812/2950
10.0.1.102:6813/2950 10.0.0.102:6819/2950 exists,up
520b86b0-93de-4da8-a25f-a3b8fc15321a
ceph@joceph-admin01:~$
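If it is useful, I can also look on the OSD hosts for leftover PG directories
from the deleted EC pools (assuming the default FileStore layout under
/var/lib/ceph/osd/; <POOLID> below is just a placeholder for one of the deleted
pool IDs):

# run on each OSD host; lists any PG dirs still on disk for the given pool
$ ls -d /var/lib/ceph/osd/ceph-*/current/<POOLID>.*_head 2>/dev/null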