Hi!

Do you by any chance have your OSDs placed at a local directory path rather than on a dedicated physical disk?

If I remember correctly from a similar setup I ran in the past, the "ceph df" command accounts for the entire disk, not just the OSD data directory. I am not sure this still applies, since it was on an early Firefly release, but it is easy to check.

I don't know if the above makes sense, but what I mean is: if your OSDs live at something like /var/lib/ceph/osd.X (or wherever) and that path does not correspond to a mounted device (e.g. /dev/sdc1), but instead sits on the disk that provides the / or /var partition, then run "df -h" on that partition and compare the amount of used space with the "ceph df" output. The two should be (more or less) the same.
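If it helps, here is a small sketch to check exactly that. The OSD path is an assumption (the default /var/lib/ceph/osd/ceph-*); adjust it to your layout:

```shell
# Report whether a directory is a dedicated mount point or shares the
# parent filesystem (in which case "df" of / or /var applies to it).
check_osd_mount() {
  dir="$1"
  if mountpoint -q "$dir"; then
    echo "$dir: dedicated mount"
  else
    echo "$dir: shares the parent filesystem"
  fi
}

# Assumed default OSD data path; change if your cluster uses another.
for d in /var/lib/ceph/osd/ceph-*; do
  [ -d "$d" ] && check_osd_mount "$d"
done
```

If an OSD directory prints "shares the parent filesystem", ceph may be reporting usage of the whole underlying partition.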

Best,

George


2015-03-27 18:27 GMT+01:00 Gregory Farnum <g...@gregs42.com>:
Ceph has per-PG and per-OSD metadata overhead. You currently have 26000 PGs, suitable for a cluster on the order of 260 OSDs. You have placed
almost 7GB of data into it (21GB replicated) and have about 7GB of
additional overhead.

You might try putting a suitable amount of data into the cluster before
worrying about the ratio of space used to data stored. :)
-Greg

Hello Greg,

I have put a suitable amount of data in now, and it looks like my ratio is
still 1 to 5.
The folder:
/var/lib/ceph/osd/ceph-N/current/meta/
did not grow, so it looks like that is not the problem.

Do you have any hint on how to troubleshoot this issue?


ansible@zrh-srv-m-cph02:~$ ceph osd pool get .rgw.buckets size
size: 3
ansible@zrh-srv-m-cph02:~$ ceph osd pool get .rgw.buckets min_size
min_size: 2


ansible@zrh-srv-m-cph02:~$ ceph -w
    cluster 4179fcec-b336-41a1-a7fd-4a19a75420ea
     health HEALTH_WARN pool .rgw.buckets has too few pgs
     monmap e4: 4 mons at

{rml-srv-m-cph01=10.120.50.20:6789/0,rml-srv-m-cph02=10.120.50.21:6789/0,rml-srv-m-stk03=10.120.50.32:6789/0,zrh-srv-m-cph02=10.120.50.2:6789/0},
election epoch 668, quorum 0,1,2,3
zrh-srv-m-cph02,rml-srv-m-cph01,rml-srv-m-cph02,rml-srv-m-stk03
     osdmap e2170: 54 osds: 54 up, 54 in
      pgmap v619041: 28684 pgs, 15 pools, 109 GB data, 7358 kobjects
            518 GB used, 49756 GB / 50275 GB avail
               28684 active+clean

ansible@zrh-srv-m-cph02:~$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    50275G     49756G         518G          1.03
POOLS:
    NAME                   ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd                    0      155       0         16461G        2
    gianfranco             7      156       0         16461G        2
    images                 8      257M      0         16461G        38
    .rgw.root              9      840       0         16461G        3
    .rgw.control           10     0         0         16461G        8
    .rgw                   11     21334     0         16461G        108
    .rgw.gc                12     0         0         16461G        32
    .users.uid             13     1575      0         16461G        6
    .users                 14     72        0         16461G        6
    .rgw.buckets.index     15     0         0         16461G        30
    .users.swift           17     36        0         16461G        3
    .rgw.buckets           18     108G      0.22      16461G        7534745
    .intent-log            19     0         0         16461G        0
    .rgw.buckets.extra     20     0         0         16461G        0
    volumes                21     512M      0         16461G        161
ansible@zrh-srv-m-cph02:~$
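For reference, a rough back-of-the-envelope check of the overhead, assuming essentially all of the data sits in 3x-replicated pools (as .rgw.buckets does, per the "size: 3" above):

```python
# Numbers taken from the "ceph df" GLOBAL section above.
data_gb = 109        # data stored (pgmap: "109 GB data")
raw_used_gb = 518    # RAW USED
replicas = 3         # pool size of .rgw.buckets

expected_raw_gb = data_gb * replicas        # 327 GB if replication alone
overhead_gb = raw_used_gb - expected_raw_gb  # 191 GB unaccounted for

print(expected_raw_gb)  # 327
print(overhead_gb)      # 191
```

So replication alone would explain about 327 GB of the 518 GB used; the remaining ~191 GB is what needs explaining (PG/OSD metadata, filesystem overhead, or co-located data on the OSD partitions).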
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
