IIRC these global values for total size and available are just summations
from the (programmatic equivalent) of running df on each machine locally,
but the used values are based on actual space used by each PG. That has
occasionally produced some odd results depending on how you've configured
your system and how that translates into df output. (E.g., you might be using
up space for journals or your OS that isn't counted as used for the
purposes of RADOS's df.)
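To illustrate, here's a rough sketch (not Ceph code, all figures invented) of how summing per-OSD df totals while counting "used" from PG accounting leaves an unaccounted gap:

```python
# Hedged sketch: SIZE and AVAIL come from summing each OSD's local
# df/statfs, while RAW USED is accumulated from per-PG accounting.
# Space eaten by journals, the OS, or filesystem overhead on the same
# devices therefore shows up as neither AVAIL nor RAW USED.

# Per-OSD view: (total_gb, avail_gb) as a local df might report them;
# 102 hypothetical ~2.4 TB disks, mostly empty.
osds = [(2432, 2221) for _ in range(102)]

size_gb = sum(total for total, _ in osds)
avail_gb = sum(avail for _, avail in osds)

# PG-based accounting only counts RADOS object data (incl. replication)
raw_used_gb = 8519

unaccounted_gb = size_gb - avail_gb - raw_used_gb
print(f"SIZE={size_gb}G AVAIL={avail_gb}G RAW USED={raw_used_gb}G")
print(f"unaccounted (journals/OS/overhead): {unaccounted_gb}G")
```

The point is only that SIZE - AVAIL - RAW USED is not forced to be zero when the three numbers come from two different accounting paths.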
-Greg
On Wed, Feb 25, 2015 at 6:57 AM Kamil Kuramshin kamil.kurams...@tatar.ru
wrote:
Can't figure out why this can happen:
I've got a HEALTH_OK cluster, ceph version 0.87; all nodes are Debian Wheezy
with a stable kernel 3.2.65-1+deb7u1. ceph df shows me this:
$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    242T     221T      8519G        3.43
POOLS:
    NAME                  ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd                   2      1948G     0.79      74902G        498856
    ec_backup-storage     4      0         0         146T          0
    cache                 5      0         0         184G          0
    block-devices         6      827G      0.33      74902G        211744
Explanation:
Total space = Used space + Available space:
242T ≠ 8.5T + 221T, but they should be equal, shouldn't they? Where have I
lost approximately 12.5 TB of space?
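The size of the gap can be checked directly from the rounded figures in the ceph df output above:

```python
# Arithmetic check on the ceph df figures (rounded values from the output)
size_tb = 242
avail_tb = 221
raw_used_tb = 8519 / 1024  # 8519 GB ≈ 8.3 TB

missing_tb = size_tb - avail_tb - raw_used_tb
print(f"unaccounted: {missing_tb:.1f} TB")  # roughly 12.7 TB
```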
$ ceph -s
    cluster 0745bec9-a7a7-4ee1-be5d-bb12db3cdd8f
     health HEALTH_OK
     monmap e1: 3 mons at {node04=10.0.0.14:6789/0,node05=10.0.0.15:6789/0,node06=10.0.0.16:6789/0},
            election epoch 48, quorum 0,1,2 node04,node05,node06
     osdmap e16866: 102 osds: 102 up, 102 in
      pgmap v570489: 10200 pgs, 4 pools, 2775 GB data, 693 kobjects
            8518 GB used, 221 TB / 242 TB avail
               10200 active+clean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com