Dear All,
Sorry if this has been covered before, but is it possible to configure
cephfs to report free space based on what is available in the main
storage tier?
My "df" shows 76% used, which gives a false sense of security when the
EC tier is over 92% full...
i.e. # df -h /ceph
Filesystem Size Used Avail Use% Mounted on
ceph-fuse 440T 333T 108T 76% /ceph
# ls -lhd /ceph
drwxr-xr-x 1 root root 254T Jun 27 17:03 /ceph
but "ceph df" shows that our EC pool is 92.46% full.
# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED   %RAW USED
    439T    107T    332T       75.57
POOLS:
    NAME      ID   USED   %USED   MAX AVAIL   OBJECTS
    rbd       0    0      0       450G        0
    ecpool    1    255T   92.46   21334G      105148577
    hotpool   2    818G   64.53   450G        236023
    metapool  3    274M   0.06    450G        2583306
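For what it's worth, here is a back-of-the-envelope sketch (my own
arithmetic, not ceph output) of why the 107T raw AVAIL that df reflects
overstates what the ec(4+1) tier can actually absorb:

```shell
# With ec(4+1), every 4 bytes of data consume 5 bytes raw, so the
# logical capacity is at most raw * k/(k+m). Figures taken from the
# "ceph df" output above.
k=4; m=1
raw_avail_t=107                              # GLOBAL AVAIL, raw
logical_avail_t=$(( raw_avail_t * k / (k + m) ))
echo "upper bound on logical free space: ${logical_avail_t}T"
# Even this ~85T is optimistic: the per-pool MAX AVAIL (21334G for
# ecpool) is far lower, since ceph projects it from the fullest OSD
# rather than the cluster-wide average.
```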
Other info:
We are using Luminous 12.1.0, with a small NVMe replicated pool, pouring
data into a large erasure coded pool. OS is SL 7.3.
Snapshots are enabled, and taken hourly, but very little data has been
deleted from the system.
Hardware:
Six nodes, each with 10 x 8TB OSDs, ec (4+1)
Two nodes, each with 2 x 800GB NVMe (2x for metadata + top tier)
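As a quick sanity check (my arithmetic, not from ceph), the listed
hardware does line up with the 439T SIZE that "ceph df" reports, once
the vendors' decimal TB are converted to the TiB ceph displays:

```shell
# 6 nodes x 10 x 8TB spinners, plus 2 nodes x 2 x 800GB NVMe,
# converted from decimal TB (10^12 B) to TiB (2^40 B).
spin_tb=$(( 6 * 10 * 8 ))                 # 480 TB of 8TB OSDs
nvme_tb="3.2"                             # 4 x 800GB NVMe
total_tib=$(awk -v s="$spin_tb" -v n="$nvme_tb" \
  'BEGIN { printf "%.0f", (s + n) * 1e12 / 2^40 }')
echo "raw capacity: ${total_tib} TiB"     # matches the reported 439T
```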
Any thoughts appreciated...
Jake
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com