Ceph users,
We have an old Hammer CephFS installed on old Debian wheezy (7.11) boxes (I know :)
root@node4:~# dpkg -l | grep -i ceph
ii  ceph                   0.94.9-1~bpo70+1       amd64  distributed storage and file system
ii  ceph-common            0.94.9-1~bpo70+1       amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-deploy            1.5.35                 all    Ceph-deploy is an easy to use configuration tool
ii  ceph-fs-common         0.94.9-1~bpo70+1       amd64  common utilities to mount and interact with a ceph file system
ii  ceph-fuse              0.94.9-1~bpo70+1       amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-mds               0.94.9-1~bpo70+1       amd64  metadata server for the ceph distributed file system
ii  libcephfs1             0.94.9-1~bpo70+1       amd64  Ceph distributed file system client library
ii  libcurl3-gnutls:amd64  7.29.0-1~bpo70+1.ceph  amd64  easy-to-use client-side URL transfer library (GnuTLS flavour)
ii  libleveldb1:amd64      1.12.0-1~bpo70+1.ceph  amd64  fast key-value storage library
ii  python-ceph            0.94.9-1~bpo70+1       amd64  Meta-package for python libraries for the Ceph libraries
ii  python-cephfs          0.94.9-1~bpo70+1       amd64  Python libraries for the Ceph libcephfs library
ii  python-rados           0.94.9-1~bpo70+1       amd64  Python libraries for the Ceph librados library
ii  python-rbd             0.94.9-1~bpo70+1       amd64  Python libraries for the Ceph librbd library
Currently I'm failing to upgrade it to 0.94.10 with apt-get; I get:
Get:13 http://ceph.com wheezy Release
Err http://ceph.com wheezy Release
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ceph.com wheezy Release: The following signatures were invalid: NODATA 1 NODATA 2
W: Failed to fetch http://ceph.com/debian-hammer/dists/wheezy/Release
root@node4:~# apt-key list
/etc/apt/trusted.gpg
--------------------
...
pub 4096R/17ED316D 2012-05-20
uid Ceph Release Key <[email protected]>
pub 4096R/F2AE6AB9 2013-04-29
uid Joe Healy <[email protected]>
sub 4096R/91EE136C 2013-04-29
pub 1024D/03C3951A 2011-02-08
uid Ceph automated package build (Ceph automated package build) <[email protected]>
sub 4096g/2E457B51 2011-02-08
pub 4096R/460F3994 2015-09-15
uid Ceph.com (release key) <[email protected]>
...
How do I fix this?
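For reference, the only candidate fix I've found so far is the key re-import procedure from the Ceph install docs. This is just a sketch of what I'd try, assuming the repo has moved entirely to the newer 460F3994 release key and the old 2012 key (17ED316D above) is the stale one:

```shell
# Sketch, untested here: drop the old 2012 release key and
# re-import the current one from download.ceph.com, then retry.
apt-key del 17ED316D
wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
apt-get update
```

But since `apt-key list` shows we already have 460F3994, I'm not sure the key set is actually the problem here.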
Also, we use a fuse-mounted CephFS to store RBD backup dumps every weekend, but it seems to leak space after each backup run:
root@node4:~# df -h /var/lib/ceph/backup/
Filesystem Size Used Avail Use% Mounted on
ceph-fuse 4.8T 3.1T 1.8T 64% /var/lib/ceph/backup
root@node4:~# rados df
pool name        KB          objects  clones  degraded  unfound  rd         rd KB        wr         wr KB
cephfs_data      639810067   156482   0       0         0        3739243    461786202    1802652    3850537128
cephfs_metadata  43222       34       0       0         0        348        165602       70546      168253
vmimages         978905302   239804   0       0         0        537951546  35216013843  987828425  63368630346
  total used     3244203364  396320
  total avail    1862738112
  total space    5106941476
root@node4:~# crontab -l
# track Ceph Pool/FS usage, assuming leaked space in the cephFS data pool
0 0 * * * (echo `date`; /usr/bin/rados df) >> /var/tmp/ceph_pool_usage.log
# rados df output for the last week (the backup dumps show up in the last two entries):
root@node4:~# grep cephfs_data /var/tmp/ceph_pool_usage.log
cephfs_data  606742032  148395  0  0  0  3067956  378470922  1786217  3800881920
cephfs_data  606742032  148395  0  0  0  3068249  378507338  1786217  3800881920
cephfs_data  606742032  148395  0  0  0  3068249  378507338  1786217  3800881920
cephfs_data  606742032  148395  0  0  0  3068249  378507338  1786217  3800881920
cephfs_data  606742032  148395  0  0  0  3068249  378507338  1786217  3800881920
cephfs_data  630184822  154126  0  0  0  3068257  378507346  1797061  3834535366
cephfs_data  639810067  156482  0  0  0  3068263  378507352  1802652  3850537128
I don't understand why the cephfs_data pool keeps growing and is so large (600+ GB) when the FS only holds about 100 GB:
root@node4:~# du -sh /var/lib/ceph/backup
107G /var/lib/ceph/backup
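For the record, here's the size of the gap, as a quick shell sketch using the figures above (assuming the `rados df` KB column is pre-replication data, which I believe it is on Hammer):

```shell
# Compare the cephfs_data pool size (KB, from rados df above)
# with what du reports for the mounted FS (GiB).
pool_kb=639810067   # cephfs_data "KB" column from rados df
fs_gib=107          # du -sh of /var/lib/ceph/backup
pool_gib=$(( pool_kb / 1024 / 1024 ))
echo "pool: ${pool_gib} GiB  fs: ${fs_gib} GiB  gap: $(( pool_gib - fs_gib )) GiB"
# → pool: 610 GiB  fs: 107 GiB  gap: 503 GiB
```

That's roughly 500 GiB of pool space with no corresponding files on the FS.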
Is this a bug in Hammer CephFS, or do we need to run some kind of fstrim-like operation to reclaim pool space?
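One check I can think of (a sketch only; it assumes CephFS's usual data-object naming of <inode-hex>.<block-number>, and the listing below is made up for illustration): count the distinct inode prefixes among the pool's objects and compare that against the number of files actually on the FS. Prefixes with no matching file would point at orphaned objects still pinning pool space.

```shell
# Sketch: CephFS stores file data as objects named <inode-hex>.<block>.
# A real run would pipe `rados -p cephfs_data ls` in here; this sample
# listing (hypothetical inode numbers) just demonstrates the counting.
sample_ls='10000000001.00000000
10000000001.00000001
10000000002.00000000
10000000003.00000000'
printf '%s\n' "$sample_ls" | cut -d. -f1 | sort -u | wc -l
# counts 3 distinct inode prefixes, i.e. three files' worth of objects
```

If the real count is far above the file count on the mounted FS, deleted files' objects are apparently not being purged.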
TIA
/Steffen
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com