Hi,

On 10/08/2015 09:14 PM, John Spray wrote:
> On Thu, Oct 8, 2015 at 7:23 PM, Gregory Farnum <gfar...@redhat.com> wrote:
>> On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke
>> <burkhard.li...@computational.bio.uni-giessen.de> wrote:
>>> Hammer 0.94.3 does not support a 'dump cache' mds command.
>>> 'dump_ops_in_flight' does not list any pending operations. Is there any
>>> other way to access the cache?
>> "dumpcache", it looks like. You can get all the supported commands
>> with "help" and look for things of interest or alternative phrasings.
>> :)
> To head off any confusion for someone trying to just replace dump
> cache with dumpcache: "dump cache" is the new (post hammer,
> apparently) admin socket command, dumpcache is the old tell command.
> So it's "ceph mds tell <mds.whoever> dumpcache <filename>".
Thanks, that did the trick. I was able to locate the host holding the open file handles and remove the objects from the EC pool.
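
For anyone hitting the same problem, this is roughly the sequence I used (the dump ends up on the MDS host; MDS and pool names are from my setup, the inode/object names are placeholders):

# ceph mds tell cb-dell-pe620r dumpcache /cache.file
# grep <inode-in-hex> /cache.file

The matching cache entries list the client sessions/caps still holding the inode, which points to the host with the open file handle. Once the handle was released there, the leftover objects could be removed, e.g.:

# rados -p cephfs_ec_data rm <inode-in-hex>.<object-index>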

Well, all except one:

# ceph df
  ...
    ec_ssd_cache             18      4216k         0 2500G          129
    cephfs_ec_data           19      4096k         0 31574G            1

# rados -p ec_ssd_cache ls
10000ef540f.00000386
# rados -p cephfs_ec_data ls
10000ef540f.00000386
# ceph mds tell cb-dell-pe620r dumpcache cache.file
# grep 10000ef540f /cache.file
#
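
For reference, the object name encodes the inode number in hex plus the object index within the file, so it can be mapped back to a path on a mounted file system, as long as the file still exists in the tree (the mount point below is just an example):

# printf '%d\n' 0x10000ef540f
1099527312399
# find /mnt/cephfs -inum 1099527312399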

The object does not show up in the dumped cache file, but it keeps being promoted back to the cache tier after MDS restarts. I've restarted most of the CephFS clients by unmounting CephFS and restarting ceph-fuse, but the object remains active.
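
In case it rings a bell for anyone, a few checks that might narrow it down (just a sketch, not verified to help here):

# rados -p ec_ssd_cache listwatchers 10000ef540f.00000386
# rados -p ec_ssd_cache stat 10000ef540f.00000386
# rados -p ec_ssd_cache cache-flush 10000ef540f.00000386
# rados -p ec_ssd_cache cache-evict 10000ef540f.00000386

listwatchers should rule out a stale watcher on the object, and cache-flush/cache-evict should at least complain if something is still keeping the object busy in the cache tier.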

Regards,
Burkhard