I have a directory I’ve been trying to remove from cephfs (via cephfs-hadoop).
The directory is a few hundred gigabytes in size and contains a few million
files, though not all in a single subdirectory. I started the delete yesterday
at around 6:30 EST, and it’s still progressing. I can see from (ceph osd df)
that the overall data usage on my cluster is decreasing, but at the rate it’s
going it will take a month before the entire subdirectory is gone. Is a
recursive delete of a directory known to be a slow operation in CephFS, or have
I hit upon some bad configuration? What steps can I take to better debug this
scenario?
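For context, one debugging avenue I’ve been considering: CephFS deletes files asynchronously, with the MDS moving unlinked inodes into a “stray” directory and purging them in the background, so checking the stray count and the purge throttle settings on the active MDS might show whether purging is the bottleneck. A sketch of what I mean (assuming the active MDS is mds.0 and that the admin socket is reachable locally; substitute your own daemon name from `ceph fs status`):

```shell
# Count of unlinked-but-not-yet-purged inodes; a large, slowly shrinking
# number would suggest the purge throttle is the limiting factor.
ceph daemon mds.0 perf dump | grep -i stray

# The throttle settings that gate how fast the MDS purges deleted files.
ceph daemon mds.0 config get mds_max_purge_files
ceph daemon mds.0 config get mds_max_purge_ops
ceph daemon mds.0 config get mds_max_purge_ops_per_pg
```

If those throttles turn out to be the limit, is raising them the recommended fix, or is there a better approach?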

ceph-users mailing list
