Hi,

On 1/18/19 3:11 PM, [email protected] wrote:
Hi.

We intend to use CephFS for some of our shares, which we'd like to
spool to tape as part of our normal backup schedule. CephFS works nicely
for large files, but for "small" ones (< 0.1 MB) there seems to be an
overhead of 20-40 ms per file. I tested like this:

root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null

real    0m0.034s
user    0m0.001s
sys     0m0.000s

And from the local page cache right after:
root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null

real    0m0.002s
user    0m0.002s
sys     0m0.000s

That is an overhead of roughly 30 ms for a single file.
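
A more stable number could be had by timing many cold reads in one go (a sketch; the smallfiles/ directory is hypothetical, and dropping caches needs root):

root@abe:/nfs/home/jk# sync; echo 3 > /proc/sys/vm/drop_caches
root@abe:/nfs/home/jk# time sh -c 'for f in /ceph/cluster/rsyncbackups/smallfiles/*; do cat "$f" > /dev/null; done'

Dividing 'real' by the file count gives the average per-file overhead.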

This is about 3x higher than on our local filesystems (XFS) backed by
the same spindles.

CephFS metadata is on SSDs; everything else is on big, slow HDDs (in
both cases).

Is this what everyone else sees?


Each file access on the client side requires acquiring a corresponding locking entity (a 'file capability') from the MDS, which adds an extra network round trip. In the worst case the MDS also has to ask another client that still holds the cap (e.g. because the file is still in its page cache) to release it, adding yet another round trip.
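
If you want to watch this from the kernel client, its debugfs interface exposes the cap and request state (a sketch; exact paths may vary by kernel version):

cat /sys/kernel/debug/ceph/*/caps   # capabilities this client currently holds
cat /sys/kernel/debug/ceph/*/mdsc   # MDS requests currently in flight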


CephFS is not NFS, and has a strong consistency model. This comes at a price.
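
Since the cost is per-file round-trip latency rather than bandwidth, a backup job can hide much of it by reading files in parallel (a sketch, untested; the path is from your example, and -P is a guess to tune):

find /ceph/cluster/rsyncbackups -type f -print0 | xargs -0 -n 16 -P 8 cat > /dev/null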


Regards,

Burkhard

