Hello,

I would like to use a CephFS snapshot as a read/write volume without having to clone it first, as the clone operation is - if I'm not mistaken - still inefficient as of today. This is for a data restore use case where the Moodle application needs a writable data directory to start.
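For context, this is the clone step I'd like to skip (a sketch using the fs subvolume interface; 'cephfs' as the volume name and 'subvolume1-restore' as the clone target are placeholders, not from my setup):

$ ceph fs subvolume snapshot clone cephfs subvolume1 testsnap subvolume1-restore --group_name group1 --target_group_name group1
$ ceph fs clone status cephfs subvolume1-restore --group_name group1        <---- clone runs asynchronously; wait for "complete"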

The idea that came to mind was to use OverlayFS with CephFS set up as a read-only lower layer and a writable local directory as the upper layer. With this setup, any modification to the read-only .snap/testsnap directory would go to the upper directory, making the snapshot directory effectively writable for the Moodle application. While this works fine when a local read-only filesystem is the lower layer, it fails when CephFS is the lower layer: any modification to the .snap/testsnap tree under /cephfs-snap fails with "Operation not supported".
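For comparison, here is the local-only test that works as expected (a quick sketch with a loop-mounted ext4 image; paths are illustrative):

$ truncate -s 100M /tmp/test.img && mkfs.ext4 /tmp/test.img
$ mkdir /lower /upper /work /merged
$ mount -o loop /tmp/test.img /lower && mkdir /lower/somedir && mount -o remount,ro /lower
$ mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
$ touch /merged/somedir/foo            <---- copy-up into /upper succeeds with a local lower layer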

$ mkdir /cephfs /upperdir /workdir /cephfs-snap

$ mount -t ceph 100.74.191.129:/volumes/group1/subvolume1/ /cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

$ mount -t overlay overlay -o redirect_dir=on,lowerdir=/cephfs/.snap/testsnap,upperdir=/upperdir,workdir=/workdir /cephfs-snap

$ ls /cephfs-snap
usr

$ touch /cephfs-snap/foo.txt            <---- writing outside the lowerdir succeeds

$ ls /cephfs-snap
foo.txt  usr

$ ls /cephfs-snap/usr/etc            <---- the snapshot's usr/etc directory exists but is empty

$ touch /cephfs-snap/usr/etc/foo            <---- writing inside the lowerdir fails
touch: cannot touch '/cephfs-snap/usr/etc/foo': Operation not supported

I tried mounting the whole CephFS tree read-only (-o ro) and disabling ACLs (-o noacl) as suggested here [1], but to no avail. Mounting with ceph-fuse didn't help either. There was a recent discussion about this between Greg and Robert [2], but with no real solution.
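Concretely, the variants I tried were along these lines (the ceph-fuse line uses -r to mount the subvolume path as root; same failure in each case):

$ mount -t ceph 100.74.191.129:/volumes/group1/subvolume1/ /cephfs -o ro,name=admin,secretfile=/etc/ceph/admin.secret
$ mount -t ceph 100.74.191.129:/volumes/group1/subvolume1/ /cephfs -o noacl,name=admin,secretfile=/etc/ceph/admin.secret
$ ceph-fuse --id admin -r /volumes/group1/subvolume1 /cephfs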

Has anyone managed to get this working?

Regards,

Frédéric.

[1] https://blog.fai-project.org/posts/overlayfs/
[2] https://tracker.ceph.com/issues/44821