On Tue, Mar 19, 2019 at 2:13 PM Erwin Bogaard <erwin.boga...@gmail.com>
wrote:

> Hi,
>
>
>
> For a number of applications we use, there is a lot of file duplication.
> This wastes precious storage space, which I would like to avoid.
>
> When using a local disk, I can use a hard link to let all duplicate files
> point to the same inode (use “rdfind”, for example).
>
>
>
> As there isn’t any deduplication in Ceph(FS) I’m wondering if I can use
> hard links on CephFS in the same way as I use for ‘regular’ file systems
> like ext4 and xfs.
>
> 1. Is it advisable to use hard links on CephFS? (It isn’t in the ‘best
> practices’: http://docs.ceph.com/docs/master/cephfs/app-best-practices/)
>

This should be okay now. Hard links have changed a few times, so Zheng can
correct me if I've gotten something wrong, but from a user/performance
perspective the differences from regular files are:
* if you take snapshots and have hard links, hard-linked files are special
and will be members of *every* snapshot in the system (which only matters
if you actually write to them during all those snapshots)
* from a performance perspective, opening a hard-linked file may behave as
if you were doing two opens instead of one, though this might have changed.
(In the past, the client would look up the file name you opened and then do
a second lookup on the authoritative location of the file.)
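
As an aside on the rdfind-style approach you mention: scripting the dedup
yourself works the same way on CephFS as on ext4/xfs, i.e. hash the files and
replace duplicates with hard links. Here's a rough Python sketch, purely as an
illustration of the idea and not how rdfind itself is implemented:

#!/usr/bin/env python3
"""Toy rdfind-style dedup: replace duplicate regular files with hard links."""
import hashlib
import os
import stat
import sys
from collections import defaultdict

def file_digest(path, chunk=1 << 20):
    # SHA-256 of the file contents, read in 1 MiB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                return h.hexdigest()
            h.update(data)

def dedup(root):
    # Group regular files by (size, digest); identical keys mean identical data.
    groups = defaultdict(list)
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if stat.S_ISREG(st.st_mode):
                groups[(st.st_size, file_digest(path))].append(path)

    for paths in groups.values():
        keep, dups = paths[0], paths[1:]
        for dup in dups:
            if os.path.samefile(keep, dup):
                continue  # already hard-linked together
            # Link under a temporary name, then rename over the duplicate so
            # the path never disappears if we die half-way through.
            tmp = dup + ".dedup-tmp"
            os.link(keep, tmp)
            os.replace(tmp, dup)
            print("linked", dup, "->", keep)

if __name__ == "__main__":
    dedup(sys.argv[1])

That gets you the same on-disk result as on a local filesystem: each set of
duplicates ends up as one inode with several names.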


> 2. Is there any performance (dis)advantage?
>

Generally not once the file is open.

> 3. When using hard links, is there an actual space savings, or is there
> some trickery happening?
>

If you create a hard link, there is a single copy of the file data in RADOS
that all the file names refer to. I think that's what you're asking?
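
You can see that from the client side: after the link, both names report the
same inode number and a link count of 2, so there is only one set of objects
in the data pool. A quick check in Python (the /mnt/cephfs path below is just
a placeholder for wherever your CephFS is mounted):

import os
import tempfile

# Assumes CephFS is mounted at /mnt/cephfs; adjust to your mount point.
root = tempfile.mkdtemp(dir="/mnt/cephfs")
original = os.path.join(root, "data.bin")
link = os.path.join(root, "data-link.bin")

with open(original, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MiB of payload

os.link(original, link)  # second name for the same inode

st_a, st_b = os.stat(original), os.stat(link)
assert st_a.st_ino == st_b.st_ino  # one inode...
assert st_a.st_nlink == 2          # ...with two names pointing at it
print("inode", st_a.st_ino, "nlink", st_a.st_nlink, "size", st_a.st_size)

Removing one of the names just drops the link count; the data is only
reclaimed once the last name goes away (plus any snapshots that still
reference it, per the caveat above).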


> 4. Are there any issues (other than the regular hard link ‘gotchas’) I
> need to keep in mind when combining hard links with CephFS?
>

None other than the above.
-Greg