> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
> 
> # zfs create mytest/peter
> 
> where mytest is a zpool filesystem.
> 
> When does it make sense to create such a filesystem versus just
> creating a directory?

This is a thorny bush, with flowers and fruit hidden in it.  ;-)

For most people, in most cases, it doesn't make sense to make subdirectories
into separate zfs filesystems, simply because you have to know what you're
doing in order to avoid unexpected gotchas.  But it's common in some
situations, such as when you're hosting home directories for a lot of
different people.
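For example, a per-user layout might look something like this (the dataset
names and quota are just made up for illustration):

# zfs create mytest/home
# zfs create mytest/home/alice
# zfs create mytest/home/bob
# zfs set quota=20G mytest/home/alice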

If you have nested zfs filesystems, you can use "df" instead of "du" to
check how much space that filesystem uses.  You get a nearly instant result,
instead of walking the whole directory tree.
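Roughly like this (paths are only illustrative):

# du -sh /mytest/peter      (walks every file, can take a long time)
# df -h /mytest/peter       (reads the filesystem's own accounting, near instant)

Or ask zfs directly:

# zfs list -o name,used,avail mytest/peter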

With nesting, your snapshots are managed separately.  This could be a pro or
a con for you.
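For instance, you can snapshot each filesystem on its own schedule, or hit
the whole tree recursively (snapshot names invented for the example):

# zfs snapshot mytest/peter@daily-2011-07-01      (just this one filesystem)
# zfs snapshot -r mytest@daily-2011-07-01         (the parent and every child)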

With nesting, you cannot access a subdir's snapshots by accessing the
snapshots of the parent.  The snapshots of a given file exist *only* in the
root of that filesystem, under the ".zfs" hidden directory.
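So, continuing the made-up example, a file in mytest/peter shows up under
that filesystem's own .zfs directory, not under the parent's:

/mytest/peter/.zfs/snapshot/daily-2011-07-01/somefile    (here)
/mytest/.zfs/snapshot/daily-2011-07-01/peter/somefile    (not here)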

When you "zfs send" you cannot exclude subdirectories.  But a nested
filesystem isn't a subdirectory, and therefore isn't included in the
parent's zfs send.
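In other words (assuming your zfs version supports -R, and using the
invented snapshot names from above):

# zfs send mytest@daily-2011-07-01        (the parent only, no nested children)
# zfs send -R mytest@daily-2011-07-01     (replication stream, children included)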

If you export a filesystem via "share", I believe its nested child
filesystems are not shared.  But if you export using the "sharenfs" property
and allow property inheritance, then the children are exported too.  A
Solaris NFS client automatically knows about nested shares and can access
the nested filesystems over NFS, but a Linux client doesn't.  So if you want
access to the nested NFS mounts from a Linux client, you have to configure
them yourself.  (So I believe.)
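Something along these lines, as far as I know (the server and mount paths
here are just a sketch, adjust to taste):

# zfs set sharenfs=on mytest        (children inherit the property)
# zfs get -r sharenfs mytest        (check what actually got inherited)

And on a Linux client you end up mounting the nested ones by hand:

linux# mount -t nfs server:/mytest /mnt/mytest
linux# mount -t nfs server:/mytest/peter /mnt/mytest/peter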

What else ...  Basically, all the properties and operations that you can
apply to zfs filesystems, you can suddenly apply with finer granularity, by
using nested filesystems.  That's a pro.  But generally speaking, you
suddenly *have to* apply them with finer granularity when using nested
filesystems.  That's a con.
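For example, per-filesystem properties like these can only be set at
filesystem granularity, so nesting lets you (and makes you) manage them per
subtree (values invented for illustration):

# zfs set compression=on mytest/peter
# zfs set quota=10G mytest/peter
# zfs set reservation=1G mytest/peter
# zfs get compression,quota,reservation mytest/peter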

If you have a large number of nested filesystems, your boot time can be very
long.  I think the filesystem mount time scales linearly with the number of
filesystems ... I'm guessing off the top of my head ... something like 0.5
seconds per filesystem.
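If you're wondering whether this applies to you, you can count what you've
got with something like:

# zfs list -H -o name -t filesystem | wc -l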

If you have one big filesystem, "mv /tank/somedir/somefile
/tank/someotherdir/somefile" is instant, because it's just a rename within
the same filesystem.  If you have separate filesystems, "mv
/tank/somefilesystem/somefile /tank/anotherfs/somefile" crosses a
mountpoint, so it will need to read and write all the bytes of the file.
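If you want to see the difference for yourself, prefix the commands with
"time" (this is just an illustration, your numbers will vary with file
size):

# time mv /tank/somedir/somefile /tank/someotherdir/somefile
# time mv /tank/somefilesystem/somefile /tank/anotherfs/somefile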
