On Feb 20, 2007, at 15:05, Krister Johansen wrote:

>> what's the minimum allocation size for a file in zfs?  I get 1024B by
>> my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/
>> znode allocation) since we never pack file data in the inode/znode.
>> Is this a problem?  Only if you're trying to pack a lot of small
>> files into a limited amount of space, or if you're concerned about
>> accessing many small files quickly.

> This is configurable on a per-dataset basis.  Take a look in zfs(1m)
> for recordsize.
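
(for anyone following along, that's roughly the following - tank/fs is
a made-up dataset name:

    zfs set recordsize=8K tank/fs
    zfs get recordsize tank/fs

recordsize only caps the block size used for new files though ..)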

the minimum is still 512B .. (try creating a bunch of 10B files - they show
up as ZFS plain files each with a 512B data block in zdb)
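
something like this should show it - pool/path names are made up and
zdb's exact output varies by build:

    # write a handful of 10B files
    for i in 0 1 2 3; do printf '0123456789' > /tank/fs/f$i; done
    sync

    # dump the dataset's objects; each file appears as a
    # "ZFS plain file" with a single 512B data block
    zdb -dddd tank/fs

so every tiny file still costs its 512B data block plus a 512B
dnode_phys - the 1024B floor from the original question.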

>> VxFS has a 96B "immediate area" for file, symlink, or directory data;
>> NTFS can store small files in the MFT record; NetApp WAFL can also
>> store small files in the 4KB inode (16 block pointers = 128B?) .. if
>> you look at some of the more recent OSD papers and some of the Lustre/
>> BlueArc work you'll see that this topic comes into play for
>> prefetching file data and for locality when optimizing heavy access
>> to many small files.

> ZFS has something similar.  It's called a bonus buffer.

i see .. but currently we're only storing symbolic links there, since
with the 320B bonus buffer minus the 264B znode_phys struct we've only
got 56B left for data in the 512B dnode_phys struct .. i'm thinking we
might want to trade off some of the uint64_t meta attributes for
something smaller and maybe eat into the pad to get a bigger data
buffer .. of course that will also affect the reporting end of things,
but that should be easily fixable.
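
roughly, the arithmetic - the sizes are the ones quoted above, but the
defines are illustrative stand-ins, not the real dmu.h/znode.h names:

    #include <stdio.h>

    #define DNODE_PHYS_SIZE   512   /* on-disk dnode_phys */
    #define DN_BONUS_SIZE     320   /* bonus buffer inside the dnode */
    #define ZNODE_PHYS_SIZE   264   /* znode attrs living in the bonus */

    int
    main(void)
    {
            /* what's left of the bonus after the znode attrs: 56B */
            printf("inline room: %dB\n",
                DN_BONUS_SIZE - ZNODE_PHYS_SIZE);
            return (0);
    }

every 8B we claw back from a uint64_t attribute or the pad goes
straight into that inline buffer.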

just my 2p
---
.je
