On 05/08/2010 09:40, Mattias Pantzare wrote:

> > Actually, I believe there is another way which is less expensive. The
> > underlying problem is that once ZFS sets a record size for a given file,
> > it will stick to it forever. So if you create a new file and initially
> > write more than 128KB of data with the default recordsize of 128KB, ZFS
> > will use a filesystem block size of 128KB, even if the file is truncated
> > later on. However, if you were to create a file and initially write only,
> > let's say, 1KB, it would choose a 1KB record size and then stick to it
> > regardless of how much data is written afterwards. But then it is easier
> > for a sysadmin to just limit the recordsize to 8KB (or 1KB, or whatever),
> > I guess. Afsd could check the recordsize during startup and issue a
> > warning recommending that it be lowered to a smaller value.
>
> No, that is not how it works. A file will stick to the recordsize that
> was set for the filesystem when the file was created, regardless of the
> size of initial writes.
>
> A file smaller than the recordsize will have a smaller record, but that
> record will grow to the full recordsize when you write more data.
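To make the corrected behavior concrete, here is a toy Python sketch (not real ZFS code; the function name and constants are invented for illustration). It models what Mattias describes: the dataset's recordsize at file-creation time governs the file, a small file simply has one smaller record, and that record grows to the full recordsize once the file exceeds it.

```python
RECORDSIZE = 128 * 1024  # dataset recordsize when the file was created

def block_size_for(file_len, recordsize=RECORDSIZE):
    """Toy model of ZFS record sizing for a single-record-size file:
    a file no larger than the recordsize is stored in one block sized
    to the data; once the file grows past the recordsize, full
    recordsize blocks are used. Initial write size does NOT pin a
    smaller recordsize permanently."""
    if file_len <= recordsize:
        return file_len   # one block, sized to the data
    return recordsize     # full records once the file exceeds recordsize

# A 1KB file occupies one 1KB block...
assert block_size_for(1024) == 1024
# ...but after more writes the same file uses full 128KB records,
# contrary to the "sticks to 1KB forever" claim above.
assert block_size_for(512 * 1024) == 128 * 1024
```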


Yes, you are right. I've just double-checked.

--
Robert Milkowski
http://milek.blogspot.com


_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel
