On 03/08/2010 23:28, Andrew Deason wrote:
> On Tue, 03 Aug 2010 22:36:31 +0100
> Robert Milkowski<[email protected]> wrote:
>> Hi,
> Just by the way, these kinds of questions are more suited to
> openafs-info than openafs-devel, I think.
I think you're right. Sorry about that.
>> Can AFS cache be placed on any local filesystem like ZFS, VxFS or
>> UFS+logging?
> ZFS and UFS have been used (with and without logging). I'm not sure if
> anyone has tried to use VxFS, but in theory I think it should work as
> our cache I/O mechanisms are supposed to be FS-agnostic.
> Note that there is a known (unfixable) issue with ZFS caches that can
> cause them to take up far more disk space than you have configured.
> <http://www.openafs.org/pipermail/openafs-devel/2009-September/017033.html>
> has some information. Decreasing the ZFS recordsize makes it not as bad,
> though the issue is still always there.
This shouldn't be a big issue. You can always set the recordsize to
something smaller.
Well, one could even enable compression on zfs (depending on what data
is being cached).
From the afs point of view, as someone else suggested, instead of
truncating a file we would create a new one and unlink the old one.
Actually I believe there is another way which is less expensive. The
underlying problem is that once zfs sets a recordsize for a given file
it will stick to it forever. So if you create a new file and initially
write more than 128KB of data with a default recordsize of 128KB, zfs
will use a fs blocksize of 128KB, even if the file is truncated later on.
However, if you were to create a file and initially write only, let's
say, 1KB, it would choose a 1KB recordsize and then stick to it
regardless of how much data is written afterwards. But then it is easier
for a sysadmin to just limit the recordsize to 8KB (or 1KB, or whatever),
I guess. Afsd could check the recordsize during startup and issue a
warning recommending that it be lowered to a smaller value.
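For the archives, a sketch of the tuning described above (the pool and
dataset names here are hypothetical; adjust to your system). Note that
recordsize only affects files created after it is set, so it should be
applied before the cache is populated:

```shell
# Cap the recordsize on a dedicated AFS cache dataset
# (hypothetical name tank/afscache) before afsd first fills it:
zfs set recordsize=8K tank/afscache

# Optionally enable compression too, depending on the cached data:
zfs set compression=on tank/afscache

# Verify the properties took effect:
zfs get recordsize,compression tank/afscache
```

An existing cache would need to be emptied (or its files rewritten) for
the smaller recordsize to apply to it.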
> You can still have a UFS cache partition easily on an otherwise-ZFS-y
> system, though. Just make a zvol block device and create a UFS
> filesystem on that.
I know about this workaround. It works, but it is ugly... :)
And it doesn't necessarily solve the above issue unless you limit the
recordsize, but then you don't need ufs...
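For completeness, the zvol workaround on Solaris would look roughly like
this (sizes, pool name, and mount point are all hypothetical):

```shell
# Create a 2 GB zvol on the pool (hypothetical name tank):
zfs create -V 2g tank/afscache-vol

# Put a UFS filesystem on the zvol's raw device:
newfs /dev/zvol/rdsk/tank/afscache-vol

# Mount it with UFS logging at the usual cache location:
mount -F ufs -o logging /dev/zvol/dsk/tank/afscache-vol /usr/vice/cache
```

The zvol still allocates from the ZFS pool underneath, which is part of
why this feels like an ugly layering rather than a real fix.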
>> When I start afsd with a cache configured on UFS with logging it prints
>> a warning (see below) but otherwise seems to be working fine. It also
>> seems to be working fine over ZFS.
>> It looks like the code below is just a left-over and needs to be
>> removed... ???
> As far as I know, this warning is still valid. The original reason for
> that came from this, I think:
> <http://www.openafs.org/pipermail/openafs-devel/2002-November/008646.html>
> and I don't recall anyone doing anything to try to fix or work around
> it.
I need to investigate a little more to understand the issue better.
Correct me if I'm wrong, but a dedicated partition or filesystem is
fine, right?
--
Robert Milkowski
http://milek.blogspot.com
_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel