Rainer Toebbicke wrote:
A little bit off-topic:
Admittedly I frowned for years at voices promoting the use of
the namei fileserver in favour of the inode one, until...
On the inode fileserver the inc() and dec() operations always translate
into real I/Os. You cannot do many such operations per second; at
least I have not found a way to.
This means that cloning (and therefore moving, backing up, ...)
volumes is intrinsically slow. With the namei fileserver the speedup
is tremendous once you group the intervening fsync() calls. This
matters when operating a service where people think in volumes of
several hundred thousand files. I am still frowning at the otherwise
unproductive overhead the namei fileserver incurs on every operation.
Hence only our 1.2.X Solaris servers are still running the inode
fileserver; the 1.4.X ones were switched to namei with, so far (touch
wood), no ill effects.
Now, strictly speaking this is an argument in favour of the link-count
file that the namei fileserver uses instead of the inode reference
counts. I could well imagine the same technique in the inode
fileserver.
We've been running namei on Solaris (a mix of 8, 9, and 10) here at UMBC
for more than a year, with all AFS servers recently moved to namei on
Solaris 10 (x86) with seemingly no ill effects directly associated with
namei. Great performance (well, these /are/ dual Opteron servers), we can
use UFS logging, and best of all, I got it working on Solaris Nevada
with ZFS (well, I created and served a volume on that server with no
problems at least... I just had to force-attach the fileserver to the
ZFS-based /vicepblah mounts. A proof of concept, but it warrants further
testing.)
You guys (this is directed at the core AFS developers) don't expect
Solaris's UFS with AFS/inode to be the end-all for ever and ever, do
you? With production ZFS coming online in a Solaris 10 update as early
as June (s10u2), and with the features it has, you betcha more people
will be looking at using AFS/namei and abandoning the
direct-fs-munging dinosaur that is AFS/inode. Perhaps this means the
storage subsystem of AFS needs some TLC?
/dale
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info