Robert Banz wrote:
AFS can't really cause "SAN issues" in that it's just another
application using your filesystem. In some cases it can be quite a
heavy user of that filesystem, but since it's only interacting through
the fs, it's not going to know anything about your underlying storage
fabric, or have any way of targeting it for any more badness than any
other filesystem user.
One of the big differences affecting filesystem I/O load between 1.4.1
and 1.4.6 was the removal of functions that made copious fsync calls.
These were called from fileserver/volserver functions that modified
various in-volume structures, specifically file creations and
deletions, and they led to rather underwhelming performance when doing
vos restores or when deleting or copying large file trees. In many
configurations, each fsync causes the OS to pass a call down to the
underlying storage to verify that all changes have actually reached
*disk*, forcing the storage controller to flush its write cache. Since
this defeats many of the benefits (wrt I/O scheduling) that a cache
gives your storage hardware, it could lead to overloaded storage.
Some storage devices have an option to ignore these cache-flush
requests from hosts, on the assumption that the write cache is
reliable (e.g. battery-backed).
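If your array doesn't offer that, Solaris exposes a roughly equivalent
host-side knob for ZFS. This is only a sketch (assuming Solaris 10 or
later with ZFS), and it's only sane when the array's write cache
really is non-volatile:

* /etc/system: stop ZFS from issuing cache-flush commands to the storage.
* Do NOT set this if the cache can lose data on power failure.
set zfs:zfs_nocacheflush = 1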
Under UFS, I would suggest running in 'logging' mode when using the
namei fileserver on Solaris, as, yes, fsck is rather horrible to run.
Performance on reasonably recent versions of ZFS was quite acceptable
as well.
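For reference, here's a sketch of what a logging-enabled /etc/vfstab
entry might look like; the device names and the /vicepa mount point
are hypothetical:

#device             device to fsck       mount    FS    fsck  mount    mount
#to mount                                point    type  pass  at boot  options
/dev/dsk/c1t0d0s6   /dev/rdsk/c1t0d0s6   /vicepa  ufs   2     yes      logging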
I can confirm Robert's observations. I recently tested OpenAFS 1.4.1
inode vs 1.4.6 namei on Solaris 9 SPARC with a Sun StorEdge 3511
expansion-tray fibre channel device. The difference is staggering with
vos move and the like. We have been using the 1.4.6 namei
configuration on a SAN for a few months now with no issues.
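If you want to run a similar comparison yourself, the sort of thing I
mean by "vos move and the like" is roughly this (the volume, server,
and partition names below are made up; run it on a fileserver, or drop
-localauth and use your own admin tokens):

# Time moving a volume between two fileservers; repeat the same move
# against the 1.4.1 and 1.4.6 servers and compare wall-clock times.
time vos move testvol fs1.example.edu a fs2.example.edu a -localauth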
Jason