> > kernel) I cannot write a file larger than approximately 2GB in size
> > to my AFS volumes, even from the fileserver itself. The release notes
>
> Build it from source and use --enable-largefile-fileserver
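For reference, a source build with that flag might look like the sketch below (the tarball name and version are illustrative; the configure flag is the one named above):

```shell
# Unpack the OpenAFS source (version shown is illustrative)
tar xzf openafs-1.4.0-src.tar.gz
cd openafs-1.4.0

# Enable >2GB file support in the fileserver at configure time
./configure --enable-largefile-fileserver
make
make install
```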
This is odd: I have 1.3.81 and I'm quite able to write >2 GiB files on the
AFS volume. I do not seem to be able to read them, though. Any process
trying to access the parts of a file beyond 2 GB hangs forever and cannot
even be killed (SIGKILL). Which one is at fault here, server or client?
(Everything runs on Linux/XFS, except the client cache, which is on ext2.)
I also have one 1.4.0 server. What happens if I put a large file on the
1.4.0 server and try to access it from 1.3.81 clients? What if I replicate
the volume to 1.3.81 fileservers? Should I force all fileservers to run
the same version?
Cheers,
Juha
--
-----------------------------------------------
| Juha Jäykkä, [EMAIL PROTECTED] |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
