Hi, I'm currently (slowly) deploying OpenAFS on our server farm. For the time being, my working hypothesis is that the crashes I reported last week were due to a broken libafs build: I installed the same kernel version on all machines, made sure OpenAFS was compiled against it, and have seen no problems so far.

We have a sort of luxury problem in that all our webservers (AFS clients) have 36GB SCSI drives, which of course aren't really used when everything but the base Linux install comes from the file server. I'm planning to move these drives to the fileservers later on (when they fill up) and replace them with 9GB models, but for the time being that's the size I'm working with on the clients.
At the moment, because I run reiserfs root filesystems and don't feel like repartitioning until all lights for OpenAFS are green, I have put a 512MB cache in a loopback-mounted ext2 filesystem. That works fine, but a question about best performance: is it useful to have, say, an 18GB client cache? If so, do the default cache parameters suffice at that size, or is some tuning necessary? Website data and executables are relatively static, so I am hoping to offload the servers with big client caches, to the point where they can continue to double as database servers while the AFS-provided filesystem grows and grows...
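For reference, the loopback cache setup is nothing fancy; it amounts to something along these lines (paths and sizes are illustrative, and the afsd numbers for a bigger cache are guesses on my part, not tested values):

    # create and mount a file-backed ext2 filesystem to hold the AFS cache
    dd if=/dev/zero of=/var/afscache.img bs=1M count=600
    mke2fs -F /var/afscache.img
    mount -o loop /var/afscache.img /usr/vice/cache

    # /usr/vice/etc/cacheinfo -- mountpoint:cachedir:cache size in 1K blocks,
    # kept a bit below the real free space on the loopback filesystem
    /afs:/usr/vice/cache:512000

    # for an 18GB cache I assume afsd's other tunables (number of cache files,
    # dcache/stat entries, chunk size) would need a look as well, e.g.:
    #   afsd -blocks 18000000 -files 50000 -dcache 4000 -stat 8000 -chunksize 18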

--
Cees de Groot               http://www.cdegroot.com  <[EMAIL PROTECTED]>
GnuPG 1024D/E0989E8B  0016 F679 F38D 5946 4ECD  1986 F303 937F E098 9E8B