Hi,

We here at Social Science Research Computing provide a lot of very
large data files to our clients.  Some time ago (long before my time),
it was decided that these would swamp the local AFS caches, and so we
started providing datasets over NFS instead.

These days, disk is cheap, and we may conceivably be able to run very
large AFS caches in order to accommodate these files.  We are also
planning to move user home directories from NFS to AFS, and these too
sometimes contain very large files.

A quick unscientific survey shows user data files maxing out in the
100-200 MB range.  Datasets go up to around 400 MB.

Is it practical to run very large AFS caches - maybe several
gigabytes?  My guess is that a cache that size would avoid the problem
of large data files swamping a cache that also holds program binaries.
Does anyone know how a cache that large affects cache-manager
performance?
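
For concreteness (and please correct me if I've got the knobs wrong),
here's roughly what I have in mind, assuming the usual cacheinfo format
- the numbers are just for illustration.  In /usr/vice/etc/cacheinfo,
where the third field is the cache size in 1-kilobyte blocks:

    /afs:/usr/vice/cache:4194304

or equivalently at afsd startup, perhaps with a larger chunk size for
the big files (the -chunksize argument is a power of two, so 18 means
256 KB chunks):

    afsd -blocks 4194304 -chunksize 18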

Any other advice as we think about these moves?

Thanks in advance,

Dave
 ____________________________________________________________
| Dave Lorand, System Administrator | [EMAIL PROTECTED] |
| Social Science Research Computing | 773-702-3792           |
| University of Chicago             | 773-702-2101 (fax)     |
+-----------------------------------+------------------------+
   PGP key: http://www.src.uchicago.edu/users/davel/key.txt
