Hello,

I am evaluating AFS (currently in the reading stage) and have a couple of questions regarding the applicability of AFS to environments with large-ish files (some files in our workflows are ca. 250 GB) that I am hoping the community may be able to help with...

First, does AFS impose any limits on the size of the client disk cache? I am mostly concerned with x86_64 Linux clients, where I see no mention of limits in the man pages. I did find mention of limits in some (older?) references to the Windows client on the web, but it was not clear whether those were AFS-imposed limits or just local policy. I'd be looking at a 1 to 2 TB cache on Linux clients -- is that supported and consistent with best practices?
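For reference, my understanding is that the cache size is set via the third field of /usr/vice/etc/cacheinfo, in 1-kilobyte blocks, so a ~2 TB cache would look roughly like the following (paths and the exact value are illustrative, not something I have validated):

```
# /usr/vice/etc/cacheinfo -- mountpoint:cache-directory:size-in-1K-blocks
# 2 TB is on the order of 2,000,000,000 1K blocks
/afs:/usr/vice/cache:2000000000
```

I'd be glad to hear if a value of that magnitude is known to work (or known to break) in practice.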

Second, is there any description of the chunking behavior that I can read up on? The afsd man page states that "AFS uses partial file transfer", but it is not clear in which direction(s) this applies, or whether it applies only at the RPC level. If I run the UNIX head command on a 250 GB file, will the whole file be transferred to the client's cache (this would take several minutes in our environment), or just the first chunk? If the whole file is transferred, will head be able to return the first ten lines as soon as the first chunk arrives in the client's cache, or will the read block until the whole file has been transferred?
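For context on that second question, the experiment I'd eventually run is roughly the following, shown here against a local stand-in file rather than a real AFS path (file name and sizes are illustrative only; on a real client I'd point this at a large file in /afs and also compare `fs getcacheparms` before and after):

```shell
# Create a 256 MiB stand-in file, then time reading only its start.
# On AFS, the interesting comparison is this timing versus a full read,
# and how much of the file lands in the client cache afterwards.
dd if=/dev/zero of=/tmp/probe.dat bs=1M count=256 2>/dev/null
time head -c 4096 /tmp/probe.dat > /dev/null   # read only the first 4 KiB
```

If only the first chunk is fetched, the `head` timing should be essentially independent of the file size.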

Thanks in advance for your help and consideration.
--Jason

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info