Hi All, I'm new to AFS, so I'm looking primarily for advice / direction / tales of woe.
Background: I'm hoping to use AFS to improve read times from a CVS repository. We currently have the repository set up at one location and access it through a server via the CVS pserver (its own TCP/IP client-server protocol). We have a bunch of sites worldwide, all speaking directly to that one server. I'm hoping to set up additional servers at the remote locations to improve local access times to the CVS data.

At first blush, the simplest solution (in terms of changing what we do) would be to have the CVS repository data live in an AFS directory and have all CVS servers access it via the same file path. So all the CVS servers except one would be AFS clients. The CVS repository data would be in the AFS cache of each CVS server. A write through one server would invalidate the cached copies on all the other CVS servers, but read operations that hit the cache would be entirely local.

Questions:

1. Has anyone out there tried something like this? If so, what works or doesn't?

2. Is there a concurrency race condition lurking in here? CVS manages concurrency by creating a lock directory. Is this operation synchronous on AFS, such that a successful mkdir at any AFS client guarantees no other AFS client could have created the same directory?

Thanks for reading this far.

-- Emil

_______________________________________________
OpenAFS-info mailing list
[EMAIL PROTECTED]
https://lists.openafs.org/mailman/listinfo/openafs-info
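For context, the locking primitive question 2 asks about boils down to the pattern below: relying on mkdir being atomic, so that exactly one concurrent caller succeeds and everyone else sees EEXIST. This is a minimal Python sketch of the pattern only (the function names are illustrative, not CVS's actual code); whether AFS serializes the directory creation across clients strongly enough for this to be safe is exactly the open question.

```python
import os

def try_lock(lock_dir):
    """Try to take a CVS-style lock by creating a directory.

    On a POSIX filesystem mkdir is atomic: at most one caller
    creates the directory; every other concurrent caller fails
    with EEXIST. Whether AFS gives the same cross-client
    guarantee is the question posed above.
    """
    try:
        os.mkdir(lock_dir)   # succeeds for at most one process
        return True
    except FileExistsError:  # someone else holds the lock
        return False

def unlock(lock_dir):
    """Release the lock by removing the directory."""
    os.rmdir(lock_dir)
```

On a single POSIX host this is safe; the scheme only carries over to the proposed setup if a successful mkdir on one AFS client is guaranteed to be visible to, and exclusive against, mkdir attempts on all other clients.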
