On Apr 13, 2010, at 10:12 AM, shruti jain wrote:
> The project aims to develop a system that would use collaborative caching 
> techniques to improve read access in OpenAFS. This project is based on two 
> observations.
> 
> First, in a cluster environment, a large number of clients work on the same 
> datasets, i.e. the data a given client node needs is the same data that many 
> other nodes on the network need. Currently, each client contacts the server 
> individually to fetch that data, which increases the load on the server 
> unnecessarily. If the file is very large, the problem is greatly magnified.
> 
> The second observation is that local bandwidth is usually high, often in the 
> Gbps range. In a cluster, many clients share the same location and thus have 
> fast interconnects between them, while the server might be reachable only 
> over a slow network link. In this situation, fetching data from another 
> client would be much faster than fetching it from the server itself.
> 
Just one guy's opinion, but I like this. The more I think about it, the more I 
like it.

One long-term benefit of this is in clusters. We've seen a number of instances 
where folks have attempted to use AFS as if it were a clustered file system, 
keeping various lockfiles, etc. in the fs. It beat the hell out of our AFS 
servers when it happened. If a fs could delegate responsibility for such 
lockfiles to a local node, it'd make that sort of thing feasible and fast.
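
Just to make the read path concrete, here's a rough sketch of what a 
peer-first fetch in the client could look like. Every name in it 
(peer_has_chunk, fetch_from_peer, fetch_from_server, fetch_chunk) is made up 
for illustration; none of this is an actual OpenAFS cache-manager interface, 
and consistency (callbacks, invalidating peer copies) is ignored entirely.

#include <stdio.h>

#define NPEERS 3

/* Stub: in a real client this would ask a nearby node whether it holds
 * a valid cached copy of the requested chunk. */
static int peer_has_chunk(int peer, const char *fid, int chunk)
{
    (void)fid;
    return peer == 1 && chunk == 0;   /* pretend peer 1 caches chunk 0 */
}

/* Stub: copy the data over the fast local interconnect from a peer. */
static int fetch_from_peer(int peer, const char *fid, int chunk,
                           char *buf, size_t len)
{
    snprintf(buf, len, "chunk %d of %s from peer %d", chunk, fid, peer);
    return 0;
}

/* Stub: fall back to the file server over the (possibly slow) WAN link. */
static int fetch_from_server(const char *fid, int chunk, char *buf, size_t len)
{
    snprintf(buf, len, "chunk %d of %s from server", chunk, fid);
    return 0;
}

/* Peer-first read path: try every known peer before touching the server. */
static int fetch_chunk(const char *fid, int chunk, char *buf, size_t len)
{
    int peer;

    for (peer = 0; peer < NPEERS; peer++) {
        if (peer_has_chunk(peer, fid, chunk))
            return fetch_from_peer(peer, fid, chunk, buf, len);
    }
    return fetch_from_server(fid, chunk, buf, len);
}

int main(void)
{
    char buf[128];

    fetch_chunk("1.2.3", 0, buf, sizeof(buf));   /* served by a peer */
    puts(buf);
    fetch_chunk("1.2.3", 1, buf, sizeof(buf));   /* falls back to the server */
    puts(buf);
    return 0;
}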

Steve
