On 4/7/08, Jeffrey Hutzelman <[EMAIL PROTECTED]> wrote:
> --On Monday, April 07, 2008 12:30:06 PM -0400 Matt Benjamin
> <[EMAIL PROTECTED]> wrote:
> [snip]
>
> > As discussed in previous mail, it seems that there's a natural
> > compression in batching notifications to one cache manager, especially
> > to one file, grouped as Tom says, closely in time. I assumed we would
> > wish to support this.
>
> You'd think that, but the problem is that you generally can't. Cache
> consistency demands that when a file's contents are changed, you break
> callbacks to any online clients before the RPC that made the change
> returns. That means you can't queue them up to combine later.
No. That entirely depends on the consistency model you're trying to
support. What you suggest we do would be the equivalent of saying a
microprocessor must wait for a store to hit main memory, and all caches
to be invalidated, before the instruction can retire. Nobody in the
hardware business follows that type of consistency model anymore (because
it does not scale, and is unnecessary once atomics and membars are
supported), and I don't think we should use it either.

The point of supporting locks and synchronous descriptor modes is to
provide a special means of (1) atomically updating data, and (2)
supporting operations equivalent to membars for special cases that
require strict consistency. It is inefficient to require all updates to
follow such strict coherence rules. If some users feel they need strict
consistency (e.g. to support broken applications which use "lock files"),
it should be available as optionally supported behavior. However, the
protocol itself should fully support loosely ordered, asynchronous
consistency models as well.

-Tom

_______________________________________________
AFS3-standardization mailing list
[email protected]
http://michigan-openafs-lists.central.org/mailman/listinfo/afs3-standardization
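A rough, hypothetical sketch of the loosely ordered model described in the
message above, in C for concreteness. It is not AFS or OpenAFS code; every
name in it (note_change, sync_barrier, flush_pending, MAX_PENDING) is
invented for illustration. Ordinary stores queue their callback breaks so
that changes close in time to the same file can be coalesced, while a lock
or explicitly synchronous store acts as the "membar" that forces everything
queued to be delivered before it completes.

#include <stdio.h>

#define MAX_PENDING 16

/* One queued callback break: which file changed, which client holds a promise. */
struct pending_break {
    int fid;
    int client;
};

static struct pending_break queue[MAX_PENDING];
static int npending = 0;

/* Deliver every queued break (stdout stands in for the notification RPCs). */
static void flush_pending(void)
{
    int i;
    for (i = 0; i < npending; i++)
        printf("break callback: fid=%d client=%d\n",
               queue[i].fid, queue[i].client);
    npending = 0;
}

/*
 * Ordinary, loosely ordered write: queue the break so that several changes
 * to the same file, close in time, collapse into a single notification.
 * The store that triggered it may complete before delivery.
 */
static void note_change(int fid, int client)
{
    int i;
    for (i = 0; i < npending; i++)
        if (queue[i].fid == fid && queue[i].client == client)
            return;                 /* already queued; coalesce */
    if (npending == MAX_PENDING)
        flush_pending();            /* queue full; drain it first */
    queue[npending].fid = fid;
    queue[npending].client = client;
    npending++;
}

/*
 * "Membar"-style operation (lock acquire, synchronous store): nothing queued
 * may remain undelivered once this returns, giving strict consistency only
 * to the callers that ask for it.
 */
static void sync_barrier(void)
{
    flush_pending();
}

int main(void)
{
    note_change(7, 1);   /* two quick writes to the same file ...       */
    note_change(7, 1);   /* ... coalesce into one pending notification  */
    note_change(9, 2);
    sync_barrier();      /* strict point: deliver everything queued now */
    return 0;
}

The point of the sketch is only that batching and strict ordering are not
mutually exclusive: the common case queues and coalesces, and the
lock/synchronous path is the explicit barrier, which is the
atomics-plus-membars analogy made in the message above.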
