Adam Megacz wrote:
> Sorry to keep nagging you on this issue...
> 
> Robert Banz <[EMAIL PROTECTED]> writes:
>>> i wouldn't expect corruption issues here, in spite of the question
>>> of whether *performance* sucks because you're imposing another
>>> network round trip (minimum) in an already-network protocol
> 
>> No corruption problems (at least in a maildir-like environment), but
>> it's mostly stuff caused by callback issues now.  As in too many of
>> them. ;)
> 
> Specifically, is it that the fileserver gets bogged down by having to
> keep track of too many outstanding callbacks?
> 
>   - a

The problem is resource contention.  If you have multiple servers
updating the contents of the same resource (a file or directory), then
the callbacks held by the competing servers must be revoked with each
change.  The end result is that the clients starve each other:

  A gets a callback and reads data

  B gets a callback and reads data

  C gets a callback and reads data

  A makes a change; callbacks are broken on B and C;
  data version is incremented

  B wants to make a change; gets callback; must read data;
  makes change; callback is broken on A; data version is incremented

  A wants to read data; gets callback; must read data

  C wants to make a change; gets callback; must read data;
  makes change; callback is broken on A and B;
  data version is incremented

etc.  The more servers you add the worse the problem gets.
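The churn in the sequence above can be sketched with a toy model.  This is
a minimal simulation of AFS-style callback semantics (not actual OpenAFS
code; the class and function names are made up for illustration): a read
grants the client a callback promise, and a write breaks the callbacks held
by every other client and increments the data version.

```python
class FileServer:
    """Toy model: tracks callback promises and the data version for one file."""

    def __init__(self):
        self.data_version = 0
        self.callbacks = set()   # clients currently holding a callback
        self.breaks_sent = 0     # total callback-break notifications issued

    def read(self, client):
        # Reading (re)establishes this client's callback.
        self.callbacks.add(client)

    def write(self, client):
        # A writer must hold current data first.
        self.read(client)
        # Break callbacks on every *other* client holding one.
        others = self.callbacks - {client}
        self.breaks_sent += len(others)
        self.callbacks -= others
        self.data_version += 1

def churn(num_clients, rounds):
    """Each round: every client re-reads, then every client writes once."""
    server = FileServer()
    clients = ["client-%d" % i for i in range(num_clients)]
    for _ in range(rounds):
        for c in clients:
            server.read(c)    # re-fetch after losing the callback
        for c in clients:
            server.write(c)   # each write invalidates everyone else
    return server

if __name__ == "__main__":
    for n in (2, 4, 8):
        s = churn(n, 10)
        print(n, "clients:", s.breaks_sent, "callback breaks,",
              s.data_version, "data versions")
```

Every write forces the other holders to re-read before their next operation,
so each added client both suffers more breaks and inflicts more of them,
which is the starvation pattern described above.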

Jeffrey Altman
Secure Endpoints Inc.
