On Wed Oct 27 05:12 1999 +1000, Darren Reed wrote:
 > In some email I received from Mark D. Roth, sie wrote:
 > [...]
 > > I agree that best-effort is good enough for the default, especially if
 > > TCP is used for the new protocol.  However, if we're going to provide
 > > a guaranteed/verifiable delivery option, we need to ensure that the
 > > data will not be lost when that option is selected.  This means that
 > > we need to fsync() before sending an ACK.
 > 
 > And that implies some sort of windowing system - fsync() is a huge
 > problem insofar as DoS attacks and large numbers of log entries are
 > concerned.

Right.  I was thinking along the lines of each message having a
sequence number and having the sender and receiver negotiate the
window size.
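To make that concrete, here's a minimal sketch of what I have in mind.
The names and the cumulative-ACK behavior are just my assumptions for
illustration, not a proposed wire format:

```python
# Hypothetical sketch: per-message sequence numbers with a negotiated
# ACK window.  The sender may have at most `window` unACKed messages
# in flight; an ACK is cumulative, covering everything up to that seq.

class LogSender:
    def __init__(self, window):
        self.window = window      # negotiated max unACKed messages
        self.next_seq = 0         # sequence number for the next send
        self.unacked = {}         # seq -> message awaiting ACK

    def can_send(self):
        return len(self.unacked) < self.window

    def send(self, msg):
        if not self.can_send():
            raise RuntimeError("window full; wait for an ACK")
        seq = self.next_seq
        self.unacked[seq] = msg   # would go on the wire with msg
        self.next_seq += 1
        return seq

    def handle_ack(self, acked_seq):
        # Cumulative ACK: everything up to acked_seq is committed.
        for seq in list(self.unacked):
            if seq <= acked_seq:
                del self.unacked[seq]
```

So with a window of 3, the sender stalls after three unACKed messages
and resumes as soon as the receiver ACKs part of the batch.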

 > Next, who's master of how big the difference between received and ACK'd
 > messages ?  If I buffer ~1000 messages before calling fsync(), locally,
 > do I want to have some remote daemon telling me I must commit after every
 > message, making the buffer 1 and killing my performance ?

You're right, this is an obvious problem.  Since only local processes
will require a commit after every message, the daemon only needs to
allow such a small ACK interval for local log sources (loopback
TCP interface or new /dev/log device).
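On the receiver side, the batching would look something like this.
This is only a sketch under my own assumptions (the class name, the
cumulative-ACK-per-batch behavior, and the idea that local sources get
an interval of 1 are illustrative, not specified anywhere):

```python
import os

# Hypothetical receiver-side batching: write every message, but only
# fsync() and ACK once per `ack_interval` messages, so one fsync()
# commits the whole batch.  A local source would get ack_interval=1.

class LogReceiver:
    def __init__(self, logfile, ack_interval):
        self.fd = os.open(logfile,
                          os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
        self.ack_interval = ack_interval  # messages per fsync()/ACK
        self.pending = 0                  # written but not yet synced

    def receive(self, seq, msg):
        os.write(self.fd, (msg + "\n").encode())
        self.pending += 1
        if self.pending >= self.ack_interval:
            os.fsync(self.fd)             # commit the whole batch
            self.pending = 0
            return seq                    # cumulative ACK for the batch
        return None                       # batch still open; no ACK yet
```

With ack_interval=1000 you get one fsync() per ~1000 messages; a
remote peer demanding ack_interval=1 would indeed kill performance,
which is why the server has to stay in control of it.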

 > I think that is something which the server for the connection needs to
 > inform the client about and to which the client can but agree or close.

Agreed.  Furthermore, the server should allow the administrator to
configure the window sizes and ACK intervals it's willing to accept.
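Purely as an illustration, the admin-facing policy could be as simple
as (directive names and syntax invented here, nothing settled):

```
# Hypothetical server policy: per-source-class ACK intervals.
# The server refuses clients that demand an interval below "min".
ack-interval  remote  min 100  default 1000
ack-interval  local   min 1    default 1
```

The point is just that the server states its limits and the client can
but agree or close, as you said.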

-- 
Mark D. Roth <[EMAIL PROTECTED]>
http://www.feep.net/~roth/
