James Carlson wrote:
> I don't see how you can rule those out.  And the TCP RST scenario
> (causing intentional buffer flush) seems particularly devastating.

Is flushing input buffers on RST Solaris-specific behavior? I'm not sure, but I don't think Linux does it.

I don't care about client faults (such as segmentation faults), because people are used to losing some data when they see an application crash. What annoys them is when everything seems to be working fine, yet data is lost. That's why I assume that acked data will be delivered. You've worried me with this RST case, but are you sure that input buffers are flushed? I can understand that output buffers are no longer needed, but input buffers can still contain usable data.

> The last data packet can't be "lost."  It will be resent.

I have bad experience with a firewall that dropped connections after 12 seconds in the half-closed state. The link between the firewall and the client was congested, so the last data packet, carrying the FIN flag, never reached the client. The server received an RST in response to its retransmission, but the client did not, because it hadn't sent any packet to the server - it was waiting for data. So the server closed the connection - it got an RST on the FIN retransmission - but the client hung (until it timed out) waiting for data.
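For reference, the half-closed pattern I'm describing looks roughly like this (a sketch only; the 30-second receive timeout is an arbitrary value picked for illustration):

#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Send FIN but keep reading until the peer closes.  A middlebox that
 * drops half-closed connections breaks the final read, which is the
 * failure described above.  SO_RCVTIMEO bounds the wait. */
static void half_close(int fd)
{
    char buf[512];
    struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };

    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    shutdown(fd, SHUT_WR);                  /* send FIN, keep read side open */
    while (read(fd, buf, sizeof(buf)) > 0)  /* drain until peer's FIN (EOF) */
        ;
    close(fd);
}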

So:
 - I can't use lingering (SO_LINGER), because close() would block (single I/O thread)
 - I can't use shutdown, because of firewalls that mishandle half-closed connections and the resulting client timeouts
 - I can use TIOCOUTQ to check whether all data has been acked (sketch below)
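A sketch of the TIOCOUTQ approach (Linux-specific, as discussed below; the 100 ms poll interval and 5-second deadline are arbitrary values for illustration):

#include <unistd.h>
#include <sys/ioctl.h>

/* Poll the send queue until everything we wrote has been acked, then
 * close.  On Linux, TIOCOUTQ reports the number of bytes still queued
 * in the send buffer (not yet acked by the peer). */
static int drain_then_close(int fd)
{
    int waited_ms = 0;
    for (;;) {
        int pending = 0;
        if (ioctl(fd, TIOCOUTQ, &pending) < 0)
            break;              /* ioctl unsupported here: just close */
        if (pending == 0)
            break;              /* all sent data acked by the peer */
        if (waited_ms >= 5000)
            break;              /* give up instead of spinning forever */
        usleep(100 * 1000);
        waited_ms += 100;
    }
    return close(fd);
}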

What would you choose?
In this case a nonblocking close would be best (it could return -1 with errno=EAGAIN), but would it be any more portable?
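Something like this, say - though it is exactly the portability question: with SO_LINGER set, a blocking socket may block in close() until the data is acked or the timeout expires, and what a nonblocking socket does there is implementation-dependent:

#include <sys/socket.h>
#include <unistd.h>

/* Ask the stack to wait up to "seconds" for queued data to be acked
 * before completing the close.  On a blocking socket close() may block
 * that long; behavior on a nonblocking socket is not portable. */
static int linger_close(int fd, int seconds)
{
    struct linger lg = { .l_onoff = 1, .l_linger = seconds };

    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    return close(fd);
}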

> Does TIOCOUTQ on a socket work anywhere _but_ Linux?  I think you've
> created a requirement that essentially rules out any sort of
> portability.

I'm not saying that it is portable. It is just a nice feature that helps in some situations. On the other hand, there is the BrandZ community - if they want to run all userspace Linux apps, they'll have to provide such a feature.