David Schwartz wrote:
That is not only not implemented by any known implementation but quite
literally impossible. Please tell me what implementation guarantees that a
TCP 'write' after a 'select' hit for writability will not block.
This is no use: you're asking me for references and I'm asking you for
references. I don't really have the time to entertain this issue as
much as would be necessary, so I'm handing the job over to you: point
out the code path within a published kernel that does what you describe.
So all those implementations that give a 'readability' hit on a listening
socket and then block in 'accept' if there's a subsequent error on the
connection don't exist?
Okay, granted there is an accept() concern on some platforms: if an
"embryonic connection" is reset, some operating systems may kill the
socket between the select() and the accept() call, while others will
persist in providing the socket to the application, only to return 0
from read() (end-of-stream) soon after.
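For what it's worth, the usual defence here, sketched below for a
POSIX-style system rather than lifted from any particular implementation,
is to keep the listening socket non-blocking; then an embryonic connection
that vanishes between select() and accept() shows up as
EWOULDBLOCK/ECONNABORTED rather than a blocked process.

    /* Sketch: defensive accept() after select(), assuming a POSIX system.
     * The listening socket is non-blocking, so a connection that was reset
     * between select() and accept() cannot make accept() block. */
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    int accept_nonblocking(int listen_fd)
    {
        /* Make sure the listening socket is non-blocking. */
        int flags = fcntl(listen_fd, F_GETFL, 0);
        if (flags >= 0)
            (void)fcntl(listen_fd, F_SETFL, flags | O_NONBLOCK);

        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0)
                return fd;                /* got a connection */
            if (errno == EINTR)
                continue;                 /* interrupted, retry */
            if (errno == EWOULDBLOCK || errno == EAGAIN ||
                errno == ECONNABORTED)
                return -1;                /* the hit vanished; go back to select() */
            return -1;                    /* real error */
        }
    }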
I've not been so concerned with server-specific API calls; it's really
socket(), bind(), connect(), read(), write() and close() that most people
care about, in particular just the read() and write() parts.
Umm, Linux does this. That's what caused the inetd denial-of-service attack.
It's precisely people following the bad advice you are giving that created
this problem.
Please cite your references, either discussion of the issue or direct
references to the code within the kernel that causes this behavior. Do
the affected kernels still work this way in their current versions today?
Maybe he is confusing tail
queue dropping with head queue dropping, where the head is the front of
the queue, which the application sees next, and the tail is the end of the
kernel's buffer space.
It doesn't matter what you drop.
Well it does, as the disappearing readability condition only occurs when
you:
* Drop all data for SOCK_DGRAM (so there is nothing left)
* Drop the head data for SOCK_STREAM (so the next byte in the stream
is no longer there)
Tail dropping is a normal, permitted occurrence: it happens when the
kernel is overloaded with data. Tail dropping does not affect
the select() event mechanism and can be the result of the network/kernel
queuing up data faster than the application is pulling it from the socket.
For TCP it is not application data that is tail dropped, but TCP
segments; i.e. once data has been ACKed, the kernel _MUST_ retain it
because the sending end is never going to resend it, but by dropping
unacknowledged TCP segments you make the sending end take responsibility
for retransmitting the data you dropped. The kernel is free to discard
entire TCP segments when the read memory quota is exceeded for a given
socket, bearing in mind that TCP is only designed for < 3% packet loss,
so this should not be habit-forming.
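For reference, the per-socket read memory quota I'm referring to
corresponds to the receive buffer size an application can inspect or
adjust through SO_RCVBUF; a minimal sketch (illustrative only, the
function name is mine):

    /* Sketch: inspecting and adjusting the per-socket receive quota
     * (SO_RCVBUF) that bounds how much the kernel will queue for this
     * socket before it starts discarding incoming segments. */
    #include <stdio.h>
    #include <sys/socket.h>

    void show_and_bump_rcvbuf(int fd)
    {
        int size = 0;
        socklen_t len = sizeof(size);

        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
            printf("current SO_RCVBUF: %d bytes\n", size);

        size = 256 * 1024;   /* example value; the kernel may clamp or adjust it */
        (void)setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
    }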
For UDP, the datagrams are queued on the socket; when there is too much
data queued it may start tail dropping new datagrams as they arrive.
The kernel may also prune the queue, but it will never prune all packets
such that there is no queue whatsoever. Because there is always a
reservation of at least one packet, the select() event notification
won't be broken.
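Independently of whether any kernel actually breaks that notification,
the cheap defence on the application side is a non-blocking socket, so a
readability hit that has evaporated for whatever reason turns into
EWOULDBLOCK from recvfrom() rather than a blocked process. A sketch of
that pattern (the helper name is mine):

    /* Sketch: defensive UDP read after select(), assuming a non-blocking
     * socket.  If the datagram that triggered readability is gone by the
     * time we read, recvfrom() fails with EWOULDBLOCK/EAGAIN and we simply
     * go back to select() instead of blocking. */
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    ssize_t read_one_datagram(int fd, void *buf, size_t buflen)
    {
        for (;;) {
            ssize_t n = recvfrom(fd, buf, buflen, 0, NULL, NULL);
            if (n >= 0)
                return n;                 /* a datagram (possibly empty) */
            if (errno == EINTR)
                continue;                 /* interrupted, retry */
            return -1;                    /* EWOULDBLOCK/EAGAIN: nothing there
                                           * after all; anything else is a
                                           * real error */
        }
    }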
The same is true for writability: once the low-water mark triggers
clearance it will persist, as the low-water mark guarantees space
for at least 1 byte (which is enough for write() not to block).
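To illustrate the pattern being argued over, here is a sketch (not taken
from any particular implementation, the helper name is mine) of writing
after a select() writability hit on a non-blocking socket; if the hit
were ever stale, it would show up as EWOULDBLOCK rather than a block:

    /* Sketch: writing after a select() writability hit on a non-blocking
     * socket.  A short write just means "wait for the next writability
     * indication"; the caller tracks how much has been sent so far. */
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    ssize_t write_some(int fd, const void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = send(fd, buf, len, 0);
        } while (n < 0 && errno == EINTR);    /* retry if interrupted */
        /* n >= 0: bytes written (possibly fewer than len).
         * n < 0 && errno == EWOULDBLOCK/EAGAIN: the buffer is full again;
         * go back to select().  Any other errno is a real error. */
        return n;
    }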
The challenge here is for someone to point, in the published source of a
BSD socket implementation, to code that will remove all packets from the
socket queue after having already advised a readability indication to
the application.
Huh? You can't get a guarantee out of the lack of documentation.
No, but you can out of a proven implementation. That is what I'm asking
for a citation on.
The lack of a specific point in the specifications does not mean that
implementations don't do things. It can just mean that the broad scope
the specification was to encompass needed to be vague, to allow for
things like socket(SOCK_MY_BESPOKE_PSEUDO_STREAM, 0, 0), which was later
also an experimental option for vendors.
The BSD socket mechanism is generic, supporting many protocols, but for
the purpose of this discussion it is TCP that is of interest to most
uses of SSL.
Unfortunately, on Windows, you have to be able to talk to multiple socket
implementations. There is no guarantee that a Windows application only has
to deal with the Windows native socket service provider. Firewalls, for
example, often interpose their own LSP. These LSPs have been known to have
slightly different semantics from the Windows native ones. There are known
firewalls that decide which UDP packets to drop at receive time, for
example.
Please cite your references. Surely, given that Microsoft wrote their
own specification for their own socket layer, and a citation has already
been provided in this thread making it crystal clear, anything
different might be considered a vendor-specific bug. This would depend
on whether the cited statement was lifted from the Microsoft-specific
socket documentation or from the device driver / SDK documentation.
Darryl