Ger Hobbelt wrote:
It is presumed that every SSL_write() requires a flush (at TCP level this
mechanism is called a "Push").  This basically means the data needs to be
flushed to the reading API at the far end on exactly the byte boundary of
(or more data than) what you sent.  This means you have a guarantee not to
starve the receiving side of data that the sending API has sent/committed.
This is true at both the TCP and SSL levels.

If you think about it the SSL level could not make the guarantee easily if
the lower level did not also provide that guarantee.

^^^^ the guarantee at the lower level is NONAGLE, which is /not/ the
default in TCP stacks as it can result in suboptimal network usage by
transmitting overly small packets on the wire.

Huh... Nagle is to do with how a TCP stack decides when to send a first transmission of data; it comes into play only when the congestion window minus the amount of unacknowledged data is less than 1*MSS.

http://en.wikipedia.org/wiki/Nagle's_algorithm

The congestion window is the window actively in use by the sending side of a connection. The CWnd (as it is often termed) lies between 1*MSS and the negotiated maximum window size (agreed at the start of the connection).

http://en.wikipedia.org/wiki/Congestion_window

The CWnd starts off small and, due to the "Slow Start" algorithm, opens up towards the maximum window size with every successfully transmitted segment of data (that didn't require retransmission).

http://en.wikipedia.org/wiki/Slow-start

This is a simplistic view of Slow Start, since many factors found in all modern stacks, such as VJ fast recovery and SACK, also affect the CWnd.
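
If it helps, here is a toy sketch of that growth. It is not real stack code: it ignores ssthresh, fast recovery and SACK entirely, and the MSS and window limit are just assumed example values.

#include <stdio.h>

/* Toy illustration of slow start: the congestion window grows by one MSS
 * for every ACK received, which roughly doubles it per round trip.
 * Real stacks also track ssthresh, fast recovery, SACK, etc.
 */
int main(void)
{
    const int mss     = 1460;        /* typical Ethernet MSS (assumed) */
    const int max_wnd = 64 * 1024;   /* assumed negotiated window limit */
    int cwnd = mss;                  /* start at one segment */

    for (int rtt = 0; cwnd < max_wnd; rtt++) {
        printf("RTT %d: cwnd = %d bytes (%d segments)\n",
               rtt, cwnd, cwnd / mss);
        /* one ACK per in-flight segment => cwnd grows by one MSS per ACK */
        cwnd += (cwnd / mss) * mss;
        if (cwnd > max_wnd)
            cwnd = max_wnd;
    }
    printf("cwnd capped at %d bytes\n", cwnd);
    return 0;
}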


In short, the Nagle setting is a trade-off between latency and bandwidth (turning Nagle off reduces latency at the cost of bandwidth). It has nothing to do with ensuring a flush of application data so that it appears via SSL_read() at the far end.




So with all that said on what Nagle is, I can tell you that Nagle has nothing to do with the TCP Push flag or its meaning.

Here is a possibly useful reference; look up the "Data Delivery" section on this page:

http://en.wikipedia.org/wiki/Transmission_Control_Protocol


In short, the TCP Push function is to do with flushing the data at the receiving side to the application immediately, so that it becomes available to, say, read().
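
To underline that: there is no portable sockets-level API for the application to set or see the Push flag; the sending stack sets it for you and the receiving stack hands the data to a blocked read() as soon as it arrives. A minimal receiver sketch (assuming fd is an already-connected TCP socket) looks like this:

#include <stdio.h>
#include <unistd.h>

/* Minimal receive loop over an already-connected TCP socket (fd).
 * No PSH handling appears anywhere: the kernel delivers whatever has
 * arrived and wakes the blocked read(); the Push flag is invisible here.
 */
ssize_t drain_socket(int fd)
{
    char buf[4096];
    ssize_t n, total = 0;

    while ((n = read(fd, buf, sizeof buf)) > 0) {
        total += n;
        printf("read() returned %zd bytes\n", n);
    }
    return n < 0 ? n : total;   /* 0 = peer closed, <0 = error */
}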



Anyway, using NONAGLE (telnet is **NO**nagle, default socket using
applications use the default(!) NAGLE) on the TX side should, assuming

I am asserting that setting the TCP_NODELAY socket option is completely unnecessary, and potentially bad advice, as a cure for getting application data sent with SSL_write() flushed through to the receiver via the socket descriptor wakeup mechanism and SSL_read().
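
For reference, this is the setsockopt() call in question; only a sketch, and only worth making when low latency on small writes genuinely matters, not to make SSL_write() data "arrive":

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle on a TCP socket.  Only useful when low latency on small
 * writes matters; it is not needed to get data written with SSL_write()
 * delivered to the peer's SSL_read().
 */
int set_nodelay(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}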



(For when you pay attention to detail: note that the TCP-level NONAGLE
behaviour still is timeout based, which often is okay as the timeout
is relatively small, but if you have an extreme case where messages
must be flushed /immediately/ onto the wire while you're using a TCP
stream (so no timeout whatsoever), then you enter the non-portable
zone of IP stack cajoling.)

Erm... NONAGLE does not have a timeout of its own, so I think it is a little misleading to say it is timeout based. Sending queued data is triggered either by receiving an ACK for a sufficient amount of unacknowledged data, or by the standard retransmission timer that TCP uses (i.e. no ACK was received before the retransmission timer expired, so the TCP stack goes into retransmission mode). Neither of these things requires Nagle, and the timer involved is a required part of any TCP protocol stack, whereas Nagle is optional.

The Nagle logic only comes into play for freshly written/enqueued data (e.g. the application calls write() on the socket): the TCP stack has to decide whether to "send it now" or "queue it up". That is all Nagle is.

In short, that decision is a trade-off between latency and bandwidth:

"sent it now" means we could be sending only 1 byte of new TCP data but with TCP/IP overhead we might have 40 bytes of header to go with it. Not very efficient use of bandwidth, so this is the bandwidth cost, but we get lower latency as we sent that 1 byte right away.

"queue it up" means we don't sent it now, but stick it in the internal kernel write buffer. This data will get looked at for transmission when either an ACK comes back for previously un-acknowledged data or when the standard retransmission timer expires (no ACK comes back within the time limit). The trade here is that by waiting for one of those 2 events we delay the conveyance of the application data until later. This increases the observable latency for that data.


You can see why the main use of turning Nagle off is when the application is a human user driving the network socket interactively. The bandwidth cost is a price worth paying for increased productivity; humans hate latency. But if a robot were using telnet, it could be efficient: it would prepare the whole command in one go and write() it to the socket in a single system call. The robot would not care about having data echoed back in real time, since it never makes mistakes. Yadda yadda.



Sure, using NONAGLE with SSL has its uses, but those uses are when low latency is critical to your application, not when you require a guaranteed flush of application data at the receiver side. That is in fact an unnecessary concern, provided you call SSL_write().
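
In other words, on an established connection the pattern below is all you need. This is only a sketch: it assumes ssl is an already-connected, blocking SSL*, and it skips the SSL_ERROR_WANT_READ/WANT_WRITE handling you would need on non-blocking BIOs.

#include <openssl/ssl.h>
#include <string.h>

/* Sender: one SSL_write() per application message.  The record is
 * encrypted, handed to the kernel and sent; no TCP_NODELAY and no extra
 * "flush" call are required for it to reach the peer's SSL_read().
 */
int send_message(SSL *ssl, const char *msg)
{
    return SSL_write(ssl, msg, (int)strlen(msg));
}

/* Receiver: a blocking SSL_read() returns as soon as a complete record
 * has arrived and been decrypted.
 */
int recv_message(SSL *ssl, char *buf, int buflen)
{
    return SSL_read(ssl, buf, buflen);
}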



Forgive me, I skim-read the bulk of the rest of your reply, as I found it hard to see the relevance and also hard to follow in a number of places.


Darryl

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
