>From: owner-openssl-us...@openssl.org On Behalf Of Dogan Kurt
>Sent: Friday, 29 June, 2012 15:14

>Hi, I am developing a client app with OpenSSL. I use SSL_read 
>and SSL_write in blocking mode. I just can't figure out something 
>about them: if the server sends me 10 KB and I call SSL_read just 
>once, can I assume that I will receive all the data at once? 

>I use a simple recv call with that classic approach; should I 
>use SSL_read the same way?
<snip: typical recv-until-full-or-EOF-or-error>

Maybe.

For plain TCP in the wild (not on a LAN of your own systems) 
this is really needed. Not only can either endpoint fragment, 
but an increasing number and variety of middleboxes can as well. 
Basically, if your application doesn't keep reading until 
it has 'enough' data -- whatever that means for your app: 
it may be everything (as you coded), or it may be less 
with protocols like HTTP, SMTP, FTP, etc. -- 
then it won't work reliably on the Internet.
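
The snipped loop looks roughly like this; a minimal sketch, assuming 
a blocking stream socket and a caller that already knows how many 
bytes it expects (recv_all and the fixed 'want' count are 
illustrative names from me, not code from the thread):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* Keep calling recv() until 'want' bytes have arrived, the peer
 * closes, or an error occurs.  Returns the byte count actually
 * read (less than 'want' only on EOF), or -1 on error. */
static ssize_t recv_all(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, (char *)buf + got, want - got, 0);
        if (n < 0)
            return -1;      /* error: check errno */
        if (n == 0)
            break;          /* peer closed the connection: EOF */
        got += (size_t)n;
    }
    return (ssize_t)got;
}
```

The same shape works with read(2); only the error details differ.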

SSL/TLS formally promises only the same stream semantics 
as TCP. But SSL/TLS is implemented using records, which 
must be (decrypted and) authenticated as a whole, and in my 
experience most implementations (including OpenSSL) send one 
'write' as one record if possible, and deliver one record per 
'read'. Thus only the sender can fragment, not the network; while 
fragmentation can still happen, it is easier to determine and 
sometimes to control. First, SSL/TLS defines a maximum record 
payload of 16K (2^14 bytes), so any write larger than that MUST 
be fragmented. *If* your 10 KB *stays* 10 KB and doesn't grow 
with new users or application features etc., you're okay on 
this front. 
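
As a sanity check on that limit, the minimum number of records a 
payload needs is just a ceiling division by the 16384-byte maximum 
(a throwaway helper of mine, not anything in the OpenSSL API):

```c
#include <stddef.h>

#define TLS_MAX_RECORD 16384u   /* 2^14-byte plaintext limit per record */

/* Minimum number of TLS records needed to carry 'len' bytes. */
static size_t min_records(size_t len)
{
    return (len + TLS_MAX_RECORD - 1) / TLS_MAX_RECORD;
}
```

So the poster's 10 KB *can* arrive as a single record -- but nothing 
obliges the sender to send it that way.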

But even below the maximum, a sender can choose to fragment. 
In particular, Microsoft 'solved' the BEAST attack last fall 
with a change (MS12-006) which splits data off after the first 
byte of each write. This defense is only useful for CBC suites 
before TLS 1.1, but I haven't tested whether the implementation 
limits it that way. Nor do I know whether it applies to servers 
as well as clients, although the browser's semi-shared execution 
environment, which allowed the BEAST attacker to submit adaptive 
chosen plaintext, applies differently, if at all, to a server. 
Around that time I saw discussion among other implementers about 
doing the same 1/(n-1) split, but I don't know whether any did. 
OpenSSL years ago implemented a 0/n approach, which does not 
fragment application data but reportedly does (or at least did) 
cause interop problems with some implementations that mishandle 
empty records.

To be on the safe side, and robust to future changes in your 
application, OpenSSL, or other SSL implementations, do the 
read loop. It's usually easy enough.
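
The loop is the same shape as for plain TCP; the OpenSSL twist is 
that each SSL_read call returns at most one record's worth of data, 
so a sender that splits records (the 16K limit, 1/(n-1), small 
writes) shows up as short reads. Here is a sketch written against a 
reader with SSL_read's return convention (>0 = bytes delivered, 
0 = clean shutdown, <0 = error); the reader_fn indirection is mine 
so the sketch stays self-contained -- on a real connection you would 
pass SSL_read and sort out <=0 returns with SSL_get_error:

```c
#include <stddef.h>

/* Same return convention as SSL_read(): >0 bytes, 0 shutdown, <0 error. */
typedef int (*reader_fn)(void *conn, void *buf, int num);

/* Read until 'want' bytes arrive, the peer shuts down cleanly,
 * or an error occurs.  Returns bytes accumulated, or -1 on error. */
static int read_until(reader_fn rd, void *conn, char *buf, int want)
{
    int got = 0;
    while (got < want) {
        int n = rd(conn, buf + got, want - got);
        if (n < 0)
            return -1;   /* with SSL_read: consult SSL_get_error() here */
        if (n == 0)
            break;       /* clean shutdown (SSL_ERROR_ZERO_RETURN) */
        got += n;
    }
    return got;
}
```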

On the write side, OpenSSL by default (unless you set the 
SSL_MODE_ENABLE_PARTIAL_WRITE option) will always write (and 
return) the full count, even if this takes multiple records, 
unless there is an error -- which includes EWOULDBLOCK if you 
use nonblocking sockets; but if you're not using nonblocking 
I/O for plain TCP, I suspect you won't for SSL either.
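
If you *do* enable SSL_MODE_ENABLE_PARTIAL_WRITE (or move to 
nonblocking I/O), SSL_write may return having accepted only part of 
your buffer, and you need the mirror-image loop. Another sketch, 
against a writer with SSL_write's convention (>0 = bytes accepted, 
<=0 = error); writer_fn is again my stand-in so this stays 
self-contained:

```c
/* Same return convention as SSL_write(): >0 bytes accepted, <=0 error. */
typedef int (*writer_fn)(void *conn, const void *buf, int num);

/* Keep writing until all 'len' bytes are accepted or an error occurs.
 * Returns 'len' on success, -1 on error. */
static int write_all(writer_fn wr, void *conn, const char *buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int n = wr(conn, buf + sent, len - sent);
        if (n <= 0)
            return -1;   /* with SSL_write: consult SSL_get_error() */
        sent += n;
    }
    return sent;
}
```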


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
