Hi forum, I would appreciate comments on whether what I am trying to do is feasible.
Problem: I am using a recent version of OpenSSL to encrypt a large amount of data transferred between a client and a server. I use the BIO interface to abstract the transport, so that different protocols (TCP, UDP, ...) can be used transparently underneath SSL. That works fine, but I found that the transfer speed with encryption is about 5 times slower than an unencrypted connection, using TCP on a 10 Gbps network. The underlying Linux OS is tuned so that the TCP send buffer is 4 MB.

To improve throughput, I implemented an "application data accumulator" in the writer/sender callback function of the BIO object: instead of sending each SSL-encrypted record (about 16 KB) out right away, I accumulate records in a buffer, and only send the buffer's contents when the buffer is nearly full, or when no SSL_write() has been called for a while (to push out the last bit of data).

The problem: the first time accumulated data is sent, the receiving side's SSL complains "Bad record mac" and resets the TCP connection. Without in-depth knowledge of SSL/TLS, I wonder whether what I am trying to do breaks some SSL data integrity check, and whether it is possible at all.

Thank you very much