David Schwartz wrote:
> Honestly I think a better job of mimicking TCP's semantics is preferable.
> OpenSSL should be able to handle its own send buffer flushing and receive
> buffer filling, in the background without me having to ask it, just like TCP
> does.
>
> This can easily be done with OpenSSL creating its own service threads. For
> single-threaded builds, it can also be done, it's just not as elegant.

This implies at least one other thread of execution to manage it; not all platforms or applications would want (or could support) that, and embedded devices might not have thread support at all.

However, it should be possible to build this model on top of the raw building blocks outlined in my previous email, so to me it's a moot point: the building-block primitives should come first, leaving the application programmer free to decide which paradigm he wants.


> Another example is OpenSSL prohibiting a concurrent SSL_read and SSL_write
> on the same connection. Yes, this is consistent with what other libraries
> do, but it's not what TCP does.

Ah, but there is a stalling operation needing a round trip that causes it, and TCP has no equivalent. You have to accept that this is how SSL works. You can paper over it if you implement your kernel-socket-like paradigm on top, but at the end of the day you may end up having to make another copy of application data into a buffer to provide that, and some people won't want yet another copy, since it will show up in their performance figures.
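To make that stall concrete: during a renegotiation, SSL_write on a non-blocking socket can return SSL_ERROR_WANT_READ, so the write path suddenly depends on data arriving on the read path. A minimal sketch of the retry classification an application ends up layering on top (the constants mirror the values in OpenSSL's <openssl/ssl.h>; the enum and helper function are my own illustrative names, not OpenSSL API):

```c
/* Values match OpenSSL's <openssl/ssl.h>; defined here only to keep
 * the sketch self-contained. */
#define SSL_ERROR_NONE        0
#define SSL_ERROR_WANT_READ   2
#define SSL_ERROR_WANT_WRITE  3

enum io_action { IO_DONE, IO_RETRY_ON_READABLE, IO_RETRY_ON_WRITABLE, IO_FAIL };

/* Decide what to wait for after SSL_read/SSL_write returns <= 0 and
 * SSL_get_error() has been consulted.  Note that a *write* can come
 * back WANT_READ mid-renegotiation, which is why read and write on
 * one SSL* cannot proceed independently the way they do on a TCP fd. */
enum io_action next_action(int ssl_error)
{
    switch (ssl_error) {
    case SSL_ERROR_NONE:       return IO_DONE;
    case SSL_ERROR_WANT_READ:  return IO_RETRY_ON_READABLE;
    case SSL_ERROR_WANT_WRITE: return IO_RETRY_ON_WRITABLE;
    default:                   return IO_FAIL;
    }
}
```

The caller must retry the same SSL_write with the same arguments once the awaited socket event fires, which is exactly the state a kernel-socket-like wrapper would have to buffer and track on the application's behalf.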


> I think a library should be capable of making SSL look as much like TCP as
> possible and that this should probably be the default behavior.

I agree the compatible model should exist; the jury is out on the default behavior. Your kernel-socket-like paradigm should be just one usage case, not the only one. I'm not in favor of insisting on extra complexity when there are raw primitives to be extracted and reused from within the many I/O models that exist.

I'm currently taking a serious look at Windows I/O completion ports (from your pointer a week or so ago). My Windows stuff is client / simple-server code, but I already handle OVERLAPPED I/O, so the jump isn't that far.

One interesting performance question is whether there is any margin to be gained from having application space hand the data buffer (as with OVERLAPPED) down to the next layer, where the buffer has a known amount of headroom at the start, tailroom at the end, and all the alignment issues worked out. The SSL layer could then build the packet around it and perform the transform over the top of the plaintext (not directly over the top, but at least a cipher block size away). Are there benefits from cache locality and reduced memory thrashing that would show up in performance figures?
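A rough sketch of such a buffer, with explicit headroom and tailroom so a lower layer (e.g. an SSL record layer) can add its header and MAC/padding in place around the application's plaintext without another copy. All names here are illustrative, not an existing OpenSSL API:

```c
#include <stdlib.h>

struct iobuf {
    unsigned char *base;   /* start of allocation   */
    size_t cap;            /* total allocated bytes */
    unsigned char *data;   /* start of payload      */
    size_t len;            /* payload length        */
};

/* Allocate with reserved space before and after the payload region. */
struct iobuf *iobuf_alloc(size_t headroom, size_t payload, size_t tailroom)
{
    struct iobuf *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->cap = headroom + payload + tailroom;
    b->base = malloc(b->cap);
    if (!b->base) { free(b); return NULL; }
    b->data = b->base + headroom;
    b->len = payload;
    return b;
}

/* Prepend n bytes of header in place; fails if headroom is exhausted. */
unsigned char *iobuf_push(struct iobuf *b, size_t n)
{
    if ((size_t)(b->data - b->base) < n) return NULL;
    b->data -= n;
    b->len  += n;
    return b->data;
}

/* Append n bytes of trailer (e.g. MAC, padding) in place. */
unsigned char *iobuf_put(struct iobuf *b, size_t n)
{
    unsigned char *end = b->data + b->len;
    if ((size_t)(end - b->base) + n > b->cap) return NULL;
    b->len += n;
    return end;
}
```

This is the same shape the kernel uses for sk_buff: the layer that allocates decides the headroom, and each layer below claims its framing from the reserved space instead of copying.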

I wonder if FIPS has anything to say about this.

My point of view here favors recursively stackable I/O layers that avoid copying data when it's not necessary. Most I/O paradigms are the same; it's just the lifecycle of the data and the timing of the events that differ.
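The stacking idea can be sketched as a chain of layers that hand the same buffer down, transforming in place where needed. This is a toy model (OpenSSL's BIO chains have roughly this shape, but every name below is illustrative):

```c
#include <stddef.h>

/* Each layer owns only a write hook and a pointer to the layer below;
 * the buffer pointer is handed down unchanged, so nothing is copied. */
struct layer {
    int (*write)(struct layer *self, unsigned char *buf, size_t len);
    struct layer *below;
};

/* Bottom layer: pretend the bytes hit the wire. */
static int sink_write(struct layer *self, unsigned char *buf, size_t len)
{
    (void)self; (void)buf;
    return (int)len;
}

/* A toy in-place "cipher" layer: transforms the caller's buffer and
 * forwards the very same pointer to the layer below. */
static int xor_write(struct layer *self, unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= 0x5a;
    return self->below->write(self->below, buf, len);
}
```

Adding another transform is just another struct layer spliced into the chain; the data's lifecycle is identical at every level, and only the event timing differs between paradigms.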


Darryl
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           [EMAIL PROTECTED]