I'm implementing code to do OpenSSL handshake/read/write
for some radically different hardware.  These will completely
replace the standard OpenSSL handshake state machine
and most of the API functions at the SSL_METHOD level.

I am used to I/O return codes where
  ( > 0 ) means success
  ( == 0 ) means I/O block but otherwise no problem
  ( < 0 ) means error

So far, I've implemented all my routines to work that way.

According to the manpages for the ssl(3) API functions,
OpenSSL works a bit differently
  ( > 0 ) means success
  ( == 0 ) means error
  ( < 0 ) means error

In both cases the manpage advises calling SSL_get_error()
to find the reason.

What am I missing here?  There's probably a good
reason that a return code of zero doesn't simply mean
that I/O is blocked, but I've pored over the code and I
just can't figure out why.  Perhaps it has something to
do with blocking vs. non-blocking sockets.  Somewhere,
maybe in the manpages or in ekr's book, I read a
suggestion that it was meant to aid application
programmers who weren't used to zero return values.

Can someone help me with the subtleties here?

Also, does this behavior change for blocking sockets
vs. non-blocking sockets?


Tom Biggs
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]