This issue has been resolved... or at least I've pinpointed the problem: there's an inconsistency between how my Win32 0.9.7d build and my OS X 0.9.7d build deal with timeouts.

On OS X, the timeout doesn't start timing inactivity until I've started to send data. Until I begin sending, the program waits, blocking at the SSL_read() line. Once I start sending, the timeout timer starts too.

On Win32, the timer starts immediately after the handshake (or thereabouts).

The timeout I'm talking about here is a sockets thing, not an OpenSSL thing, I know — but before I introduced OpenSSL into the mix, the timeout code worked fine on Win32. SSL_read() is supposed to honor socket options that were set with setsockopt(), and to treat those options consistently across platforms, right?

If anyone can suggest how to get timeouts under Winsock2/OpenSSL 0.9.7d to behave the way I've described here, please do.

Thanks.

- Philip


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                        [EMAIL PROTECTED]
Automated List Manager                          [EMAIL PROTECTED]
