On Fri, May 31, 2002 at 09:12:03AM -0400, Trevor Todd wrote:
> I've got an interesting one. I have a simple server/client application set I'm
> working on, and it seems that during a read block on SSL_read, if the client's
> application has gone away, I run into a seg fault on that particular read call.
> When I check the core file, the fault seems to be in a lower function called
> SSL_readn. I was just wondering if there is a setting that I'm missing, or a
> default timeout value that I can set so that SSL_read returns successfully.

I didn't find an SSL_readn in the source. Stack traces, however, may get
corrupted, e.g. by buffer overruns.
Losing the connection is a standard problem for all network applications,
and that includes OpenSSL-based applications. If the connection is closed
prematurely, SSL_read returns a value indicating the error condition, so
that you can use SSL_get_error to analyze the problem.
My guess is rather that something is wrong in your application.

Best regards,
        Lutz
PS. Please wrap your lines.
-- 
Lutz Jaenicke                             [EMAIL PROTECTED]
http://www.aet.TU-Cottbus.DE/personen/jaenicke/
BTU Cottbus, Allgemeine Elektrotechnik
Universitaetsplatz 3-4, D-03044 Cottbus
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]