Even reducing the thread stack size didn't help. Thread creation as such
is not the problem: I create about 1,000 threads and delay the
SSL_connect in each thread by about 10 seconds. Once the delay expires
and the clients start connecting to the server, the segfault occurs.
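
A stripped-down sketch of what the test does (connect_tcp() here just
stands in for the plain TCP connect, and the usual OpenSSL locking-callback
setup with CRYPTO_set_locking_callback() is omitted):

#include <pthread.h>
#include <unistd.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

#define NUM_THREADS 1000

static SSL_CTX *ctx;                   /* shared context, created once in main() */

static void *worker(void *arg)
{
    int fd = connect_tcp("server", 4433);   /* placeholder: plain TCP connect */
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);

    sleep(10);                          /* all threads hit SSL_connect together */
    if (SSL_connect(ssl) != 1)          /* this is where the crash shows up */
        ERR_print_errors_fp(stderr);

    SSL_free(ssl);
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;
    int i;

    SSL_library_init();
    SSL_load_error_strings();
    ctx = SSL_CTX_new(SSLv23_client_method());

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);   /* reduced stack; no difference */

    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, worker, NULL);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}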

Regards,
Prabhu. S

On 10/17/07, David Schwartz <[EMAIL PROTECTED]> wrote:
>
>
> > > This is really one of those "don't do that then" things.
> > > Thread-per-connection is well-known to break down at about 750
> > > connections.
>
> > Just curious at how the number 750 was calculated or deduced. And
> > is this a linux-specific limit?
>
> On Windows, it's usually more like 800 on older versions and 1,200 on
> newer versions. On Linux, it's usually around 700 if you don't monkey
> with the thread stack size and around 1,000 if you do.
>
> > Also, isn't this limit dependent on the number of available
> > CPUs/cores and system
> > memory?
>
> You would think so, but it doesn't seem to be. It depends upon exactly
> what's causing the limit and usually that's something architectural rather
> than something that scales.
>
> For example, with Linux, it's often address space. That won't be an
> issue on a 64-bit OS, but on a 32-bit OS, more cores or memory won't
> change it. On Windows, it's often architectural limits on how much
> memory can be locked for I/O or how many events can fit in the
> process' queue. Most likely any reasonable machine will already max
> those limits out, so more memory won't increase them.
>
> A user-space threading library might change the dynamics. But I wouldn't
> bother -- thread-per-connection is just wrong for too many reasons.
>
> DS
>
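
A back-of-envelope way to see where a figure like 750 can come from on
32-bit Linux (rough numbers, just for intuition): a process gets roughly
3 GB of user address space, and if each thread eats on the order of 4 MB
of it (stack reservation, guard pages, and other per-thread mappings), then

    3 GB / 4 MB per thread  ~=  768 threads

Shrinking the stacks pushes the ceiling toward 1,000 before some other
per-thread cost takes over.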
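
The alternative David is pointing at is an event-driven design: one thread
(or a small pool), non-blocking sockets, and a select()/poll() loop. Per
connection, the handshake gets driven roughly like this (only a sketch; a
real program would multiplex all the fds in one select() call instead of
waiting on a single socket):

#include <sys/select.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Drive a non-blocking SSL_connect() one step; returns 1 when the
 * handshake is finished, 0 to call again, -1 on a hard error.
 * Assumes fd is already O_NONBLOCK and attached via SSL_set_fd(). */
static int pump_handshake(SSL *ssl, int fd)
{
    fd_set fds;
    int rc = SSL_connect(ssl);

    if (rc == 1)
        return 1;                                /* handshake complete */

    FD_ZERO(&fds);
    FD_SET(fd, &fds);

    switch (SSL_get_error(ssl, rc)) {
    case SSL_ERROR_WANT_READ:
        select(fd + 1, &fds, NULL, NULL, NULL);  /* wait until readable */
        return 0;
    case SSL_ERROR_WANT_WRITE:
        select(fd + 1, NULL, &fds, NULL, NULL);  /* wait until writable */
        return 0;
    default:
        ERR_print_errors_fp(stderr);
        return -1;                               /* hard failure */
    }
}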
