Hmmmm... If it is taking your system 5 to 7 MINUTES to process 1000 connect
/ disconnect cycles, then something is very wrong.  

I would have to rerun my tests, but I am thinking that I was doing > 1K
connect / disconnects in about 10 to 15 seconds when running both server and
client on a single core P4.  Perhaps a little faster using several client
instances at the same time, although the performance maxed quickly on a
single core CPU.  I believe it was much faster on a 4 way Xeon machine I
tested it on. I can get more specific stats for you, if you want them.
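
For comparison, the kind of loop I am talking about is basically just this
(a rough Python sketch with plain sockets, not my actual test code; host,
port and cycle count are made up):

    import socket
    import time

    HOST, PORT, CYCLES = "127.0.0.1", 5000, 1000

    start = time.time()
    for _ in range(CYCLES):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((HOST, PORT))   # full TCP handshake to the listener
        s.close()                 # immediate disconnect
    print("%d connect/disconnect cycles in %.1f seconds"
          % (CYCLES, time.time() - start))

On a healthy stack that sort of loop should finish in seconds, not minutes.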

But, whatever my specific results were, 5 to 7 MINUTES is just WAY off.

1. What kind of machine is it?  Commodore 64?  TS-1000? TRS-80?  Just
kidding... ;)

2. Is your client class on the server initiating a bunch of additional
processing, like database lookups or something?

3. Do you have any problems with other apps on your system running slow?
Perhaps you have a bad driver or resource conflict with your NIC? 

Just some thoughts...


-----Original Message-----
Sent: Wednesday, November 28, 2007 4:30 PM
Subject: Re: [twsocket] TWSocketServer and backlog

    Thank you for your very informative response.  I
was performing some tests on my server application by
continually increasing the backlog value with some
mixed results, which seem to coincide with your
empirical analysis.

    I kept increasing the backlog value up until I
reached 1000, but to my surprise, I noticed that the
connections started failing after only about 230 of
the 1000 client requests.  These were the first 230
requests, so the backlog queue was still well below
its maximum.  I also thought I noticed that the
server was taking longer to respond, but didn't think
much of it at the time.

    However, after reading your post I decided to try
once again with a backlog of 5, adding a retry loop
for every failed connection.  As expected, the
connections started failing almost immediately after
the test started.  But much to my surprise, the
connections were handled more quickly -- sometimes
orders of magnitude faster than before!

    As a reference, using my localhost as both the
server and the client, with a test application
spawning 1000 clients to connect one right after the
other, retrying if they failed, it took about 5 to 7
minutes to process the entire lot with the larger
backlog, while it took only about 2 minutes with a
backlog of 5.  The test with a backlog limit of 5
retried many more times, of course, but when
connections were established, they were processed
faster.
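
For reference, the retry logic was essentially along these lines (sketched
here with plain Python sockets rather than my actual test app; the function
name and back-off delay are just illustrative):

    import socket
    import time

    def connect_with_retry(host, port, delay=0.01):
        # Keep retrying until the listener's backlog has room for us.
        while True:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.connect((host, port))
                return s              # connected; caller closes it later
            except ConnectionRefusedError:
                s.close()             # backlog full / refused: back off, retry
                time.sleep(delay)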

    Still, it seems to me that TWSocketServer is
taking too long to process incoming connections, as
many connections can be queued in the backlog while
it's instantiating the client and dupping the socket.
Any thoughts on this?
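
To illustrate what I mean in generic terms (a plain-socket Python sketch,
not actual ICS code): if the per-connection setup after each accept takes
any real time, new requests just sit in the backlog until the loop gets
back around to accept:

    import socket
    import time

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5000))
    srv.listen(5)                    # backlog of 5

    while True:
        conn, addr = srv.accept()    # pulls one pending connection off the backlog
        time.sleep(0.05)             # stand-in for per-client setup (creating
                                     # the client object, dupping the socket, etc.)
        conn.close()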


>------- Original Message -------
>From    : Hoby Smith [mailto:[EMAIL PROTECTED]]
>Sent    : 11/28/2007 5:31:09 PM
>Subject : RE: Re: [twsocket] TWSocketServer and backlog

FYI... I ran into an issue with some test code I
wrote a few months ago, which related to the backlog setting, as well as
the annoying issue with Winsock running out of local ports.  In my test, I
was attempting to see how many connections could be handled by a particular
process over a period of time.
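
For clarity, by the backlog setting I just mean the value that ends up as
the argument to the listen call; roughly, in a plain-socket Python sketch
(port number made up):

    import socket

    BACKLOG = 5   # the setting under discussion; this is what I kept raising

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 5000))
    srv.listen(BACKLOG)   # pending connections beyond this are refused by the stack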

I believe my results showed that increasing this value can have a very
negative effect on performance.  Basically, the issue is inherent in how
the TCP stack is implemented, not in how a particular app services the
stack.  I found that surpassing a particular connection rate threshold
would result in an exponential gain in processing time on the listening
stack.  Meaning, the TCP stack performance decreases dramatically as you
increase the number of pending connections, when the listening socket is
receiving a high rate of connection requests.  My assumption is that this
is due to the increased overhead in managing the backlog queue.  Given
this, I made two observations, which may be wrong, but made sense to me.

First, this is why the Winsock default is 5.  I imagine that the Winsock
stack implementation was designed with the perspective that if the backlog
is actually filling up enough to reach 5 or more, then something is wrong.
Probably, a couple more might be ok, but my results showed that as you
increased this value under heavy load, your connection rate became very
unpredictable, as well as unstable (lots of failed connects).  For the
TCP/IP stack to be effective, it must be responsive enough to handle the
low-level connection requests in a timely fashion.  If not, then you have
a major low-level servicing problem or the machine is seriously overloaded
with TCP requests.  In that case, you want to get connection errors rather
than an overloaded backlog scenario.

Second, increasing this value surely creates a greater DOS attack surface,
making you more vulnerable to bursts of socket open requests, and surely
would make the effects of such an attack even worse.  This might also be
why the Winsock default is 5.  However, since I personally don't think
there is really a practical solution to a well-designed DOS attack, this
might not really be relevant.  Nonetheless, it might be something you need
to consider.

So, given that, I personally don't recommend increasing the value.  If
your app can't service the stack with a backlog setting close to 5, then
your system is just overloaded or not responsive for some reason.

Anyway, that is what I determined from my testing results.  If anyone has
found otherwise, please feel free to correct me... :)
